SQLCAT: SQL Server 2016 ColumnStore Customer Scenarios and Best Practices (SQL PASS Summit 2015)

Improve loading a ColumnStore index in SQL Server 2016 CTP3 by following these guidelines:

  • Load into staging tables per partition using WITH (TABLOCK) and switch those tables in afterwards (a T-SQL sketch follows this list)
  • Separate your INSERT and UPDATE statements
  • For the initial load create an empty table with a Clustered ColumnStore Index on it (instead of loading a heap and creating the index afterwards)
  • Include the partitioning column in the unique index/primary key (as you would with partitioned heaps)
  • Create statistics immediately after the initial data load (auto stats improvements are on their way, but not implemented yet)
  • Utilize the new possibility to combine Clustered ColumnStore Indexes with b-tree Non-Clustered Indexes where appropriate
  • A b-tree index on a Clustered ColumnStore Index slows down the load (therefore create the non-clustered b-tree after the load; CTP3 allows for parallel non-clustered index creation)
  • REBUILD of a ColumnStore is not an online operation, therefore apply it on partition level only
  • Try trace flag 9481 to force the old cardinality estimator for performance outliers
  • SQL Server Integration Services can lead to trimmed row groups
  • Enable AutoAdjustBufferSize in SQL Server Integration Services to avoid small uncompressed delta stores
  • Use compatibility level 130 when testing ColumnStore
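
A minimal T-SQL sketch of the staging-table pattern above (all object names such as FactSales, stage_FactSales, ps_Sales, and src_Sales, as well as the partition number, are illustrative; the partition scheme is assumed to exist):

  -- Target table, created empty with the Clustered ColumnStore Index
  CREATE TABLE dbo.FactSales
  (
      SaleDate  DATE  NOT NULL,
      ProductId INT   NOT NULL,
      Amount    MONEY NOT NULL,
      INDEX cci_FactSales CLUSTERED COLUMNSTORE
  )
  ON ps_Sales (SaleDate);

  -- Staging table for one partition: same structure, same ColumnStore,
  -- plus a check constraint matching the partition boundaries
  CREATE TABLE dbo.stage_FactSales
  (
      SaleDate  DATE  NOT NULL,
      ProductId INT   NOT NULL,
      Amount    MONEY NOT NULL,
      INDEX cci_stage CLUSTERED COLUMNSTORE,
      CONSTRAINT ck_stage_201501
          CHECK (SaleDate >= '20150101' AND SaleDate < '20150201')
  );

  -- Bulk load the staging table with TABLOCK, then switch it in
  INSERT INTO dbo.stage_FactSales WITH (TABLOCK)
  SELECT SaleDate, ProductId, Amount
  FROM   dbo.src_Sales
  WHERE  SaleDate >= '20150101' AND SaleDate < '20150201';

  ALTER TABLE dbo.stage_FactSales
      SWITCH TO dbo.FactSales PARTITION 2;

  -- REBUILD is not an online operation, therefore only on partition level
  ALTER INDEX cci_FactSales ON dbo.FactSales REBUILD PARTITION = 2;

  -- Create statistics right after the initial load
  CREATE STATISTICS st_FactSales_ProductId ON dbo.FactSales (ProductId);

  -- Force the old cardinality estimator for individual outlier queries
  SELECT ProductId, SUM(Amount) AS TotalAmount
  FROM   dbo.FactSales
  GROUP  BY ProductId
  OPTION (QUERYTRACEON 9481);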

SQL PASS Summit 2015 Keynote #1 (Wednesday)

The 2015 PASS Summit is the 16th annual meeting of data platform professionals, gathering 5,500 total registrations.

PASS President Tom LaRock reminded everybody to welcome #sqlfamily members with a #sqlhug.

Keynote speaker was Joseph Sirosh, Corporate Vice President, Data Group.

Analog data will disappear, and digital data has become the majority; in short, cloud/internet-connected data will be the majority. We moved from an age of hardware to an age of software and are heading towards an age of data (online recommendations, customer experience, …). Microsoft's tool for this is the “Cortana Analytics Suite”.

“We are all big data”: each of us carries about 2 GB of genomic data.

Eric Fleischman, Chief Architect and VP Platform Engineering, DocuSign, explained how they cope with the increasing growth of the company and the data coming with it. They decided against open source, as they want to “use” a database system and not “write” a database system.

With an average of two documents signed every second, their system generates 180 million events a day.

Engines of data: mission critical OLTP, high-performance DW, end-to-end mobile BI, with advanced analytics on top of it.

Most vendors build their systems (in slow cycles) and ship them to the cloud afterwards. Microsoft is the only company that builds everything for the cloud first and ships it on-prem later. Gartner rated Microsoft as a leader in both completeness of vision and ability to execute.

Shawn Bice, General Manager, Database Systems Group, told us that companies who are able to embrace their data are far more successful. Big bets:

  • everything is built in (no separate add-ins needed)
  • Mission critical OLTP
  • Most secure database: least vulnerable database for the past 6 years in a row
  • Highest performing data warehouse: won against the other vendors
  • End-to-end Mobile BI on any device at a fraction of the cost: USD 120 per user for self-service BI, compared to Tableau (USD 480) or Oracle (USD 2,230)
  • In-database Advanced Analytics: R plus in-memory, directly in the platform
  • in-memory across all workloads
  • consistent experience across cloud and on-prem

Learnings from the experiences with Azure went into SQL 2016 on-prem.

Polybase removes the complexity of big data by enabling T-SQL over Hadoop through “external tables” within SQL Server. JSON support will also help a lot of projects.
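
As a rough illustration of what an external table looks like (the Hadoop location, file layout, and all names are made up; PolyBase has to be installed and configured first):

  -- Assumed Hadoop cluster location and CSV file layout
  CREATE EXTERNAL DATA SOURCE HadoopDS
  WITH (TYPE = HADOOP, LOCATION = 'hdfs://mynamenode:8020');

  CREATE EXTERNAL FILE FORMAT CsvFormat
  WITH (FORMAT_TYPE = DELIMITEDTEXT,
        FORMAT_OPTIONS (FIELD_TERMINATOR = ','));

  CREATE EXTERNAL TABLE dbo.WebClicks
  (
      ClickTime DATETIME2     NOT NULL,
      Url       NVARCHAR(400) NOT NULL,
      UserId    INT           NOT NULL
  )
  WITH (LOCATION = '/data/clicks/',
        DATA_SOURCE = HadoopDS,
        FILE_FORMAT = CsvFormat);

  -- Plain T-SQL over the Hadoop data, joinable with local tables
  SELECT TOP (10) * FROM dbo.WebClicks;

JSON support works through functions like OPENJSON and the FOR JSON clause, for example:

  -- Shred a JSON string into a relational rowset
  SELECT *
  FROM OPENJSON(N'[{"id":1,"name":"Widget"}]')
       WITH (id INT, name NVARCHAR(50));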

Real-time means learning and adjusting as things are happening. ColumnStore Indexes on top of in-memory (Hekaton) tables will enable this. Combined with embedded R Services, the data is accessible to data scientists without moving it; it is analysed where it already is.
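
A minimal sketch of such a combination (table and column names are made up; the database needs a memory-optimized filegroup):

  -- Memory-optimized OLTP table with a Clustered ColumnStore Index on top
  CREATE TABLE dbo.Trades
  (
      TradeId   INT IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED,
      Symbol    NVARCHAR(10) NOT NULL,
      Quantity  INT          NOT NULL,
      Price     MONEY        NOT NULL,
      TradeTime DATETIME2    NOT NULL,
      INDEX cci_Trades CLUSTERED COLUMNSTORE
  )
  WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);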

Rohan Kumar, Partner Director, Engineering, gave a showcase with customer “p:cubed” on a machine with an impressive 480 logical processors: monitoring a huge amount of transactions and calculating customer rewards, both in real time.

Non-Clustered (ColumnStore) Indexes will be updateable with SQL Server 2016. On top of an in-memory table, such an index will not sacrifice the performance of the OLTP table.
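
For a disk-based OLTP table, the updateable Non-Clustered ColumnStore looks roughly like this (table and column names are illustrative):

  -- Updateable non-clustered ColumnStore for operational analytics
  CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_Orders
      ON dbo.Orders (OrderDate, CustomerId, Amount);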

Advanced Analytics makes it possible to include R scripts within T-SQL code without moving data outside of the data platform, thereby enabling real-time analytics. As the OLTP data lives in memory, the data does not even touch the disk from the time it is tracked until it is analyzed.
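
As an illustration (the table and column are made up; R Services has to be installed and the 'external scripts enabled' option switched on, which may require a restart):

  EXEC sp_configure 'external scripts enabled', 1;
  RECONFIGURE;

  -- Run an R script against a T-SQL query, without exporting the data
  EXEC sp_execute_external_script
      @language = N'R',
      @script = N'OutputDataSet <- data.frame(avg_amount = mean(InputDataSet$Amount))',
      @input_data_1 = N'SELECT Amount FROM dbo.Orders'
  WITH RESULT SETS ((avg_amount FLOAT));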

“Always Encrypted” guarantees that the encryption key and the decrypted text are available on the client only. The content is never stored in a decrypted form on the server (neither on disk, nor in the buffer pool memory).
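
A sketch of what an encrypted column definition looks like (the column encryption key CEK_Auto1 is assumed to already exist and to be protected by a column master key that lives on the client; table and columns are illustrative):

  CREATE TABLE dbo.Patients
  (
      PatientId INT IDENTITY PRIMARY KEY,
      SSN       CHAR(11) COLLATE Latin1_General_BIN2
                ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK_Auto1,
                                ENCRYPTION_TYPE = DETERMINISTIC,
                                ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256')
                NOT NULL
  );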

Stretch Database allows combining hot and cold data in one logical table, but actually moves the cold data out to cheaper storage instead of leaving it on the expensive storage designed for hot data. It is still queryable in the usual way, as the technology is completely transparent to the client. And it works together with “Always Encrypted”.

The stretched part will show up as a “Remote Query” in the execution plan.
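
The exact syntax has been moving between the CTPs, but roughly it looks like this (server, credential, database, and table names are illustrative; the credential is assumed to exist):

  -- Enable the feature on the instance
  EXEC sp_configure 'remote data archive', 1;
  RECONFIGURE;

  -- Link the database to an Azure server
  ALTER DATABASE SalesDB
      SET REMOTE_DATA_ARCHIVE = ON
          (SERVER = N'mystretchserver.database.windows.net',
           CREDENTIAL = [stretch_credential]);

  -- Stretch one table: cold rows are migrated to Azure
  ALTER TABLE dbo.OrderHistory
      SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));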

Joseph Sirosh: “Our industry does not respect tradition, but does respect innovation.”

Cloud helps to transform hardware to software, software to services and data to intelligence.

Sincerely yours,
Markus Ehrenmüller-Jensen


Business Intelligence Power Hour at BASTA

After not having spoken at BASTA! in 2014, I was back at BASTA! 2015, speaking about Power BI.

I talked about the following topics, explaining each of them with just one slide and spending most of the time on demos:

  • Power Query
  • Power Pivot
  • Power View
  • Power Map
  • PowerBI.com
  • Power Q&A
  • Power BI App
  • Power BI Desktop

You can download my slide deck here.

Sincerely,
Markus Ehrenmüller-Jensen
@MEhrenmueller & @SQLederhose
markus.ehrenmueller@gmail.com


My Favorite Candy Bar (Chart) at PASS Summit

I am really proud that my abstract has been accepted by the abstract review team for PASS Summit 2015 in Seattle, so I will speak at the PASS Summit for the 3rd time. I appreciate not only attending THE worldwide conference dedicated to SQL Server, but also being part of a conference I have been a huge fan of since I attended for the first time in 2011.

Like last year, I will again have “10 minutes of fame” 🙂 during a Lightning Talk session (a 75-minute General Session slot dedicated to data visualisation, shared with 5 other speakers). This year I will talk about “My Favorite Candy Bar (Chart)”, which will be a sequel to last year's session titled “My Favorite Pie (Chart)”.

Last year I took a sample Pie Chart which showed values for 8 different products. Even when I told the audience that 7 values were identical but one had a bigger value, nobody could identify the product with the biggest value (which was 10% (!) bigger than the others). I think that showed very clearly that, in this case, a Pie Chart was the wrong type of visualisation.
I ended up converting the Pie Chart into a column chart, which made the difference very clear.

I won't reveal all the details of this year's session, but I can tell you this much:
This time I will show a Bar Chart which will improve hugely by converting it into a Pie Chart – no, I am not kidding.
If you are curious about how I could have changed my mind since last year, just come to my session! 🙂

Sincerely,
Markus Ehrenmüller-Jensen
@MEhrenmueller & @SQLederhose


Introduction to SQL Azure Databases at SQLdays

This week I gave two talks at SQLdays in Erding, Germany. One of them was an “Introduction to SQL Azure Databases”. As it was an introductory level 200 talk, I had a lot of slides to explain the concepts and the use cases for and against cloud computing, but I also prepared some quick demos (a T-SQL sketch follows the list below) to show how to

  • subscribe to Azure,
  • create a server,
  • create a database, and, last but not least,
  • manage a database
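
A quick sketch of the “create a database” step in T-SQL, executed while connected to the logical server's master database (database name, edition, and service objective are just examples; the server itself is created through the portal or PowerShell):

  -- Create an Azure SQL Database on an existing logical server
  CREATE DATABASE DemoDb
  (
      EDITION = 'Standard',
      SERVICE_OBJECTIVE = 'S0'
  );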

The aim was that the attendees could get an impression of how putting a database into Microsoft's cloud offering would “feel”.

You can find my slide deck here.

Unfortunately, during my demos I discovered that I could not find the link to “Azure Online SQL Database Management”, which I was sure had been there the other day when I rehearsed my demos.

After some proper research I found the explanation:
When I upgraded my server from V11 to V12 (to get the new (preview) features like Auditing, Dynamic Data Masking, Transparent Data Encryption, …), I unintentionally disabled the SQL Database Management feature, which is deprecated and not available for databases on V12.

Personally, I feel really bad about the fact that SQL Database Management is abandoned in the new version. On the one hand I can understand that resources, even in a big company like Microsoft, are not unlimited; but on the other hand, being able to manage your database objects through the browser without any need to install tools locally was really helpful. From V12 on, we have to stick to tools installed on-premises on the client to maintain tables, views, indexes, and the like.

Sincerely,
Markus Ehrenmüller-Jensen
@MEhrenmueller & @SQLederhose
