24HOP: Summit Preview 2016


I have the pleasure and honor not only of talking about one of my favorite topics at the PASS Summit 2016 in Seattle, but I was also invited to give a sneak preview of this talk at the “24 Hours of PASS – Preview Edition“. As you can guess from the name of the event, it features 24 consecutive one-hour webinars previewing upcoming PASS Summit 2016 sessions.


The title of my talk is “My Favorite Pie (Chart) – Simple Rules for Clear Visualizations“. As I will make clear very early in my talk: I have plenty of favorite pies – but I do not really like any sort of pie chart. And because it is not enough to just criticize things you do not like and point out the disadvantages of – in this case – pie charts, I came up with easy-to-remember and easy-to-follow rules to improve visualizations in general. These rules will help you find out in which (rare) cases a pie chart would be the visualization of choice, and in which cases another type of visualization (and which one) would be the better choice:

1. Use the proper chart type

2. Display as little information as possible, as much as necessary

3. Encode accurately

4. Highlight important things

5. Calculate measures

Use the proper chart type

Many BI tools on the market offer plenty of chart types. Unfortunately, most of these tools are not very good at making useful recommendations about which chart to use for the data one is analyzing. On the other hand, in my experience the “gut feeling” of many users does not help very much in choosing the best visualization either.

Therefore we will discuss the most common chart types – including “ordinary” tables, which sometimes have a bad reputation despite their usefulness in many cases.

Display as little information as possible, as much as necessary

The idea of visualizing data in the form of tables or charts is to give insights to the report users. For this reason we should only show information on the screen that is necessary to achieve this goal. Unfortunately, many of the available tools are not very good at coming up with useful defaults when creating a table or a chart.

Therefore we will discuss why we should reduce the “ink factor” and watch out for proper scaling.
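To make the scaling point concrete, here is a small back-of-the-envelope sketch in Python (my own illustrative numbers, not from the talk): two values that differ by only about 5% appear wildly different once the value axis no longer starts at zero.

```python
# Hypothetical illustration: how a truncated value axis exaggerates differences.
# Two values differing by roughly 5%:
values = [95.0, 100.0]

def apparent_ratio(values, axis_start):
    """Ratio of the drawn bar heights when the axis starts at axis_start."""
    heights = [v - axis_start for v in values]
    return max(heights) / min(heights)

# Axis starting at zero: the bars honestly reflect the ~5% difference.
print(apparent_ratio(values, axis_start=0))   # ~1.05

# Axis starting at 90: the second bar is drawn twice as tall as the first.
print(apparent_ratio(values, axis_start=90))  # 2.0
```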

Encode accurately

Again: the idea of visualizing data is to give insights to the report users. These insights should be gained intuitively – which cannot happen if the most important things are not shown in the most prominent way on the screen.

Therefore we will discuss sorting and the proper use of colors.

Highlight important things

We can help report users tremendously when we not only show data, but also give a hint about how good or bad a number actually is for them or for the organization.

Therefore we will discuss how we can highlight those numbers where the user should be alarmed and take action.

Calculate measures

I can’t remember how often I have seen people in offices, sitting in front of their high-end PCs and laptops, grabbing an ordinary pocket calculator to compute a sum, a difference, or a more or less complicated performance indicator from the numbers on their screen. But I can remember that in every single case I was stunned and watched with my mouth open.

Therefore we will take some time to discuss how important it is to add calculated values to your visualizations.
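As a sketch of what I mean (a Python illustration with made-up products and numbers, not an example from the talk): the report itself can carry the sums, differences, and indicators the users would otherwise punch into their pocket calculators.

```python
# Hypothetical report rows: (product, revenue, cost) with made-up numbers.
rows = [
    ("Apple Pie",  1200.0,  900.0),
    ("Cherry Pie",  800.0,  700.0),
    ("Pecan Pie",  1500.0, 1050.0),
]

# A grand total the user would otherwise add up by hand:
total_revenue = sum(revenue for _, revenue, _ in rows)

# Derived measures baked into the report instead of left to the calculator:
report = [
    {
        "product": product,
        "profit": revenue - cost,                                  # simple difference
        "margin_pct": round((revenue - cost) / revenue * 100, 1),  # performance indicator
        "share_pct": round(revenue / total_revenue * 100, 1),      # share of the total
    }
    for product, revenue, cost in rows
]

for line in report:
    print(line)
```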

Call to Action

The talk will be online and seats are limited – so save the date (September 7th or 8th, depending on your time zone), be sure to register for “24 Hours of PASS – Preview Edition” as soon as possible, and watch out for the Twitter tags #pass24hop and #sqlpass.

I am looking forward to having you in my talk!


Markus Ehrenmüller-Jensen




Boosting DWH Performance with SQL Server 2016 at SQL Saturday Sofia 2015

I was very glad to return to Bulgaria within a couple of months to attend SQL Saturday Sofia 2015, being accepted for two talks. One of them covered how to use the improvements to the ColumnStore technology in SQL Server 2016 to boost the performance of your DWH.

I was very excited when the xVelocity technology was introduced into both the relational database of SQL Server (as the Non-Clustered ColumnStore Index) and Analysis Services (as the Tabular model). As I was already stunned by the features available with the Power Pivot add-in (available since Excel 2010), this was a huge step: integrating a whole different way of storing data directly into the heart of the database. (As opposed to other vendors, which came up with vertical storage as well, but as a separate tool rather than integrated into their core products.)

Unfortunately, SQL Server 2012 had a lot of restrictions for the ColumnStore. The biggest of them were that the index was available as a Non-Clustered Index only, and that the table became read-only.
SQL Server 2014 came with fewer restrictions and introduced a Clustered ColumnStore Index, which is updateable – but it left tables with a Non-Clustered ColumnStore Index read-only.

For SQL Server 2016 we expect a whole bunch of improvements:

  • Fewer restrictions and performance improvements for Non-Clustered & Clustered ColumnStore Indexes
  • An updateable Non-Clustered ColumnStore Index
  • Creating Non-Clustered B-tree indexes on tables with a Clustered ColumnStore Index
  • Creating ColumnStore indexes on In-Memory OLTP tables

You can find my slide deck here.

Markus Ehrenmüller-Jensen
@MEhrenmueller & @SQLederhose


ColumnStore Index Best Practice at SQLdays

Last week I gave two talks at SQLdays in Erding, Germany. One of them was about “ColumnStore Index Best Practice”. I set the frame with a three-part agenda:

  • ColumnStore
  • Non-Clustered ColumnStore Index
  • Clustered ColumnStore Index

It is important to understand the basic concepts of a ColumnStore, as opposed to the way data in a relational database was stored over the last decades. The latter is now called RowStore; it didn’t have a special name until the ColumnStore was introduced, because it was the one and only way to store data.

After getting behind the “magic” of a ColumnStore it is much easier to evaluate the use cases where it is appropriate, and what to consider when maintaining this special kind of “index”. (Even the definition of the word “index” gets a new point of view, as rows in a ColumnStore are not sorted – in contrast to a B-tree index.)
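To illustrate the concept, here is a toy Python sketch of the idea (my own illustration, not SQL Server's actual internals): in a RowStore the values of one row sit together, in a ColumnStore the values of one column do – which is exactly what makes compression techniques such as run-length encoding so effective on repetitive columns.

```python
# Toy illustration of RowStore vs. ColumnStore layout (made-up data).
rows = [
    ("AT", "Bike", 10),
    ("AT", "Bike", 12),
    ("AT", "Car",  80),
    ("DE", "Car",  75),
]

# RowStore: the values of one row sit together;
# scanning a single column still touches every row.
row_store = rows

# ColumnStore: the values of one column sit together ...
column_store = {
    "country": [r[0] for r in rows],
    "product": [r[1] for r in rows],
    "amount":  [r[2] for r in rows],
}

# ... which makes run-length encoding very effective on repetitive columns:
def run_length_encode(values):
    encoded = []
    for v in values:
        if encoded and encoded[-1][0] == v:
            encoded[-1] = (v, encoded[-1][1] + 1)
        else:
            encoded.append((v, 1))
    return encoded

print(run_length_encode(column_store["country"]))  # [('AT', 3), ('DE', 1)]

# An aggregate like SUM(amount) only needs to read one compact column:
print(sum(column_store["amount"]))  # 177
```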

The Non-Clustered ColumnStore Index was introduced in SQL Server 2012 and came with a whole list of limitations. One of the most discussed limitations was that the table on which the Non-Clustered ColumnStore Index was created becomes read-only.

The Clustered ColumnStore Index was introduced in SQL Server 2014 and came with better performance and a shorter list of limitations. One of the limitations we got rid of: a table with a Clustered ColumnStore Index remains updateable. (But in SQL Server 2014, tables with a Non-Clustered ColumnStore Index are still read-only.)

You can find my slide deck here.

Markus Ehrenmüller-Jensen
@MEhrenmueller & @SQLederhose


Microsoft’s Business Intelligence Roadmap

Everybody expected Microsoft to make great announcements during PASS Summit 2015 concerning their data platform. Even though there had already been a lot of announcements, previews, and updates for the Business Intelligence suite in the weeks and months prior to the summit, the content of the keynote and of Microsoft’s sessions during the conference exceeded my expectations in a very positive way.

The Box is Back

“Cloud first” changed Microsoft’s release policy: new features are developed and deployed first to the cloud and delivered for on-prem (“box”) in later releases. This led to a time gap and annoyed customers who would not use the cloud for one reason or another. With the features shown in various sessions at PASS Summit 2015, with the features available in CTP3, and with the announcements for SQL Server 2016 RTM, you can clearly see: the box is back! New features are delivered every month – new Power BI visualisations (see below) will even be created once a week.


When I think back to the sort-of roadmap announced in 2009, I don’t get any positive feelings. The promise that year was to consolidate the tools down to only three: Excel and SharePoint as the front end and SQL Server as the backbone. But instead of consolidating the Business Intelligence stack, which already consisted of a bunch of tools (Microsoft Office Excel, SharePoint Excel Services, SQL Server PowerPivot for Excel, SQL Server PowerPivot for SharePoint, SQL Server Reporting Services in native mode, SQL Server Reporting Services in SharePoint mode, ProClarity *sigh*, SharePoint PerformancePoint Services, SharePoint Power View), new tools were introduced and/or acquired: new add-ins for Microsoft Office Excel (Power View, Power Map, Power Query), two new cloud experiences (Office 365 and PowerBI.com), Power BI Desktop, and DataZen. That’s not what I call “consolidating”.

Things changed for me at PASS Summit 2015, for the better. After years we now have a very clear roadmap for the Business Intelligence tools. Each of the existing tools gets its own dedicated place, complementing the others:

  • Power BI Desktop for interactive reports
  • Excel for spreadsheets
  • Report Builder for pixel-perfect paginated reports
  • DataZen for mobile reports

SQL Server Reporting Services is the world’s most successful tool for building paginated and operational reports. It will be fully integrated with Power BI by the end of 2015 in a hybrid way. Reporting Services will duplicate the features of off-prem Power BI for on-prem use. Excel still has its important place within the BI stack and will fully integrate with Power BI/Reporting Services. And DataZen will be integrated into Power BI and Reporting Services to enhance the mobile experience.

The following features are not available in CTP3, but are announced for “very soon” (which might mean by the end of 2015):

  • Pin entire Power BI reports to a dashboard (including filters)
  • Upload & embed Excel workbooks as a report into Power BI and pin parts of the workbook to a dashboard
  • Publish Power BI Desktop reports to an on-prem Reporting Server
  • Publish DataZen reports to an on-prem Reporting Server

End-user BI

Here it is, the new buzzword! After enabling Corporate BI and making Self-Service BI possible, we now speak of “End-user BI”. This means that we make data available directly to the end user. Microsoft’s tool for this is, of course, Power BI.

By the end of October 2015, already 90,000 organizations in over 180 countries had subscribed to PowerBI.com – an impressive success story.

Get Insights

As none of us digs through data for its own sake – we want to gain insights (see my lightning talk “My Favorite Candy Bar (Chart)” at PASS Summit 2015) – a new feature in Power BI is very welcome: “Get Insights”. It will analyze a data set to detect correlations, outliers, and low and high values you might not have discovered on your own.

Bring Your Own Device (BYOD)

Through the tools acquired with DataZen, Business Intelligence goes mobile on a wide range of devices. Windows (Phone) is no longer the only supported platform, but one among several: Mac OS, iOS & Android.

Power BI Enterprise Gateway

The Power BI Enterprise Gateway enables Power BI, which is a cloud service, to connect to your on-prem data. Combined with a live data source, you can analyze your on-prem data without moving it out to the cloud – which helps you get up-to-date analyses or even real-time reports.

Real-time Cubes

Having the possibility to analyze data immediately after it is generated is a nice idea. Unfortunately, typical DWH scenarios are far away from this: data from different sources is first extracted into a staging area, transformed, and then loaded into a relational data warehouse. This process usually runs overnight or on weekends. Because relational tables can be inconvenient to query for the average end user (for performance reasons, for the usability of table and column names, or both), many companies build a cube with SQL Server Analysis Services on top of that, which adds another step (and leads to even more time lag).

Fortunately, two different features help out here: ColumnStore indexes on relational tables usually speed up queries on those tables, and SQL Server Analysis Services is capable of both real-time online analytical processing (ROLAP in multi-dimensional models) and DirectQuery (in tabular models). The performance of those queries will be improved in SQL Server 2016; in previous versions it was not very useful (due to inefficiently generated SQL statements and the lack of any caching).

DAX improvements

The editor will get IntelliSense, syntax highlighting, support for editing across multiple lines and indenting text, and will allow comments. The language will be enhanced with 50 new functions and will allow the use of variables. New types of relationships will be allowed (e.g. changing a 1:m to an m:1 relationship), including bidirectional filtering through multiple tables.

Microsoft goes Open Source with Power BI Visualisations

Microsoft announced in October that it would open up the visualisation stack for “custom visualisations” and actually ran a competition to submit new visualisations for Power BI. Those new types of visualisations integrate perfectly with the existing ones and with each other, so you can e.g. cross-filter as you would expect. All the visualisations are available in both Power BI Desktop and the Power BI web site (visit visuals.powerbi.com).

Reporting Services

The last big investments in Reporting Services were in SQL Server 2008 R2, which was a couple of years ago. With SQL Server 2016, Reporting Services is back. Report Manager will be totally reworked as a responsive web page based on HTML5. The first looks I got at PASS Summit 2015 were very promising.


Yours sincerely,
Markus Ehrenmueller


Business Intelligence Power Hour at BASTA

After not having spoken at BASTA! in 2014, I was back at BASTA! 2015, speaking about Power BI.

I talked about the following topics, explaining each of them with just one slide and spending most of the time on demos:

  • Power Query
  • Power Pivot
  • Power View
  • Power Map
  • PowerBI.com
  • Power Q&A
  • Power BI App
  • Power BI Desktop

You can download my slide deck here.

Markus Ehrenmüller-Jensen
@MEhrenmueller & @SQLederhose


My Favorite Candy Bar (Chart) at PASS Summit

I am really proud that my abstract was accepted by the abstract review team for PASS Summit 2015 in Seattle, so I will speak for the 3rd time at the PASS Summit. I appreciate that I am not only attending THE conference dedicated to SQL Server worldwide, but am also able to be part of a conference I have been a huge fan of since I attended for the first time in 2011.

Like last year, I will again have my “10 minutes of fame” 🙂 during a Lightning Talk session (a 75-minute General Session slot dedicated to data visualisation, shared with 5 other speakers). This year I will talk about “My Favorite Candy Bar (Chart)“, a sequel to last year’s session titled “My Favorite Pie (Chart)”.

Last year I showed a sample pie chart with values for 8 different products. Even after I told the audience that 7 values were identical but one had a bigger value, nobody could identify the product with the biggest value (which was 10% (!) bigger than the others). I think this showed very clearly that in this case a pie chart was the wrong type of visualisation.
I ended up converting the pie chart into a column chart, which made the difference very clear.
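The effect is easy to quantify; here is a back-of-the-envelope Python sketch (using hypothetical concrete numbers matching the description above): the 10% difference translates into only a few degrees of wedge angle, while a column chart shows it directly as a 10% taller bar.

```python
# Hypothetical values matching the scenario: 7 identical values plus one 10% bigger.
values = [10.0] * 7 + [11.0]

total = sum(values)
angles = [v / total * 360 for v in values]  # pie wedge angles in degrees

# The "big" wedge differs from the others by only a few degrees out of 360 ...
print(round(angles[-1] - angles[0], 2))  # 4.44

# ... while a column chart encodes the same difference as a 10% taller bar:
print(round((values[-1] / values[0] - 1) * 100, 1))  # 10.0
```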

I won’t reveal all the details of this year’s session, but I can tell you this much:
This time I will show a bar chart, which will improve hugely by converting it into a pie chart – no, I am not kidding.
If you are curious how I could change my mind since last year, just come to my session! 🙂

Markus Ehrenmüller-Jensen
@MEhrenmueller & @SQLederhose


Introduction to SQL Azure Databases at SQLdays

This week I gave two talks at SQLdays in Erding, Germany. One of them was an “Introduction to SQL Azure Databases”. As it was an introductory level 200 talk, I had a lot of slides to explain the concepts and the pros & cons of cloud computing, but I also prepared some quick demos to show how to

  • subscribe to Azure,
  • create a server,
  • create a database, and, last but not least,
  • manage a database

The aim was that the attendees could get an impression of how putting a database into Microsoft’s cloud offering would “feel”.

You can find my slide deck here.

Unfortunately, during my demos I had to discover that I could not find the link to “Azure Online SQL Database Management”, which I was sure had been there the other day when I rehearsed my demos.

After doing proper research I found the explanation:
When I upgraded my server from V11 to V12 (to get the new (preview) features like Auditing, Dynamic Data Masking, Transparent Data Encryption, …), I unintentionally disabled SQL Database Management, which is a deprecated feature and not available for databases on V12.

Personally, I feel really bad that SQL Database Management is abandoned in the new version. On the one hand, I understand that resources, even in a big company like Microsoft, are not unlimited; on the other hand, being able to manage your database objects through the browser without having to install any tools locally was really helpful. From V12 on, we have to stick to tools installed on-premises on the client to maintain tables, views, indexes, and the like.

Markus Ehrenmüller-Jensen
@MEhrenmueller & @SQLederhose
