Using Excel & PowerShell to organize and deploy DAX measures

Organizing and keeping track of your DAX measures in a tabular model can be a challenging task, especially when working with larger Analysis Services projects. Visual Studio does not give you much help in organizing them and it is easy to end up with a messy model where code is duplicated among measures.

Excel and PowerShell to the rescue!

An alternative to keeping your DAX measures within your Analysis Services project is to store them in one or more well-structured Excel documents instead. Such an Excel document could be structured in the following way:


The following metadata columns are used to describe each DAX measure:

  • MeasureName – the name of the measure.
  • DAXFormula – the full DAX expression that is used to calculate the measure.
  • Format – the formatting formula that should be applied to the measure (if any).
  • VirtualTable – the name of the table (a virtual table that should not exist in the model) that the measure should be placed in. Note that all measures are placed in virtual tables, not physical ones already existing in the model.
  • Folder – the name of the display folder (if any) within the virtual table.
  • Hidden – a flag indicating whether the measure should be hidden or not.
  • DAXTemplate – a place to create reusable DAX expression templates that can be applied to the measure. More on this later.
  • Description – a user-friendly description of the measure (used as the description property in the model).
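To make the mapping concrete: a single row in the sheet translates into one measure object in a TMSL script. As a hypothetical example (the measure name, expression, and other values below are made up, but the property names are standard TMSL/Tabular Object Model properties), one row might become:

```json
{
  "name": "Total Sales",
  "expression": "SUM ( FactSales[SalesAmount] )",
  "formatString": "#,0.00",
  "displayFolder": "Sales",
  "isHidden": false,
  "description": "Total sales amount in local currency."
}
```

The VirtualTable column is not a property of the measure itself; it determines which table object the measure is placed under in the generated script.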

By using this approach, instead of placing the measures directly in the model, you will get a much better overview and organization of your DAX measures. The only drawback is that you will not get DAX IntelliSense in Excel (you will probably write the initial DAX in Power BI and copy/paste it into the Excel document).

So, now that you have organized your measures – how do you deploy them to your model?

This is where PowerShell comes into play.

The basic approach for the PowerShell script that deploys the contents of the Excel file(s) is:

  1. Read the measure metadata from all XLSX files in the directory.
  2. Create a TMSL script from the contents read in (1).
  3. Deploy the TMSL script to the Analysis Services instance (using a delete/create approach).
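The three steps above can be sketched roughly as follows. This is not the actual Excel2TMSL.ps1 code, just a simplified illustration assuming the ImportExcel module for reading XLSX files and the SqlServer module's Invoke-ASCmd for deployment; the directory, database, and server names are placeholders:

```powershell
# Prerequisites (assumed): Install-Module ImportExcel, SqlServer

# 1. Read the measure metadata from all XLSX files in the directory.
$rows = Get-ChildItem -Path '.\Measures' -Filter '*.xlsx' |
        ForEach-Object { Import-Excel -Path $_.FullName }

# 2. Build a TMSL script: for each row, delete the old measure and create it anew.
$commands = foreach ($row in $rows) {
    @{ delete = @{ object = @{
          database = 'MyTabularDB'
          table    = $row.VirtualTable
          measure  = $row.MeasureName } } }
    @{ create = @{
          parentObject = @{ database = 'MyTabularDB'; table = $row.VirtualTable }
          measure = @{
              name          = $row.MeasureName
              expression    = $row.DAXFormula
              formatString  = $row.Format
              displayFolder = $row.Folder
              isHidden      = ($row.Hidden -eq 'TRUE')  # adjust to how Hidden is encoded
              description   = $row.Description } } }
}
$tmsl = @{ sequence = @{ operations = @($commands) } } | ConvertTo-Json -Depth 10

# 3. Deploy the TMSL script to the Analysis Services instance.
Invoke-ASCmd -Server 'localhost\TABULAR' -Query $tmsl
```

A production script would also need to handle the case where the measure does not yet exist (the delete command would fail), which is why the real solution keeps this logic in dedicated helper functions.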

What about the DAXTemplate property?

Since we are using PowerShell, we can apply any type of logic to the DAX expressions; the DAXTemplate property is one example. It can be used to apply a reusable piece of DAX code to your measures. Each DAXTemplate corresponds to a PowerShell function that contains code to manipulate the DAX expression. Here is an example (just for demonstration purposes):


The above DAXTemplate is called TimeRange and is used to make it possible to toggle between YTD and LastMonth for any measure in the model, simply by adding TimeRange as DAXTemplate in the Excel sheet. This approach significantly reduces the amount of duplicated code.
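A template function of this kind could look something like the sketch below. This is a hypothetical illustration, not the actual code from Excel2TMSL.include.ps1; it assumes a disconnected slicer table named TimeRange and a 'Date' table in the model:

```powershell
# Hypothetical DAXTemplate function: wraps a measure's original expression
# so the result toggles between YTD and LastMonth, driven by a slicer
# on an assumed TimeRange[Range] column.
function Apply-TimeRangeTemplate {
    param([string]$Expression)

    @"
SWITCH (
    SELECTEDVALUE ( TimeRange[Range], "YTD" ),
    "YTD", CALCULATE ( $Expression, DATESYTD ( 'Date'[Date] ) ),
    "LastMonth", CALCULATE ( $Expression, PREVIOUSMONTH ( 'Date'[Date] ) )
)
"@
}
```

The script would run every measure whose DAXTemplate column says TimeRange through this function before writing the expression into the TMSL script, so the toggle logic is maintained in exactly one place.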

The code

The solution to manage DAX measures using this approach consists of the following ps1-files:

  • Excel2TMSL.ps1 – the main file that does the work. This is the file to be executed.
  • Excel2TMSL.include.ps1 – the file that holds the DAXTemplate functions.
  • Excel2TMSL.Functions.include.ps1 – the file that holds helper functions, called from Excel2TMSL.ps1.

The full source code can be found here (with sample XLSX files):

Power BI Year in Review Contest 2017: Trump’s 2017 in Tweets

This is a contribution to the Power BI Year in Review Contest 2017. The report will give you a wide range of insights into President Trump’s tweeting behavior throughout the year. The main data source for the report is from . Sentiment analysis and key phrase extraction have been done on the tweet text using Azure Cognitive Services. The result is embedded below. Give it a thumbs up at the Power BI Data Stories Gallery.

Power BI Self-Service in Practice

Note! Since this post was written, Power BI Apps have been introduced. The approach is similar, but there are differences. Watch out for an upcoming post on the subject.

When Power BI is introduced in an organization as a self-service platform, a number of questions arise on how best to organize content. The challenge is to provide a framework that clearly differentiates between IT-governed and self-service reports. At the same time, we need to provide our end users with the cleanest and easiest-to-use structure.


In Power BI, we have two central concepts – workspaces and content packs. However, it is not always obvious how to utilize them in the most structured way.

The following illustration shows my best-practice setup for self-service using content packs and workspaces in Power BI.



In most cases we are working with four different actors:

  • Consumer – only consumes information from reports and dashboards.
    Typically 70-90% of the users.
  • Self-Service User – creates reports for consumers to use.
    Typically 10-20% of the users.
  • Data Scientist – creates new data models and introduces new data entities, for themselves or others to use.
    Typically 0-5% of the users.
  • Developer – responsible for the whole analytics architecture, including the common analysis models and IT-governed reports.


You need to have at least one developer workspace. The purpose of this workspace is to structure and manage IT-managed content packs.

Furthermore, you typically create different workspaces for different information areas (finance, HR, sales, etc.). In order to differentiate between consumers and self-service users, you need to create a separate self-service workspace for each information area. When new reports are ready in the self-service workspace, they can be pushed to the consumer workspace via a content pack.

Content Packs

There are two types of content packs – those managed by IT (A4) and those created using self-service (D). These content packs are updated individually, and responsibility for them is held by different actors.

Detailed description of each step

(A1) The developer creates reports in Power BI Desktop. In order to store the reports in some kind of source control repository, Power BI Desktop is currently a must. Power BI Desktop files are also key for being able to publish reports to different environments (dev/test/prod).

(A2) The Power BI Desktop file is published to the developer workspace.

(A3) The developer creates content packs from one or many published Power BI Desktop reports (and possibly dashboards).

(A4) The IT-managed content packs are made available for others to use.

(B1) The Data Scientist creates their own data model and report in Power BI Desktop.

(B2) The model and report are published to the self-service workspace.

(C) The Self-Service User creates reports from existing data sources.

(D) The reports (and possibly data models) are made available in content packs for others to use.

(E) The published content packs are instantiated on consumer workspaces.

(F) The consumer uses the reports (coming from different content packs) in group workspaces or my workspace.


This structure is the one I have found most efficient for managing IT-governed and self-service content in Power BI. It also provides the largest end-user group (consumers) with the cleanest workspaces.

Let me know what you think. How are you managing content in Power BI?

AutoSPFilePublisher – a SharePoint developer tool

The short story

AutoSPFilePublisher is a SharePoint tool that synchronizes local files with SharePoint libraries. It makes it possible to work with source-controlled SharePoint files locally and automatically keep site collection libraries in sync with the local files. The target group for AutoSPFilePublisher is SharePoint developers.



Download a preview version of AutoSPFilePublisher here.

Use the comment section to provide feedback.

For the long story, continue reading.



SharePoint development involves a lot of creation and modification of SharePoint “core files”, such as files in a site collection’s Master Page Gallery. Most developers use one of the following three methods when adding or modifying SharePoint core files:

  1. SharePoint Designer.
  2. Explorer View, using Internet Explorer.
  3. PowerShell upload and publishing script.

There are certainly pros and cons with each of the above methods. Let me summarize my experience of each method:

SharePoint Designer and Explorer View (very similar, thus treated the same)

+ Quick. It is an easy and integrated user experience to get direct access to a single file and update it.
– No source control support. The file is always edited in SharePoint, which means you will have to download it to put it under source control.
– You have to publish the file manually, which in some cases is good but in most cases is a source of error.
– You must manually edit each instance of the file in all site collections it is used in.


PowerShell upload and publishing script

+ Source control support. You have your source code files in the source code repository and upload/publish them to SharePoint from there.
+ Publishing and uploading the file can be done in one single step to avoid unpublished core files.
+ Publishing/uploading can be done to multiple site collections, since it is all controlled by the script.
– Slow and manual. You need to manually start the PowerShell script each time you need to upload/publish files to SharePoint.
– Script maintenance. You need to create and maintain the PowerShell scripts that take care of the upload/publish process for you.


The SharePoint Designer/Explorer View approach is very appealing thanks to being quick and easy to use, whilst the PowerShell approach is much more robust and is preferred as soon as you put your source code in a repository.
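For reference, a minimal upload-and-publish script of the kind described above could look like this. This is a hypothetical sketch using PnP PowerShell (cmdlet parameters vary between PnP module versions, and the site URL, file path, and library are placeholders):

```powershell
# Minimal upload-and-publish sketch using PnP PowerShell (placeholder values).
# Assumes the PnP PowerShell module is installed and you have edit rights.
Connect-PnPOnline -Url "https://contoso.sharepoint.com/sites/intranet" -Interactive

# Upload the local, source-controlled file to the Master Page Gallery,
# checking it out first and publishing it in the same operation.
Add-PnPFile -Path ".\src\mystyles.css" `
            -Folder "_catalogs/masterpage" `
            -Checkout `
            -Publish -PublishComment "Automated publish from source control"
```

Even a short script like this has to be started by hand and extended per library and site collection, which is exactly the friction AutoSPFilePublisher is meant to remove.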


The solution

I want a tool that is both quick and robust. Since none of the above methods offers that, I decided to create a tool called AutoSPFilePublisher.


The purpose of AutoSPFilePublisher is to automatically keep local files in sync (one-way only) with SharePoint libraries. AutoSPFilePublisher will automatically recognize if a file is changed locally and take care of the uploading and publishing for you.



This is what the main interface looks like. As soon as you click Start, it will keep track of changes in all active configurations. If you want to add a bunch of files manually, you can use the Add button (note that it will take a long time to upload a lot of files).


This is what the configuration page for each SharePoint library looks like.


Please give the application a try, but note that it is a PREVIEW application. I take no responsibility for potential bugs, and I recommend you use it in a test environment.

Download a preview version of AutoSPFilePublisher here.

The data behind Gartner’s BI Magic Quadrant visualized in Power BI

Since I published the post Gartner’s BI Magic Quadrant visualized in Power BI, I’ve gotten quite a lot of requests to share the data the Power BI report is based on. In this post I will do just that.

At the end of this post there is a link to a downloadable Excel file that includes all the classifications needed to create the report yourself.


The Excel file was constructed by collecting the Magic Quadrant images published by Gartner and converting each BI vendor’s position in the charts into two percentage numbers – one for the execution dimension and one for the vision dimension. Please be aware that there might be small differences in this data compared to Gartner’s original images. For the most accurate data, always use Gartner’s original publications.

Download Excel file: Gartner-BI-2008-2016

KPI Indicator custom visual for Power BI explained

Since the creation of the KPI Indicator custom visual (originally submitted to the Power BI Custom Visual Contest), I have received a lot of feedback and support from users all around. Therefore, I thought it would be appropriate to give a more extensive description of the different properties that can be customized in this visual.

The first thing you need to consider is the type of banding you want for your KPI. The banding type describes how the KPI’s actual value is compared to its target value. Take a look at the following illustration of the three banding types available:


Banding type “Increasing is better” is used when the KPI is considered better the higher the value is (e.g. Gross Profit). Banding type “Decreasing is better” is used when the KPI is considered better the lower the value is (e.g. Costs). Banding type “Closer is better” is used when the KPI should be close to a target and it is equally bad to be far from it, no matter whether it is on the positive or the negative side (e.g. Number of employees).

When customizing the properties for the KPI Indicator the formatting panel looks like this:


Here is a description of the different properties:

KPI name
This is the name of the KPI Indicator. It is displayed at the top of the visualization. By default (or if left blank), it uses the name of the measure that is used as actual value.

Banding percentage
If the Banding comparison property is set to “Relative” this value should be entered as a percentage number (of the target). If the Banding comparison property is set to “Absolute” this value should be entered as a non-percentage number (that will be added/subtracted from the target instead).

Banding type
The banding type is one of the three types described in the illustration above.

Banding comparison
The Banding comparison can be either “Relative” or “Absolute”. If this property is set to “Relative”, the banding percentage is calculated as a percentage of the target. If it is set to “Absolute”, the banding value is added to or subtracted from the target.
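To make the two comparison modes concrete, here is a small sketch of how the banding thresholds could be derived (the function name and numbers are hypothetical; this is my reading of the behavior described above, not the visual's actual source code):

```powershell
# Hypothetical sketch: derive the lower/upper banding thresholds around a target.
function Get-BandingThresholds {
    param(
        [double]$Target,
        [double]$Banding,   # a fraction of the target (Relative) or a raw value (Absolute)
        [ValidateSet('Relative','Absolute')][string]$Comparison
    )
    $delta = if ($Comparison -eq 'Relative') { $Target * $Banding } else { $Banding }
    [pscustomobject]@{ Lower = $Target - $delta; Upper = $Target + $delta }
}

# Target 100 with 10% relative banding -> thresholds 90 and 110.
# Target 100 with absolute banding 10  -> also thresholds 90 and 110.
Get-BandingThresholds -Target 100 -Banding 0.10 -Comparison Relative
Get-BandingThresholds -Target 100 -Banding 10   -Comparison Absolute
```

Which of the two thresholds counts as "good" then depends on the banding type: Increasing is better uses the lower threshold, Decreasing is better the upper, and Closer is better both.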

Chart type
This property decides whether the trend indication should be a line chart or a bar chart.

Deviation as %
Deviation as % is On by default. This means that the actual value compared to the target value will be presented as a percentage deviation, shown to the right of the actual value. If this property is switched off, the deviation will not be treated as a percentage. That is typically useful when Banding comparison is set to “Absolute” and the actual value is not measured in %.

Thousands separator
This property can be used to force the formatting of the actual value to be formatted with thousands separators. This is typically useful when working with larger non-percentage measures.

Basically, that’s the stuff you need to know to fully utilize the KPI Indicator custom visual for Power BI.