
Happy together paths to test semantic models in Microsoft Fabric feature workspaces with BPA

Reading Time: 8 minutes

In this post I want to cover two happy together paths to test semantic models in Microsoft Fabric feature workspaces with BPA before you deploy updates to your development environment.

Happy together paths

To clarify, when I say BPA I mean the Best Practices Analyzer functionality that you can use to check that your semantic models are optimized and follow best practices. In this post I often refer to Best Practices Analyzer as BPA.

Best Practices Analyzer (BPA) was initially introduced in Tabular Editor 2. However, you can now also work with BPA directly in Microsoft Fabric notebooks, because BPA is part of Semantic Link Labs, a Python library which extends the capabilities of Semantic Link.

I highly recommend choosing one of these happy paths if you are working with the development process that is covered in the Microsoft article about CI/CD options, because they both bring various benefits, some of which I cover in this post.

I split details about the happy paths into the below four sections.

Microsoft Fabric Git integration needs to be configured to work with either of the paths covered in this post, along with a Microsoft Fabric workspace that represents the feature, created via the branch out to a new workspace functionality.

If you need help with any jargon used in this post, then I recommend that you read my Microsoft Fabric Git integration jargon guide.

Why test semantic models with BPA in a feature environment?

When you intend to perform CI/CD it is vital to implement good development practices. One of these is shifting left, which ensures that your testing happens as early as possible in your development process.

In the context of developing semantic models, it allows you to test your updates in your own feature environment first, before you decide to deploy to other environments such as development, test and production.

Doing this means that if you make a change that causes an issue in your feature workspace, you can catch the issue early in the process and avoid any breaking changes affecting other environments.

This provides you with easy options to either revert your change or remove the feature branch and start again.

Whereas if you do not test the change and you release it to your development workspace, anybody else working with that workspace is affected, potentially requiring more time and effort to resolve.

To help visualize this, the below diagram shows how shifting left can help prevent a breaking change that has taken place in a feature workspace.

Shifting left prevents breaking changes affecting other workspaces

Anyway, after various tests I came up with two happy paths that you can take to provide the best test options possible for Best Practices Analyzer.

Taking one of these happy paths improves the coverage of your testing in a feature workspace, since you perform both manual and automated checks, and both can be customized.

In the next two sections I cover two different manual approaches you can work with. Followed by my recommended way to automate working with BPA.

Path one – Working with BPA interactively in Tabular Editor

The first path is to work interactively with a version of Tabular Editor on your machine whilst working on semantic models.

Working with Tabular Editor interactively before committing change to Git

This is something a lot of Power BI developers have already been doing for many years, because Tabular Editor is a popular tool that is rich with features.

You can run your tests against the local copy of the semantic model if you so wish, which you tend to do if you work on a Power BI report locally. Ideally, the report is saved as a Power BI Desktop Project in Power BI Desktop, to help with your CI/CD story.

Alternatively, you can connect to the semantic model stored in the Microsoft Fabric workspace directly with Tabular Editor.

Anyway, once you have checked with BPA and are happy with your changes, you can commit them via the source control functionality in your Fabric workspace, which commits the changes to your feature branch in the Git repository.

From there, you can go into Azure DevOps and initiate a pull request to your development branch. Initiating the automated tests after committing a change.

Advantages of working with Tabular Editor

One of the main advantages of working with BPA inside Tabular Editor is that it is an application that a lot of Power BI developers are familiar with, because it allows them to perform more advanced tasks when working with semantic models.

You can find out more about working with BPA in Tabular Editor by watching the video that covers best practice rules for Tabular Editor’s Best Practice Analyzer. Presented by Michael Kovalsky.

Path two – Checking BPA interactively with Semantic Link

The second path involves checking the quality of your semantic models interactively within a notebook, by running code inside your notebook that references Semantic Link Labs, which is a library that extends Semantic Link.

Checking semantic models interactively in a notebook before committing change to Git

You can run the sempy_labs.run_model_bpa function in a cell to check each individual semantic model. However, I recommend that you run sempy_labs.run_model_bpa_bulk instead and test all the semantic models at once.

Working with sempy_labs.run_model_bpa_bulk allows you to easily store all your results in a Lakehouse table.
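As a rough sketch, a notebook cell for this could look like the following. The wrapper function name here is hypothetical, and only a couple of parameters are shown; check the Semantic Link Labs documentation for the full signatures. Note that sempy_labs only resolves inside a Microsoft Fabric runtime, which is why the import sits inside the function rather than at the top of the cell.

```python
# Sketch of an interactive BPA check in a Fabric notebook.
# NOTE: sempy_labs is only available inside a Microsoft Fabric runtime,
# so the import happens inside the (hypothetical) wrapper function.

def run_bpa_for_workspace(workspace_name: str) -> None:
    """Run BPA against every semantic model in a workspace and
    persist the results to the attached lakehouse."""
    import sempy_labs  # Fabric-only library

    # Checks all semantic models in the workspace in one pass and
    # writes the results to a Delta table in the default lakehouse.
    sempy_labs.run_model_bpa_bulk(workspace=workspace_name)

# For a single semantic model you could instead call something like:
#   sempy_labs.run_model_bpa(dataset="Sales Model", workspace="My Feature Workspace")
```

In a centralized-workspace setup, each developer would call this with the name of their own feature workspace, so all results land in the same table.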

What you can also do is get everybody to work with a notebook stored in a centralized workspace, in order to store the results from various workspaces in one centralized table.

Keeping a centralized table allows you to keep a history for all of the developers. However, bear in mind that it does mean that all the developers will need access to that centralized workspace as well.

Anyway, once you are happy with your changes you can commit them in source control. Which commits the changes to your feature branch in the Git repository.

From there, you can go into Azure DevOps and initiate a pull request to your development branch. Initiating the automated tests after committing a change.

Advantage of working in a notebook

One advantage of working in a notebook when looking to test semantic models in Microsoft Fabric feature workspaces is that you can extend the code that you develop. For example, you can enrich it with additional data.

Plus, you can visualize your results as well. Which I cover in a previous post about places you can visualize data in Microsoft Fabric.

You can find out more about working with Semantic Link Labs by watching the video that covers optimizing and automating Fabric scenarios with Semantic Link Labs. Which also happens to be presented by Michael Kovalsky.

Automated tests after committing the change

Once you have committed your changes with either of the two mentioned paths you can look to perform automated tests.

One way to perform automated tests on your semantic models is by implementing Continuous Integration using the Tabular Editor CLI. Which is well documented in the Power BI Project (PBIP) and Azure DevOps build pipelines for validation article by Microsoft.

Since it is well documented in the article, below is an overview of how to implement it in this scenario.

  • First you create the pipeline in Azure DevOps, ensuring a version of the pipeline is in your development branch.
  • You then configure a branch policy on your development branch.

Once done, whenever you create a pull request from your feature branch to your development branch in Azure DevOps the pipeline starts. Which does the following by default:

  • First the pipeline downloads the Tabular Editor CLI and the default Best Practices Rules.
  • It then downloads PBI Inspector and its default rules.
  • Then the Tabular Editor CLI runs the Best Practices Analyzer to check the Best Practices rules against all the semantic models in the repository.
  • Afterwards, it runs PBI Inspector to check the quality of all the reports in the repository.
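The steps above can be sketched as a minimal Azure Pipelines YAML file. To be clear, the script names and structure below are illustrative placeholders, not the actual pipeline; follow Microsoft's PBIP build-pipeline article for the complete, supported version.

```yaml
# Illustrative sketch only -- script names are placeholders.
trigger: none          # started by the branch policy on pull requests

pool:
  vmImage: windows-latest

steps:
  - powershell: |
      # Downloads the Tabular Editor CLI and the default BPA rules,
      # then analyzes every semantic model definition in the repository.
      .\Scripts\Run-Bpa.ps1
    displayName: "Run Best Practices Analyzer"

  - powershell: |
      # Downloads PBI Inspector and its default rules,
      # then checks every report definition in the repository.
      .\Scripts\Run-PbiInspector.ps1
    displayName: "Run PBI Inspector"
```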

Below is a diagram that shows how this can work when you want to merge a change from a feature workspace to a development workspace.

Continuous Integration tests when merging feature workspace changes to development

It is a very efficient way to perform integration tests on your semantic models without running any Fabric items, which means you can run it on any workspace that has Microsoft Fabric Git integration configured, including ones backed by Power BI Premium capacities.

You can customize the above process. For example, you can add additional rules or create your own.

TMDL support when testing semantic models in Microsoft Fabric feature workspaces with Tabular Editor

One key point I want to highlight relates to the Tabular Editor 2 CLI. Even though the article does not show it, the latest version of the command line tool does support the Tabular Model Definition Language (TMDL) file format.

Run BPA in a notebook as an automated check

I recommend running the Tabular Editor CLI as an automated test instead of running BPA in a notebook, because it is the simpler of the two methods to implement.

Providing you follow the article created by Microsoft and remember to store a copy of the YAML pipeline in the branch where you apply the branch policy.

Even though I covered previously that you can authenticate as a service principal to run a Microsoft Fabric notebook from Azure DevOps, there are a lot of things to consider if you want to attempt to run BPA through Semantic Link in the notebook.

Considerations if looking to automate running BPA in a notebook

Some of the considerations when looking to automate BPA running inside a notebook include the following points.

  1. Authentication to run both the notebook and the semantic link module together can be tricky to implement.
    In some instances, it can appear the notebook has completed successfully in your pipeline when in reality it has an issue. You must check the logs in your notebook run history to confirm it has actually completed.
  2. You need to consider adding the semantic_link_labs library to a Microsoft Fabric environment. Plus, a strategy to routinely update the version of the library in the environment.
  3. If you intend to run sempy_labs.run_model_bpa_bulk in a centralized workspace you must consider how you intend to identify your feature workspaces.
    I strongly recommend that you introduce a naming convention so that your workspaces are the same as your branch names.
    Doing this will allow you to reference the SYSTEM_PULLREQUEST_SOURCEBRANCH Azure Pipeline system variable and pass it as a notebook parameter for the workspace name.
  4. Once sempy_labs.run_model_bpa_bulk has completed you need to decide how to test the items stored in the Lakehouse.
    My recommendation is to use pytest like I covered in a previous post about unit tests. In order to cause the pipeline to fail based on the criteria you specify.
    For example, if a severity level of “Error” is detected.

As you can see, implementing this is very involved. With this in mind, I currently recommend implementing the Tabular Editor 2 CLI method.

Final words about paths to test semantic models in Microsoft Fabric feature workspaces with BPA

I hope that sharing these happy together paths to test semantic models in Microsoft Fabric feature workspaces with BPA encourages more of you to implement more thorough testing.

Catching issues at an early stage can save a lot of time and make your updates more efficient.

Of course, if you have any comments or queries about this post feel free to reach out to me.
