
Two ways to perform CI/CD for SQL databases in Fabric using Azure DevOps

Reading Time: 7 minutes

In this post I want to cover two ways to perform CI/CD for SQL databases in Fabric using Azure DevOps.

I show two different ways you can create the dacpac file in Azure DevOps: one based on a database project and another by directly extracting the schema of the database into a dacpac file, as per the below diagram.

Two ways to perform CI/CD for SQL databases in Fabric using Azure DevOps

Just a quick recap, SQL database in Fabric is a new offering that is currently in public preview. You can read more about it in a post I wrote that covers spreading your SQL Server wings with SQL database in Fabric.

To manage expectations, this post covers performing CI/CD for SQL databases in Fabric using classic pipelines. In other words, the GUI-based Classic Editor for the build (CI) and the Releases feature for the deployment (CD).

It consists of three main sections:

  • Method one to create a dacpac file.
  • Method two to create a dacpac file.
  • Deploying the created dacpac to other SQL databases in Fabric.

In addition, I provide plenty of links along the way.

If you need help with any jargon used in this post, then I recommend that you read one of my other posts, which is a Microsoft Fabric Git integration jargon guide for Fabricators.

Prerequisites to perform CI/CD for SQL databases in Fabric

For the benefit of this post, I specify Microsoft-hosted agents so you can all mimic the setup quickly. However, I recommend configuring a self-hosted agent going forward.

Plus, for the benefit of this post I configured an Azure Pipelines variable group, which is why you can see variables used in the pipelines. For example, $(BuildConfiguration).

Method one to create a dacpac: From a SQL database in Fabric database project

In order to create a dacpac file based on a database project you must first create the initial database project and synchronize it to Azure DevOps.

Create a database project based on a SQL database in Fabric

You can create a Database Project based on a SQL database in Fabric with a variety of applications. In this post I show how to create one using Azure Data Studio with the SQL Database Projects extension installed.

You can get the connection details for the SQL database in Fabric by going into the SQL query editor for the database in Microsoft Fabric. You can then select the Open in button to connect with Azure Data Studio.

From there, you can create the database project. You can then initialize the folder it is stored in as a Git repository and synchronize it with a repository in Azure DevOps.

I covered how you can do this in detail in a previous post about how to create a dacpac for an Azure Synapse Analytics dedicated SQL Pool using Azure DevOps. You can find the relevant details in the below sections of that post.

  • Create Git repository for SQL Pool in Azure DevOps
  • Synchronize Azure Synapse SQL Pool repository to Azure DevOps

You can follow the above steps because the only difference is that you synchronize a database project for a SQL database in Fabric to Azure Repos instead of a SQL Pool.

Populated Git repository in Azure Repos

One thing I did notice whilst exploring the database project is that the wrong target platform was specified for the database project.

After testing and confirming the database compatibility listed in the feature comparison article I decided to change the target platform to 160 (SQL Server 2022). You can decide which one will work best for yourselves.

Creating the dacpac file via a Classic Pipeline

Once the database project is synchronized in Azure DevOps you can create a dacpac via a pipeline. You can do this by creating a pipeline with the Classic Editor in Azure Pipelines and adding the relevant tasks, like in the below example.

Classic Editor pipeline

As you can see, I specified the name of the sqlproj file here instead of a solution, which is ideal for when you want only certain projects deployed to a specific database.

Plus, you can enable continuous integration in the ‘Triggers’ tab, so that the pipeline starts every time a branch in the repository is updated. This helps fully implement the CI part of CI/CD.

In the ‘Build dacpac’ .NET core task you state the location of the sqlproj file and then set the configuration in the ‘Arguments’ section as below:

--configuration $(BuildConfiguration)

In addition, you can set the ‘Path to Publish’ section of the Publish Artifact task to be ‘bin\$(BuildConfiguration)’.
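For those who prefer YAML pipelines over the Classic Editor, the same build can be sketched as below. Bear in mind this is a rough equivalent only; the trigger branch, project path pattern, agent pool and artifact name are my own assumptions that you will need to adjust.

```yaml
# Hypothetical YAML equivalent of the Classic Editor build; paths and versions are assumptions.
trigger:
  - main

pool:
  vmImage: 'windows-latest'

steps:
  # Build the database project into a dacpac file
  - task: DotNetCoreCLI@2
    displayName: 'Build dacpac'
    inputs:
      command: 'build'
      projects: '**/*.sqlproj'
      arguments: '--configuration $(BuildConfiguration)'

  # Publish the build output, which contains the dacpac, as a pipeline artifact
  - task: PublishBuildArtifacts@1
    displayName: 'Publish Artifact'
    inputs:
      PathtoPublish: 'bin\$(BuildConfiguration)'
      ArtifactName: 'drop'
```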

You can then run the pipeline. Once the pipeline has completed you can see that an artifact has been published, which contains the dacpac file.

Completed pipeline with an artifact

Method two to create a dacpac: Extracting the schema from an existing SQL database in Fabric

Alternatively, you can use SqlPackage in Azure DevOps to extract the schema of an existing SQL database in Fabric into a dacpac file.

In order to read from the database, you can add a service principal to the database by managing the database access. Personally, I added the service principal manually first with the below statement.

CREATE USER [MSFabric] FROM EXTERNAL PROVIDER;

However, to keep this post simple I also added the service principal to the db_ddladmin role using the functionality to manage SQL database-level roles from the Fabric portal.
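If you would rather stay in T-SQL than use the portal, the role membership can be granted with a statement like the below. This assumes a service principal named MSFabric, as used in the earlier statement; adjust the name to match your own.

```sql
-- Add the service principal as a database user (as shown above)
CREATE USER [MSFabric] FROM EXTERNAL PROVIDER;

-- Add the user to the db_ddladmin database-level role
ALTER ROLE db_ddladmin ADD MEMBER [MSFabric];
```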

After adding the service principal, you can create the below pipeline in Azure DevOps.

Pipeline to extract the schema into a dacpac file

As you can see, the first step is to update SqlPackage in order to support the target platform for the SQL database in Fabric. For the benefit of those who wish to copy and paste, below is the update command:

dotnet tool update -g microsoft.sqlpackage

Once done, you can run the SqlPackage command to extract the schema into a dacpac file.

SqlPackage /Action:Extract /TargetFile:"$(Build.ArtifactStagingDirectory)\SQLDBinFabric.dacpac" /SourceConnectionString:"$(SourceConnString)"

As you can see, the destination is the artifact staging directory, to ensure that the pipeline only publishes the dacpac file.
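Putting these steps together, a YAML sketch of this second pipeline could look like the below. Again, this is an assumption-laden equivalent of the pipeline shown; the agent pool, task versions and artifact name are placeholders to adjust.

```yaml
# Hypothetical YAML equivalent of the schema-extract pipeline; details are assumptions.
pool:
  vmImage: 'windows-latest'

steps:
  # Update SqlPackage to support the SQL database in Fabric target platform
  - script: dotnet tool update -g microsoft.sqlpackage
    displayName: 'Update SqlPackage'

  # Extract the schema of the existing database into a dacpac file
  - script: >
      SqlPackage /Action:Extract
      /TargetFile:"$(Build.ArtifactStagingDirectory)\SQLDBinFabric.dacpac"
      /SourceConnectionString:"$(SourceConnString)"
    displayName: 'Extract schema into dacpac'

  # Publish the dacpac as a pipeline artifact
  - task: PublishBuildArtifacts@1
    displayName: 'Publish Artifact'
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)'
      ArtifactName: 'drop'
```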

Plus, I opted to add the entire connection string as a variable to keep my syntax nice and clean. Below is the full construct you need to use for your connection string for this to work with a service principal.

Server=tcp:{PROVIDED SERVER NAME IN FABRIC}.database.fabric.microsoft.com,1433;Initial Catalog={PROVIDED CATALOG/DATABASE NAME};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Authentication=Active Directory Service Principal;User Id={SERVICE PRINCIPAL CLIENT ID};Password={SERVICE PRINCIPAL SECRET}

Note: The database name required in the connection string will differ from the database name you provided. Make sure you enter the right one.

You can get the correct server and database names in Microsoft Fabric by going into the SQL query editor. From there, you can either click to connect with Visual Studio Code or navigate to get the connection strings.

Verifying the dacpac has the SQL database in Fabric target platform

After extracting the dacpac, the pipeline publishes it as an artifact in Azure DevOps. You can verify that the dacpac exists by selecting the artifact in the pipeline results.

Artifact containing the dacpac

To verify the contents of the dacpac you can download it and rename it with a “.zip” extension in order to extract its contents. You can then check that the correct target platform is stated in the “model.xml” file, as below.

DspName="Microsoft.Data.Tools.Schema.Sql.SqlDbFabricDatabaseSchemaProvider" 
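If you want to script this check instead of inspecting the file by hand, a dacpac is just a zip archive, so something like the below Python sketch can pull the DspName out of model.xml. The function name and the way it is called are my own illustrations, not part of any official tooling.

```python
import re
import zipfile


def get_dacpac_target_platform(dacpac_path: str) -> str:
    """Return the DspName (target platform) stored in a dacpac's model.xml.

    A dacpac is a zip archive, so we read model.xml straight out of it
    without renaming the file first.
    """
    with zipfile.ZipFile(dacpac_path) as dacpac:
        model = dacpac.read("model.xml").decode("utf-8")
    match = re.search(r'DspName="([^"]+)"', model)
    return match.group(1) if match else ""
```

For a dacpac extracted from a SQL database in Fabric, you would expect this to return Microsoft.Data.Tools.Schema.Sql.SqlDbFabricDatabaseSchemaProvider.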

Deploying the dacpac to other SQL databases in Fabric

Once the dacpac is created it is time to deploy it, which can be done with the Releases feature. Two prerequisites must be met beforehand.

First of all, you need to make sure the service principal has the relevant permissions in the target database.

Secondly, after testing I opted to connect to the database with a connection string. If you are looking to do the same, you need to ensure the connection string for the destination database is added as a variable. Alternatively, choose another connection strategy.

You can deploy dacpacs created by both of the methods covered in this post by creating a release pipeline like the one below, selecting the pipeline that created the dacpac as the artifact source.

Release pipeline

You can then add an Azure SQL Database Deployment task to deploy the updates in the dacpac file to the SQL database in Fabric, like in the below example.

Azure SQL Database Deployment task

Here are some key points to remember when configuring the above task.

  1. I opted to specify a connection string to the new database, by adding a variable based on the same construct I shared earlier in this post.
  2. If you decide to use the first method covered in this post, which creates a dacpac file from a database project, you currently must specify an additional argument in the task. You must add “/p:AllowIncompatiblePlatform=True” due to the compatibility level in the dacpac. Once all the related tooling has been updated to support the new compatibility level this can be removed.
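To make the second point concrete, the equivalent SqlPackage publish command would look something like the below. The file name and the connection string variable are assumptions based on the earlier examples in this post; only the first method's dacpac needs the extra property.

```
SqlPackage /Action:Publish /SourceFile:"SQLDBinFabric.dacpac" /TargetConnectionString:"$(TargetConnString)" /p:AllowIncompatiblePlatform=True
```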

I have personally run the release pipeline with dacpac files created by both of the methods mentioned in this post, so I can vouch for the fact that both methods work fine, as long as you remember the points I cover in this post.

Final words about these two ways to perform CI/CD for SQL databases in Fabric using Azure DevOps

I really hope that showing these two ways to perform CI/CD for SQL databases in Fabric using Azure DevOps helps speed up your own implementations of either of these methods.

Of course, do bear in mind that there are other ways you can update your SQL databases in Fabric to other environments. Such as Microsoft Fabric deployment pipelines which I covered in a previous post.

Of course, if you have any comments or queries about this post feel free to reach out to me.
