In this post I cover two ways to perform CI/CD for SQL databases in Fabric using GitHub Actions.
I do this by showing two different ways you can create the dacpac file with GitHub Actions: one based on a database project and another by directly extracting the schema of the database into a dacpac file. As per the below diagram.
SQL database in Fabric is a new offering that is currently in public preview. You can read more about it in a post I wrote that covers spreading your SQL Server wings with SQL database in Fabric.
I provide plenty of links in this post. If you need help with any of the jargon used, then I recommend that you read one of my other posts, which is a Microsoft Fabric Git integration jargon guide for Fabricators.
This post consists of six main sections:
- Sample repository.
- Prerequisites.
- Method one to create a dacpac file.
- Method two to create a dacpac file.
- Deploying the created dacpac to other SQL databases in Fabric.
- Final words.
Those who wish to see an Azure DevOps alternative can read my other post, where I cover two ways to perform CI/CD for SQL databases in Fabric using YAML Pipelines.
Sample repository
I created a public GitHub repository called GitHub-SQLDatabaseinFabric to accompany this post.
It is free for you to download/clone and use as a template to perform either of the methods shown in this post. My only request is that if it proves to be useful, please give it a star in GitHub.
Prerequisites to perform CI/CD for SQL databases in Fabric using GitHub Actions
For the benefit of this post, I specify Microsoft-hosted agents so you can set up your pipeline quickly. However, I recommend looking to configure a self-hosted agent for ongoing use.
Plus, for the benefit of this post I configured both GitHub secrets and variables, which is why you can see variables used in the pipelines. For example, ${{secrets.SOURCEDB_CONNECTION_STRING}}.
You can look to work with Azure Key Vault secrets instead of GitHub secrets and call the secrets directly in the GitHub workflow instead.
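As a rough sketch of that approach, the below step fetches a connection string at runtime with the Azure CLI. It assumes a prior azure/login step has already authenticated the runner, and the vault name (DemoKeyVault) and secret name (SourceDbConnectionString) are placeholders for your own:

```yaml
      # Hypothetical sketch: fetch the connection string from Azure Key Vault at runtime
      # Assumes a prior azure/login step; vault and secret names are placeholders
      - name: Get connection string from Key Vault
        run: |
          $conn = az keyvault secret show --vault-name DemoKeyVault --name SourceDbConnectionString --query value -o tsv
          # Mask the value in the workflow logs and expose it to later steps
          echo "::add-mask::$conn"
          echo "SOURCEDB_CONNECTION_STRING=$conn" >> $env:GITHUB_ENV
```

This keeps secret rotation in Key Vault rather than in GitHub, at the cost of an extra sign-in step in the workflow.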
Method one to create a dacpac: From a SQL database in Fabric database project
In order to create a dacpac file based on a database project you must first create the initial database project and synchronize it to GitHub.
Create a database project based on a SQL database in Fabric
You can create a Database Project based on a SQL database in Fabric with a variety of applications. In this post I show how to create one using Azure Data Studio with the SQL Database Projects extension installed.
You can get the details to the SQL database in Fabric with Azure Data Studio by going into the SQL query editor for the database in Microsoft Fabric. You can then select the Open in button to connect.
From there, you can create the database project. You can then initialize the folder it is stored in as a Git repository and synchronize it with a repository in GitHub.
I covered something similar in a previous post about how to create a dacpac for an Azure Synapse Analytics dedicated SQL Pool using GitHub Actions. The only difference is that you synchronize a database project for a SQL database in Fabric to GitHub instead of one for a SQL Pool.
One thing I did notice during previous testing is that the wrong target platform was specified for the database project. I changed the target platform to 160 (SQL Server 2022).
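If you need to make the same change by hand, the target platform lives in the DSP property of the .sqlproj file. As a rough illustration (your project file layout may differ from this fragment):

```xml
<!-- Fragment of a .sqlproj file; sets the target platform to SQL Server 2022 (160) -->
<PropertyGroup>
  <DSP>Microsoft.Data.Tools.Schema.Sql.Sql160DatabaseSchemaProvider</DSP>
</PropertyGroup>
```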
However, you can keep an eye on future tooling updates and decide which target platform works best for you.
Creating the dacpac file via a GitHub workflow
Once the database project is synchronized in GitHub you can create a dacpac via a GitHub workflow. To help with some jargon, you can think of a workflow as similar to a YAML pipeline in Azure DevOps.
You can set up a workflow by clicking on the Actions tab and selecting the option to set up a workflow yourself.
You can specify the triggers and variables at the start of the workflow. Like in the below code.
name: Update SQL database in Fabric from a Database Project

# Sets the trigger to be triggered manually (other triggers are available)
on:
  workflow_dispatch:

# Specify variables
env:
  ARTIFACT: SQLdbinFabric
Afterwards, you can add the code to check out the repository and then create and publish the dacpac, with the three steps within the same job. As per the below example.
# Job to build and publish the dacpac
jobs:
  BuildDacpac:
    # I use a GitHub-hosted runner here, you can opt to use a self-hosted one instead
    runs-on: windows-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v4.2.2
      # Build dacpac using dotnet build
      - name: Build Database project
        run: |
          dotnet build
      # Publish artifact
      - uses: actions/upload-artifact@v4.5.0
        with:
          name: ${{ env.ARTIFACT }}
          path: ${{ github.workspace }}/bin/Debug/
You can then run the workflow from the Actions tab. Once the workflow has completed you can see that an artifact containing the dacpac file has been published.
You can download the published artifact by selecting it in the summary. From there you can extract the downloaded zip file to view its contents.
Method two to create a dacpac: Extracting the schema from an existing SQL database in Fabric
Alternatively, you can use SqlPackage in a GitHub workflow to extract the schema of an existing SQL database in Fabric into a dacpac file.
In order to read from the database, you can add a service principal to the database to manage database access.
You can add the service principal to the db_ddladmin role using the functionality to manage SQL database-level roles from Fabric portal.
Just like in the first method you create your workflow in the Actions tab. You can then specify the trigger and variables at the start of the workflow.
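For instance, the start of the workflow for this method could look like the below sketch, which mirrors the first method; the workflow name and the ARTIFACT value are my own choices:

```yaml
name: Update SQL database in Fabric by extracting a schema

# Sets the trigger to be triggered manually (other triggers are available)
on:
  workflow_dispatch:

# Specify variables
env:
  ARTIFACT: SQLdbinFabric
```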
Afterwards, you can add the code to create and publish the dacpac with three steps within a single job.
First of all, you must install the latest version of SqlPackage with the below step.
# Install latest version of SqlPackage
- name: Install latest version of SqlPackage
  run: |
    dotnet tool update -g microsoft.sqlpackage
After doing so, you can extract the schema of an existing SQL database in Fabric with SqlPackage, setting the destination to a dedicated folder so that the workflow only publishes the dacpac file. As you can see in the below two steps.
# Extract the schema into a dacpac using SqlPackage
- name: Extract dacpac from source database
  run: |
    md dacpac; cd dacpac
    SqlPackage /Action:Extract /TargetFile:"SQLDBinFabric.dacpac" /SourceConnectionString:"${{ secrets.SOURCEDB_CONNECTION_STRING }}"
# Publish artifact
- uses: actions/upload-artifact@v4.5.0
  with:
    name: ${{ env.ARTIFACT }}
    path: dacpac
I opted to add the entire connection string as a variable to keep my syntax nice and clean. Below is an example of a connection string format that works with a service principal.
Server=tcp:{PROVIDED SERVER NAME IN FABRIC}.database.fabric.microsoft.com,1433;Initial Catalog={PROVIDED CATALOG/DATABASE NAME};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Authentication=Active Directory Service Principal; User Id={SERVICE PRINCIPAL CLIENT ID}; Password={SERVICE PRINCIPAL SECRET}
Note: The database name required in the connection string may differ from the database name you provided when creating the database. Make sure you enter the right one.
You can get the correct server and database names in Microsoft Fabric by going into the SQL query editor. From there, click to connect to Visual Studio Code or navigate to get the connection strings.
Verifying the workflow
After adding the tasks, the workflow can be run to confirm it works.
You can then download the published artifact by selecting it in the summary. From there you can extract the downloaded zip file to view its contents.
Deploying the dacpac to other SQL databases in Fabric using GitHub Actions
Once the dacpac is created it is time to add the logic to deploy it, which can be added to the same workflow you created for either of the above methods.
However, two prerequisites are required first.
- The service principal needs the relevant permissions in the target database.
- You must ensure the connection string for the destination database is added as a secret. Alternatively, choose another connection strategy.
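As a rough guide, granting the service principal access in the target database can look like the below T-SQL, where MyServicePrincipal is a placeholder for the display name of your own service principal:

```sql
-- Hypothetical example: create a database user for the service principal
CREATE USER [MyServicePrincipal] FROM EXTERNAL PROVIDER;
-- Add it to a role with enough permissions to publish schema changes
ALTER ROLE db_ddladmin ADD MEMBER [MyServicePrincipal];
```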
Once the prerequisites are done you can add a new job to the workflow that contains your chosen method to create the dacpac.
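For example, the new job could be shaped like the below sketch, where the job names are my own and needs ensures the dacpac is built before deployment starts:

```yaml
  # Deployment job; runs after the job that creates the dacpac
  DeployDacpac:
    # Waits for the build job (BuildDacpac) to complete successfully
    needs: BuildDacpac
    runs-on: windows-latest
    steps:
      # Placeholder step; replace with the download and deployment steps
      - run: echo "Deployment steps go here"
```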
Currently, you do not need to update SqlPackage when working with a GitHub-hosted runner if you created the dacpac with either of the above methods. However, if you wish you can add the below step at the start of the deployment job to install the latest SqlPackage for certainty.
# Install latest version of SqlPackage on the runner
- name: Update SqlPackage
  run: |
    dotnet tool install --global Microsoft.SqlPackage
From there, you can download the artifact that contains the dacpac file onto the runner and then specify it as part of the deployment.
# Downloads the artifact containing the dacpac to the runner
- name: Download artifact containing dacpac
  uses: actions/download-artifact@v4.1.8
  with:
    name: ${{ env.ARTIFACT }}
# Deploys the dacpac
# For this to work you MUST have a secret specified, default in my repo is blank
# Requires a secret DESTINATIONDB_CONNECTION_STRING which contains the connection string to your SQL database in Fabric
- name: SQL database in Fabric Deploy
  uses: Azure/sql-action@v2.3
  with:
    # The connection string, including authentication information, for the database
    connection-string: '${{ secrets.DESTINATIONDB_CONNECTION_STRING }}'
    # Path to the dacpac file in the artifact
    path: 'SQLDBinFabric.dacpac'
    # Action we want it to perform, in this case 'publish' the contents of the dacpac to the database
    action: 'publish'
    # Since no firewall is in place
    skip-firewall-check: 'True'
    arguments: '/p:AllowIncompatiblePlatform=True'
Note that I enabled the skip firewall check because the check is currently not required when deploying to a SQL database in Fabric.
As you can see below, this works with both methods to create a dacpac.
Of course, you can customize these workflows as you see fit. For example, you can add an approvals process with environments.
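As a sketch, assuming you have created a GitHub environment called Production with required reviewers, the deployment job only needs one extra line to gate the deployment behind an approval:

```yaml
  DeployDacpac:
    needs: BuildDacpac
    runs-on: windows-latest
    # Ties the job to a GitHub environment, which can enforce manual approvals
    environment: Production
```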
Final words about these two ways to perform CI/CD for SQL databases in Fabric using GitHub Actions
I really hope that showing these two ways to perform CI/CD for SQL databases in Fabric using GitHub Actions helps to speed up your own implementations of either of these methods.
Do bear in mind that there are other ways you can update your SQL databases in Fabric to other environments. Such as Microsoft Fabric deployment pipelines which I covered in a previous post.
Of course, if you have any comments or queries about this post feel free to reach out to me.
Finally, Happy Holidays to all. For those who celebrate it like I do, have a vantastic Christmas.