In this post I want to cover four ways to monitor a semantic model refresh in Microsoft Fabric.
I wanted to do this post because there is only one reference to semantic models in the DP-700 study guide, which is to monitor a semantic model refresh. So I wanted to create this post to show you some options.
Those of you who are looking to take the DP-700 exam are welcome to view my updated checklist for the DP-700 exam, which I recently published after the exam became Generally Available (GA).
By the end of this post, you will know various options to monitor a semantic model refresh, plus some ideas on how to extend them. Along the way I share plenty of links.
All the examples I show in this post are based on an example from a previous post, where I reviewed the ESG metrics capability in Microsoft Fabric.
1: Directly from the workspace
You can refresh your semantic model directly within your Microsoft Fabric workspace by clicking the refresh icon next to the semantic model’s name.
While the refresh is running you can see it happening in the “Refreshed” column in the workspace. In addition, you can click on the ellipsis next to the semantic model and select “Refresh history”.
Doing this brings up the refresh history of the semantic model. If you click on the refresh activity that is taking place, all you will see is that it is in progress. However, once it has completed you do get a bit more detailed information.
One key point I want to highlight is that when you click on the semantic model in the workspace to view its details, you can get to the same window by clicking on the arrow next to “Refresh” in the top right-hand corner and selecting the refresh history there.
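On a side note, you can also get to the same refresh history programmatically with the Power BI “Get Refresh History” REST API. Below is a minimal sketch that uses the PowerBIRestClient that comes with semantic link (sempy) in a notebook; the workspace and semantic model IDs are placeholders that you need to replace with your own values.

import sempy.fabric as fabric

# Placeholders - replace with your own workspace and semantic model (dataset) IDs
workspace_id = "<your-workspace-id>"
dataset_id = "<your-semantic-model-id>"

# The "Get Refresh History" REST API returns the same information you see in the
# "Refresh history" window, such as the status, refresh type and start/end times
client = fabric.PowerBIRestClient()
response = client.get(f"v1.0/myorg/groups/{workspace_id}/datasets/{dataset_id}/refreshes?$top=10")

for refresh in response.json().get("value", []):
    print(refresh["status"], refresh["refreshType"], refresh["startTime"], refresh.get("endTime"))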
2: Monitor hub
Regardless of how you run the semantic model refresh, you can monitor its progress in the Monitor hub, which you can access from the left-hand navigation in Microsoft Fabric.
If you open it whilst an activity is running it will tell you that it is in progress. In addition, you can customize the columns to get additional information.
You can click on the information icon on the left to view additional details about the activity, such as the status, start time and end time.
If you click on the ellipsis (…) next to the name, you also get the option to view the details, plus the option to view historical runs in a tabular format, which you can then export to a CSV file.
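If you do export the historical runs to a CSV file, a quick way to analyse them further is with pandas in a notebook. Below is a minimal sketch; the file name and the column names are assumptions, so adjust them to match your export.

import pandas as pd

# Placeholder file name - use the path of the CSV you exported from the Monitor hub
history = pd.read_csv("HistoricalRuns.csv")

# The column names below are assumptions - check the header row of your export and adjust
history["Start time"] = pd.to_datetime(history["Start time"])
history["End time"] = pd.to_datetime(history["End time"])
history["Duration_in_Seconds"] = (history["End time"] - history["Start time"]).dt.total_seconds()

display(history.sort_values("Start time", ascending=False))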
3: Pipeline output
One of the reasons I chose the ESG Metrics example for this post is that it comes with a sample Data Pipeline, which allows me to easily demonstrate how you can monitor the progress within a Microsoft Fabric Data Pipeline.
When you run a semantic model refresh activity in a Data Pipeline, you can view the activity output directly in the pipeline by selecting the activity and then the output icon next to it.
In the output you can see the start and end times at the top. You can then scroll down to see the status of all the selected tables and finally the duration of the entire activity at the bottom, like in the truncated example below.
{
    "startTime": "2025-01-09T18:07:53.163Z",
    "endTime": "2025-01-09T18:08:26.753Z",
    "type": "Full",
    "commitMode": "Transactional",
    "status": "Completed",
    "extendedStatus": "Completed",
    "currentRefreshType": "Full",
    "numberOfAttempts": 0,
    "objects": [
        {
            "table": "ESG_Measures",
            "partition": "ESG_Measures",
            "status": "Completed"
        },
        ....
    ],
    "refreshAttempts": [
        {
            "attemptId": 1,
            "startTime": "2025-01-09T18:07:53.5323991Z",
            "endTime": "2025-01-09T18:08:26.7530952Z",
            "type": 0
        }
    ],
    "ResponseHeaders": {
        "Pragma": "no-cache",
        "Transfer-Encoding": "chunked",
        "x-ms-root-activity-id": "382773ba-d098-487a-9b21-de58ac22c29e",
        "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
        "X-Frame-Options": "deny",
        "X-Content-Type-Options": "nosniff",
        "RequestId": "382773ba-d098-487a-9b21-de58ac22c29e",
        "Access-Control-Expose-Headers": "RequestId",
        "Cache-Control": "no-store, must-revalidate, no-cache",
        "Date": "Thu, 09 Jan 2025 18:08:39 GMT",
        "Content-Type": "application/json; charset=utf-8"
    },
    "executionDuration": 47
}
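If you want to work with that output outside of the pipeline, for example after copying the JSON into a file, a short sketch like the one below pulls out the overall status, the duration and the per-table statuses shown above. The file name is just a placeholder.

import json

# Placeholder file name - paste the activity output JSON from the pipeline into this file
with open("refresh_activity_output.json") as f:
    output = json.load(f)

# Overall outcome of the refresh activity
print("Status:", output["status"])
print("Duration in seconds:", output["executionDuration"])

# Status of each table, as listed under "objects" in the output
for item in output["objects"]:
    print(item["table"], item["partition"], item["status"])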
4: Semantic link
To monitor a semantic model refresh with semantic link, you can run the below code in a notebook:
import sempy.fabric as fabric
from pyspark.sql import SparkSession
from pyspark.sql.functions import unix_timestamp, col

# Get (or create) a Spark session so the results can be worked with as a Spark DataFrame
spark = SparkSession.builder.appName("TimeDifference").getOrCreate()

# Name of the semantic model (dataset) to monitor
dataset = "SDS_ESGM_ESGDemo_DatasetForMetricsDashboard_DTST"

# List the refresh requests for the semantic model (returned as a pandas DataFrame)
df = fabric.list_refresh_requests(dataset=dataset)

# Convert to a Spark DataFrame and add the refresh duration in seconds
spark_df = spark.createDataFrame(df)
spark_df = spark_df.withColumn("Duration_in_Seconds", (unix_timestamp(col("End Time")) - unix_timestamp(col("Start Time"))).cast("int"))

display(spark_df)
I must give some credit to Sandeep Pawar for the above code, since his post on how to refresh individual tables and partitions with semantic link helped me to get list_refresh_requests working.
Notice that the above code adds an additional column for the duration. This is to make it easier to visualize the results, like I covered in a previous post where I showed four places you can visualize data in Microsoft Fabric.
After running the code, you get the below results.
Because of the additional column you can easily visualize the results in a line chart.
As you can see, doing something like this helps you identify trends. Plus, you can include multiple semantic model refreshes to help quickly identify common patterns.
You can also extend the visualizations by adding additional libraries, which I covered in a previous post.
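For example, below is a minimal sketch using matplotlib (assuming it is available in your environment) that plots the refresh durations returned by list_refresh_requests, converting the time columns to datetimes first in case they come back as strings.

import sempy.fabric as fabric
import pandas as pd
import matplotlib.pyplot as plt

# Reuse the same semantic model name as in the earlier code
dataset = "SDS_ESGM_ESGDemo_DatasetForMetricsDashboard_DTST"
df = fabric.list_refresh_requests(dataset=dataset)

# Convert the columns to datetimes and calculate the duration of each refresh in seconds
df["Start Time"] = pd.to_datetime(df["Start Time"])
df["End Time"] = pd.to_datetime(df["End Time"])
df["Duration_in_Seconds"] = (df["End Time"] - df["Start Time"]).dt.total_seconds()

# Plot the refresh duration over time to spot any trends
plt.figure(figsize=(10, 4))
plt.plot(df["Start Time"], df["Duration_in_Seconds"], marker="o")
plt.xlabel("Start Time")
plt.ylabel("Duration (seconds)")
plt.title("Semantic model refresh durations")
plt.show()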
Final words about these four ways to monitor a semantic model refresh in Microsoft Fabric
I hope that sharing these four ways to monitor a semantic model refresh in Microsoft Fabric helps you realize there are multiple options available to you.
This knowledge can help you both in the real world and when preparing for the DP-700 exam.
Of course, if you have any comments or queries about this post feel free to reach out to me.
If you want to take things further, the Fabric toolbox repository also contains workspace monitoring dashboards: https://github.com/microsoft/fabric-toolbox/tree/main/monitoring/workspace-monitoring-dashboards