Amazon Web Services Feed
Automate dataset monitoring in Amazon QuickSight

Amazon QuickSight is an analytics service that you can use to create datasets, perform one-time analyses, and build visualizations and dashboards. In an enterprise deployment of QuickSight, you can have multiple dashboards, and each dashboard can have multiple visualizations based on multiple datasets. Checking the status and latest refresh timestamp of every dataset can quickly become a management overhead.

This post demonstrates how to visualize datasets associated with all the dashboards in your account, with their latest refresh status and refresh time.

Solution overview

The following diagram illustrates the architecture of the solution.

The architecture includes the following steps:

  1. You create the datasets and tag them via an AWS Lambda function.
  2. A second function gets the refresh status from the tagged datasets.
  3. The function stores the refresh status in Amazon Simple Storage Service (Amazon S3).
  4. You query the refresh status in Amazon Athena.
  5. You visualize the refresh status in QuickSight.

A QuickSight deployment can have multiple dashboards, and each dashboard can have multiple datasets associated with it, so you can quickly end up with hundreds of datasets. It’s difficult to know whether all the underlying datasets are refreshing as required unless you check them manually. QuickSight does send an email notification to the dataset owner when a dataset refresh fails, but that only covers failures, one dataset at a time. This solution provides a holistic view of all datasets’ refreshes.

The aim is to create a dashboard that monitors the refreshes of your existing datasets and reports their latest status.

To implement the solution, you must create the following:

  • A Lambda execution role for QuickSight.
  • A scheduled Lambda function to tag the datasets.
  • A scheduled Lambda function to get the last refresh status of the datasets and store it in Amazon S3.
  • An external table in Athena on top of the S3 bucket.
  • A QuickSight dashboard using Athena as the data source, which provides the datasets’ last refresh status.

This post assumes that you have existing analyses and dashboards with numerous datasets.

Creating a Lambda execution role for QuickSight

Your first step is to create a Lambda execution role that allows you to perform tagging and create QuickSight analyses, datasets, and data sources, and to describe and update them. The following code is an example role policy (replace the bucket name with the bucket for storing the QuickSight ingestion results):

{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "quicksight:CreateDashboard", "quicksight:List*", "quicksight:Describe*", "quicksight:Tag*", "tag:GetResources" ], "Resource": "*" }, { "Sid": "VisualEditor1", "Effect": "Allow", "Action": "s3:PutObject", "Resource": [ "arn:aws:s3:::bucket-name-to-store-ingestion/*" ] } ]
}
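For the Lambda functions to assume this role, the role must also trust the Lambda service. If you create the role outside the Lambda console, attach the standard Lambda trust relationship shown below (and, optionally, the AWSLambdaBasicExecutionRole managed policy so the functions can write CloudWatch Logs):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}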

Creating a scheduled Lambda function to tag the datasets

The next step is to identify all the datasets required for your dashboard and tag them. It’s easier to do this right after you create the dataset. Complete the following steps:

  1. On the QuickSight console, choose Manage data.

  2. Choose your dataset and choose Edit dataset.

  3. Record the dataset ID from the URL (data-sets/<dataset ID>/prepare).

Alternatively, you can use a Lambda function to find the dataset names and IDs. See the following code (replace the AwsAccountId value with your own):

import boto3

client = boto3.client('quicksight')

def lambda_handler(event, context):
    # list_data_sets returns up to 100 datasets per call; if you have more,
    # use the NextToken in the response to page through the rest
    response = client.list_data_sets(
        AwsAccountId='1234567890',
        MaxResults=100
    )
    for r in response['DataSetSummaries']:
        dataset_info = {}
        dataset_info['id'] = r['DataSetId']
        dataset_info['name'] = r['Name']
        print(dataset_info['name'] + ':' + dataset_info['id'])

The function prints the name and ID of every dataset in your account (up to the MaxResults limit). Make sure to record the dataset IDs specific to your dashboard.

  4. Create your Lambda function.
  5. Tag the datasets per your individual dashboards. See the following code (use the target dashboard name and ID to create the tagging key, and replace the dataset_ids and account number with your own):
import boto3

client = boto3.client('quicksight')
acct_id = '123456678788'

def lambda_handler(event, context):
    dataset_ids = [
        '0e994f54-8d08-4b64-98ca-195cf7b46077',
        '16d5bf20-4415-42d1-b54c-9aba95b13d67',
        '5c5fd93a-0bb6-468f-a0c4-ff1c15597d20'
    ]
    for i in dataset_ids:
        response = client.tag_resource(
            ResourceArn='arn:aws:quicksight:us-east-1:{}:dataset/{}'.format(acct_id, i),
            Tags=[
                {
                    'Key': 'DashboardName',
                    'Value': 'QuickSight_refresh_status_demo'
                }
            ]
        )

You can do this for all your dashboards. The only limitation is that each dataset can carry only one DashboardName key-value pair, so a dataset can be associated with only one dashboard this way.

If you tag a dataset with the wrong key, you can remove the tag with an untag call; replace the ResourceArn with the specific dataset ARN. See the following code:

response = client.untag_resource(
    ResourceArn='arn:aws:quicksight:us-east-1:123456678788:dataset/794e28ae-2b89-49ef-b885-196c95bfd4f8',
    TagKeys=[
        'DashboardName'
    ]
)

Creating a Lambda function to get the last refresh status

The next step is to configure a Lambda function that gets the last refresh status of the tagged datasets and loads it into Amazon S3. You use the resourcegroupstaggingapi client to retrieve all the resources carrying a particular tag key; for this post, the key is DashboardName. From the ResourceTagMappingList in the response, you extract each dataset’s ID and ARN. You also get the data source ARN and name for each dataset associated with the particular key value. Finally, you list the ingestions for all the datasets and classify each one as one of the following:

  • Failed – The last refresh failed.
  • Did not run within last 24 hours – No ingestion ID in the last 24 hours (the time window is configurable). You use this status explicitly, even if the previous run before the last 24 hours succeeded or failed, to make sure the datasets adhere to a certain refresh schedule. For this post, you want the datasets to refresh once a day.
  • Error – No ingestion ID for more than 90 days.

See the following code (replace the placeholder text with your specific values):

import boto3
import jmespath
from botocore.exceptions import ClientError
from datetime import datetime

s3 = boto3.client('s3')
client = boto3.client('resourcegroupstaggingapi')
client1 = boto3.client('quicksight')
AwsAccountId = '123456678788'

def lambda_handler(event, context):
    items = []
    tagfilters = [
        {
            'Key': 'DashboardName',
            'Values': [
                'QuickSight_refresh_status_demo'  # add your dashboard name
            ]
        },
    ]
    # Get all resources tagged with the listed key-value pairs
    response = client.get_resources(
        TagFilters=tagfilters
    )
    resources = response['ResourceTagMappingList']
    for resource in resources:
        data = {}
        data['resource_ARN'] = resource['ResourceARN']
        data['dataset_id'] = data['resource_ARN'].split('/')[1]
        # Describe each dataset to get its name and data source
        response = client1.describe_data_set(
            AwsAccountId=AwsAccountId,
            DataSetId=data['dataset_id']
        )
        datasourcearn = jmespath.search('DataSet.PhysicalTableMap.*.*.DataSourceArn', response)
        # The search returns a nested list; extract the data source ID from the ARN
        datasourcearnid = str(datasourcearn[0]).split('/')[1].replace("']", '')
        data['DatasetName'] = response['DataSet']['Name']
        response = client1.describe_data_source(
            AwsAccountId=AwsAccountId,
            DataSourceId=datasourcearnid
        )
        data['DataSourceName'] = response['DataSource']['Name']
        for tag in resource['Tags']:
            if tag['Key'] == 'DashboardName':
                data['dashboard_name'] = tag['Value']
        response1 = client1.list_ingestions(
            DataSetId=data['dataset_id'],
            AwsAccountId=AwsAccountId,
            MaxResults=1  # latest ingestion only; increase this number if you want history
        )
        if response1.get('Ingestions'):
            for i in response1['Ingestions']:
                data['IngestionId'] = i['IngestionId']
                data['CreatedTime'] = i['CreatedTime']
                try:
                    response = client1.describe_ingestion(
                        DataSetId=data['dataset_id'],
                        IngestionId=data['IngestionId'],
                        AwsAccountId=AwsAccountId
                    )
                    if response:
                        data['Time'] = str(response['Ingestion']['CreatedTime'])
                        # Check the refresh status within the last 24 hours; change this per your requirement
                        if (datetime.utcnow() - response['Ingestion']['CreatedTime'].replace(tzinfo=None)).total_seconds() >= (24 * 60 * 60):
                            data['Status'] = 'Did not run within last 24 hrs'
                        else:
                            data['Status'] = response['Ingestion']['IngestionStatus']
                except ClientError as e:
                    data['Time'] = 'Failed, check if dataset is being used'
                    data['Status'] = e.response['Error']['Message']
        else:
            # No ingestion ID returned (for example, none within the last 90 days)
            data['Time'] = 'No ingestion found'
            data['Status'] = 'Error'
        items.append(data)
    rows = ['DashboardName,DatasetName,Status,LastRefreshTime,DataSourceName']
    csv_key = 'quicksight-dashboard-metada/report.csv'
    for data in items:
        rows.append(data['dashboard_name'] + ',' + data['DatasetName'] + ',' + data['Status'] + ',' + data['Time'] + ',' + data['DataSourceName'])
    values = '\n'.join(rows)
    response = s3.put_object(
        Body=values,
        Bucket='bucketname',  # replace with your bucket name
        Key=csv_key
    )

The last run’s status is now stored in a .csv file in the bucket specified in the Lambda function (see the following screenshot).
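For reference, the contents of report.csv look similar to the following (the rows here are illustrative, not actual output):

DashboardName,DatasetName,Status,LastRefreshTime,DataSourceName
QuickSight_refresh_status_demo,orders_dataset,COMPLETED,2020-06-01 10:02:15+00:00,athena-source
QuickSight_refresh_status_demo,customers_dataset,Did not run within last 24 hrs,2020-05-30 09:58:41+00:00,athena-source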

You can also schedule your function to run at a certain frequency, depending on when you want to check the status.
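For example, the following sketch creates a CloudWatch Events (Amazon EventBridge) rule that invokes the function once a day; the function name, rule name, and ARN are placeholders for your own values:

import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

function_arn = 'arn:aws:lambda:us-east-1:123456678788:function:qs-refresh-status'  # placeholder ARN

# Create (or update) a rule that fires once a day
rule = events.put_rule(
    Name='qs-refresh-status-daily',
    ScheduleExpression='rate(1 day)'
)

# Allow CloudWatch Events to invoke the function
lambda_client.add_permission(
    FunctionName='qs-refresh-status',
    StatementId='qs-refresh-status-daily-event',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn']
)

# Point the rule at the function
events.put_targets(
    Rule='qs-refresh-status-daily',
    Targets=[{'Id': 'qs-refresh-status-target', 'Arn': function_arn}]
)

You can achieve the same result from the CloudWatch console by adding a scheduled rule with the Lambda function as its target.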

Creating an external table in Athena on top of Amazon S3

Now you can create an external Athena table on top of the .csv file you stored in Amazon S3 and query it. Use the following table definition for reference (replace the location with the location of your S3 bucket):

CREATE EXTERNAL TABLE IF NOT EXISTS qs_meta_table (
  `DashboardName` string,
  `DatasetName` string,
  `Status` string,
  `LastRefreshTime` string,
  `DataSourceName` string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
  "separatorChar" = ","
)
LOCATION 's3://bucketname/quicksight-dashboard-metada/'
TBLPROPERTIES ("skip.header.line.count" = "1")

You can get the latest status of the dataset refreshes by querying the table with SQL in Athena.
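For example, a query like the following returns every dataset whose last refresh didn’t complete:

SELECT DashboardName, DatasetName, Status, LastRefreshTime
FROM qs_meta_table
WHERE Status <> 'COMPLETED'
ORDER BY DashboardName, DatasetName;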

Creating a QuickSight dashboard using Athena as the data source

To visualize this data and share it with others, build a dashboard on top of the data in QuickSight. The following screenshot shows the listed dashboards.

You first create a dataset for the Athena table.

  1. On the QuickSight console, choose Manage data.
  2. Choose Create dataset.

You use Athena as the source for your dataset. If you don’t have an existing Athena data source, you can create a new one. For instructions, see Creating a Data Source.
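If you prefer to script this step instead of using the console, a minimal sketch with the QuickSight API follows; the data source ID, name, and Athena workgroup are assumptions to replace with your own:

import boto3

client = boto3.client('quicksight')

response = client.create_data_source(
    AwsAccountId='123456678788',
    DataSourceId='qs-monitoring-athena',  # assumed ID
    Name='qs-monitoring-athena',          # assumed name
    Type='ATHENA',
    DataSourceParameters={
        'AthenaParameters': {
            'WorkGroup': 'primary'        # assumed workgroup
        }
    }
)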

  3. Choose the table you just created.

  4. Select Import to SPICE for quicker analysis.

Depending on the size of your dataset and expected latency, you can choose Directly query your data instead. If you use SPICE, remember to add a refresh schedule for the dataset.
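If you want to refresh SPICE programmatically rather than only on a schedule, you can trigger an ingestion with the CreateIngestion API. The following is a small sketch, with a placeholder dataset ID:

import uuid
import boto3

client = boto3.client('quicksight')

response = client.create_ingestion(
    AwsAccountId='123456678788',
    DataSetId='0e994f54-8d08-4b64-98ca-195cf7b46077',  # placeholder dataset ID
    IngestionId=str(uuid.uuid4())  # every ingestion needs a unique ID
)
print(response['IngestionStatus'])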

  5. Create an analysis from the dataset.

For this post, choose a table visual type and drag all the columns to the Value field well.

You can create the visualization as in the following screenshot, with conditional formatting to highlight failed and successful loads.

  6. To publish the dashboard, choose Share on the application bar of the analysis.
  7. Choose Publish dashboard.

  8. For Publish new dashboard as, enter a name for your dashboard.

You can now share the dashboard with end-users.

Conclusion

In this post, we described how to create a QuickSight dashboard that tracks the last refresh status of all the datasets in your account. The dashboard provides a single-pane view of the status of all the datasets and avoids the manual effort of opening and checking each one individually.

About the authors

Ginni Malik is an Associate Cloud Developer with AWS.

Rohan Jamadagni is a Solutions Architect with AWS.