Setting up automated data quality workflows and alerts using AWS Glue DataBrew and AWS Lambda
Proper data management is critical to successful, data-driven decision-making. An increasingly large number of customers are adopting data lakes to realize deeper insights from big data. As part of this, you need clean and trusted data in order to gain insights that lead to improvements in your business. As the saying goes, garbage in, garbage out: the analysis is only as good as the data that drives it.
Organizations today have continuously incoming data that may develop slight changes in schema, quality, or profile over time. To ensure data is always of high quality, we need to consistently profile new data, verify that it meets our business rules, alert on problems in the data, and fix any issues. In this post, we use AWS Glue DataBrew, a visual data preparation tool that makes it easy to profile and prepare data for analytics and machine learning (ML). We demonstrate how to use DataBrew to publish data quality statistics and build a solution around it to automate data quality alerts.
Overview of solution
In this post, we walk through a solution that sets up a recurring profile job to determine data quality metrics and, using your defined business rules, report on the validity of the data. The following diagram illustrates the architecture.
The steps in this solution are as follows:
- Periodically send raw data to Amazon Simple Storage Service (Amazon S3) for storage.
- Run a scheduled DataBrew profile job that reads the raw data from Amazon S3 and determines data quality.
- Write the DataBrew profile job output to Amazon S3.
- Trigger an Amazon EventBridge event after job completion.
- Invoke an AWS Lambda function based on the event, which reads the profile output from Amazon S3 and determines whether the output meets data quality business rules.
- Publish the results to an Amazon Simple Notification Service (Amazon SNS) topic.
- Subscribe email addresses to the SNS topic to inform members of your organization.
Prerequisites
For this walkthrough, you should have the following prerequisites:
- An AWS account
Deploying the solution
For a quick start of this solution, you can deploy the provided AWS CloudFormation stack. This creates all the required resources in your account (us-east-1 Region). Follow the rest of this post for a deeper dive into the resources.
- Choose Launch Stack.
- In Parameters, for Email, enter an email address that can receive notifications.
- Scroll to the end of the form and select I acknowledge that AWS CloudFormation might create IAM resources.
- Choose Create stack.
It takes a few minutes for the stack creation to complete; you can follow progress on the Events tab.
- Check your email inbox and choose Confirm subscription in the email from AWS Notifications.
The default behavior of the deployed stack runs the profile on Sundays. You can start a one-time run from the DataBrew console to try out the end-to-end solution.
Setting up your source data in Amazon S3
In this post, we use an open dataset of New York City Taxi trip record data from The Registry of Open Data on AWS. This dataset represents a collection of CSV files defining trips taken by taxis and for-hire vehicles in New York City. Each record contains the pick-up and drop-off IDs and timestamps, distance, passenger count, tip amount, fare amount, and total amount. For the purpose of illustration, we use a static dataset; in a real-world use case, we would use a dataset that is refreshed at a defined interval.
You can download the sample dataset (us-east-1 Region) and follow the instructions for this solution, or use your own data that gets dumped into your data lake on a recurring basis. We recommend creating all your resources in the same account and Region. If you use the sample dataset, choose us-east-1.
Creating a DataBrew profile job
To get insights into the quality of our data, we run a DataBrew profile job on a recurring basis. This profile provides us with a statistical summary of our dataset, including value distributions, sparseness, cardinality, and type determination.
Connecting a DataBrew dataset
To connect your dataset, complete the following steps:
- On the DataBrew console, in the navigation pane, choose Datasets.
- Choose Connect new dataset.
- Enter a name for the dataset.
- For Enter your source from S3, enter the S3 path of your data source. In our case, this is s3://nyc-tlc/misc/.
- Select your dataset (for this post, we choose the medallion trips dataset FOIL_medallion_trips_june17.csv).
- Scroll to the end of the form and choose Create dataset.
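If you prefer to script this step instead of using the console, a minimal boto3 sketch might look like the following. The dataset name is a placeholder you can change; the bucket and key match the sample dataset above.

```python
import boto3

databrew = boto3.client('databrew')

# Register the sample CSV in the public nyc-tlc bucket as a DataBrew dataset.
# The dataset name is a placeholder; use any name that fits your conventions.
databrew.create_dataset(
    Name='nyc-taxi-medallion-trips',
    Format='CSV',
    Input={
        'S3InputDefinition': {
            'Bucket': 'nyc-tlc',
            'Key': 'misc/FOIL_medallion_trips_june17.csv'
        }
    }
)
```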
Creating the profile job
You’re now ready to create your profile job.
- In the navigation pane, choose Datasets.
- On the Datasets page, select the dataset that you created in the previous step. The row in the table should be highlighted.
- Choose Run data profile.
- Select Create profile job.
- For Job output settings, enter an S3 path as destination for the profile results. Make sure to note down the S3 bucket and key, because you use it later in this tutorial.
- For Permissions, choose a role that has access to your input and output S3 paths. For details on required permissions, see DataBrew permission documentation.
- On the Associate schedule drop-down menu, choose Create new schedule.
- For Schedule name, enter a name for the schedule.
- For Run frequency, choose a frequency based on the time and rate at which your data is refreshed.
- Choose Add.
- Choose Create and run job.
A profile job run on the sample data typically takes about 2 minutes to complete.
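If you would rather create and run the profile job programmatically, here is a minimal boto3 sketch under the same assumptions as the console steps above; the job name, dataset name, output bucket, and role ARN are placeholders for your own values.

```python
import boto3

databrew = boto3.client('databrew')

# Create the profile job. The output bucket/key and role ARN below are
# placeholders; the role must have access to the input and output S3 paths.
databrew.create_profile_job(
    Name='taxi-data-profile-job',
    DatasetName='nyc-taxi-medallion-trips',
    OutputLocation={'Bucket': 'taxi-data', 'Key': 'profile-out/'},
    RoleArn='arn:aws:iam::012345678901:role/DataBrewProfileRole'
)

# Start a one-time run; the recurring schedule can be attached on the console
# (or with create_schedule) as described above.
run = databrew.start_job_run(Name='taxi-data-profile-job')
print(run['RunId'])
```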
Exploring the data profile
Now that we’ve run our profile job, we can explore the insightful characteristics it exposes about our dataset. We can review the results of the profile through the visualizations on the DataBrew console or by reading the raw JSON results in our S3 bucket.
The profile provides analytics at both dataset-level and column-level granularity. Looking at our column analytics for String columns, we have the following statistics:
- MissingCount – The number of missing values in the column
- UniqueCount – The number of unique values in the column
- Datatype – The data type of the column
- CommonValues – The top 100 most common strings and their occurrences
- Min – The length of the shortest String value
- Max – The length of the longest String value
- Mean – The average length of the values
- Median – The middle value in terms of character count
- Mode – The most common String value length
- StandardDeviation – The standard deviation for the lengths of the String values
For numerical columns, we have the following:
- Min – The minimum value
- FifthPercentile – The value that represents the 5th percentile (5% of values fall below this and 95% fall above)
- Q1 – The value that represents the 25th percentile (25% of values fall below this and 75% fall above)
- Median – The value that represents the 50th percentile (50% of values fall below this and 50% fall above)
- Q3 – The value that represents the 75th percentile (75% of values fall below this and 25% fall above)
- NinetyFifthPercentile – The value that represents the 95th percentile (95% of values fall below this and 5% fall above)
- Max – The highest value
- Range – The difference between the highest and lowest values
- InterquartileRange – The range between the 25th percentile and 75th percentile values
- StandardDeviation – The standard deviation of the values (measures the variation of values)
- Kurtosis – The kurtosis of the values (measures the heaviness of the tails in the distribution)
- Skewness – The skewness of the values (measures symmetry in the distribution)
- Sum – The sum of the values
- Mean – The average of the values
- Variance – The variance of the values (measures divergence from the mean)
- CommonValues – A list of the most common values in the column and their occurrence count
- MinimumValues – A list of the 5 minimum values in the column and their occurrence counts
- MaximumValues – A list of the 5 maximum values in the column and their occurrence counts
- MissingCount – The number of missing values
- UniqueCount – The number of unique values
- ZerosCount – The number of zeros
- Datatype – The datatype of the column
- Mode – The most common value
Finally, at a dataset level, we have an overview of the profile as well as cross-column analytics:
- DatasetName – The name of the dataset the profile was run on
- Size – The size of the data source in KB
- Source – The source of the dataset (for example, Amazon S3)
- Location – The location of the data source
- CreatedBy – The ARN of the user that created the profile job
- SampleSize – The number of rows used in the profile
- MissingCount – The total number of missing cells
- DuplicateRowCount – The number of duplicate rows in the dataset
- StringColumnsCount – The number of columns that are of String type
- NumberColumnsCount – The number of columns that are of numeric type
- BooleanColumnsCount – The number of columns that are of Boolean type
- MissingWarningCount – The number of warnings on columns due to missing values
- DuplicateWarningCount – The number of warnings on columns due to duplicate values
- JobStarted – A timestamp indicating when the job started
- JobEnded – A timestamp indicating when the job ended
- Correlations – The statistical relationship between columns
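If you want to inspect the raw JSON yourself, the following sketch downloads the most recent profile output from Amazon S3 and prints its top-level keys. The bucket and prefix are placeholders for the output location you chose for the profile job, and the assumption that each run writes a .json object under that prefix should be verified against your own output.

```python
import json

import boto3

s3 = boto3.client('s3')

# Output location of the profile job (placeholders; use your own bucket/prefix).
bucket = 'taxi-data'
prefix = 'profile-out/'

# List the profile results and pick the most recently written JSON object.
objects = s3.list_objects_v2(Bucket=bucket, Prefix=prefix).get('Contents', [])
json_objects = [o for o in objects if o['Key'].endswith('.json')]
latest = max(json_objects, key=lambda o: o['LastModified'])

# Parse the profile and show the dataset-level statistics described above.
profile = json.loads(s3.get_object(Bucket=bucket, Key=latest['Key'])['Body'].read())
print(list(profile.keys()))
```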
By default, the DataBrew profile is run on a 20,000-row First-N sample of your dataset. If you want to increase the limit and run the profile on your entire dataset, send a request to databrew-feedback@amazon.com.
Creating an SNS topic and subscription
Amazon SNS allows us to deliver messages regarding the quality of our data reliably and at scale. For this post, we create an SNS topic and subscription. The topic provides us with a central communication channel that we can broadcast to when the job completes, and the subscription is then used to receive the messages published to our topic. For our solution, we use an email protocol in the subscription in order to send the profile results to the stakeholders in our organization.
Creating the SNS topic
To create your topic, complete the following steps:
- On the Amazon SNS console, in the navigation pane, choose Topics.
- Choose Create topic.
- For Type, select Standard.
- For Name, enter a name for the topic.
- Choose Create topic.
- Take note of the ARN in the topic details to use later.
Creating the SNS subscription
To create your subscription, complete the following steps:
- In the navigation pane, choose Subscriptions.
- Choose Create subscription.
- For Topic ARN, choose the topic that you created in the previous step.
- For Protocol, choose Email.
- For Endpoint, enter an email address that can receive notifications.
- Choose Create subscription.
- Check your email inbox and choose Confirm subscription in the email from AWS Notifications.
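The same topic and subscription can also be created with a few lines of boto3. The topic name below matches the example ARN used later in this post, and the email address is a placeholder; the recipient still has to confirm the subscription from the email they receive.

```python
import boto3

sns = boto3.client('sns')

# Create the topic and note its ARN for the Lambda function configuration.
topic = sns.create_topic(Name='databrew-profile-topic')
topic_arn = topic['TopicArn']

# Subscribe an email endpoint (placeholder address shown here).
sns.subscribe(
    TopicArn=topic_arn,
    Protocol='email',
    Endpoint='data-team@example.com'
)
print(topic_arn)
```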
Creating a Lambda function for business rule validation
The profile has provided us with an understanding of the characteristics of our data. Now we can create business rules that ensure we’re consistently monitoring the quality of our data.
For our sample taxi dataset, we validate the following:
- Make sure the pu_loc_id and do_loc_id columns meet a completeness rate of 90%.
- If more than 10% of the data in those columns is missing, notify our team that the data needs to be reviewed.
Creating the Lambda function
To create your function, complete the following steps:
- On the Lambda console, in the navigation pane, choose Functions.
- Choose Create function.
- For Function name, enter a name for the function.
- For Runtime, choose the language you want to write the function in. If you want to use the code sample provided in this tutorial, choose Python 3.8.
- Choose Create function.
Adding a destination to the Lambda function
You now add a destination to your function.
- On the Designer page, choose Add destination.
- For Condition, select On success.
- For Destination type, choose SNS topic.
- For Destination, choose the SNS topic from the previous step.
- Choose Save.
Authoring the Lambda function
For the function code, enter the following sample code or author your own function that parses the DataBrew profile job JSON and verifies it meets your organization’s business rules.
If you use the sample code, make sure to fill in the values of the required parameters to match your configuration:
- topicArn – The resource identifier for the SNS topic. You can find this on the Amazon SNS console’s topic details page (for example, topicArn = 'arn:aws:sns:us-east-1:012345678901:databrew-profile-topic').
- profileOutputBucket – The S3 bucket the profile job is set to output to. You can find this on the DataBrew console’s job details page (for example, profileOutputBucket = 'taxi-data').
- profileOutputPathKey – The S3 key the profile job is set to output to. You can find this on the DataBrew console’s job details page (for example, profileOutputPathKey = 'profile-out/'). If you’re writing directly to the root of an S3 bucket, keep this as an empty String (profileOutputPathKey = '').
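The following is a minimal sketch of such a function rather than a definitive implementation: it reads the latest profile JSON from Amazon S3, checks the completeness of the pu_loc_id and do_loc_id columns against the 90% rule, and publishes the result to the SNS topic. The keys sampleSize, columns, name, and missingValuesCount are assumptions about the profile JSON layout; adjust them to the actual field names in your profile output.

```python
import json

import boto3

# Required parameters (see the list above); fill in values to match your setup.
topicArn = 'arn:aws:sns:us-east-1:012345678901:databrew-profile-topic'
profileOutputBucket = 'taxi-data'
profileOutputPathKey = 'profile-out/'

# Business rule: these columns must be at least 90% complete.
MONITORED_COLUMNS = ['pu_loc_id', 'do_loc_id']
COMPLETENESS_THRESHOLD = 0.9

s3 = boto3.client('s3')
sns = boto3.client('sns')


def lambda_handler(event, context):
    # Read the most recent profile result written under the output prefix.
    objects = s3.list_objects_v2(
        Bucket=profileOutputBucket, Prefix=profileOutputPathKey
    ).get('Contents', [])
    json_objects = [o for o in objects if o['Key'].endswith('.json')]
    latest = max(json_objects, key=lambda o: o['LastModified'])
    profile = json.loads(
        s3.get_object(Bucket=profileOutputBucket, Key=latest['Key'])['Body'].read()
    )

    # NOTE: 'sampleSize', 'columns', 'name', and 'missingValuesCount' are assumed
    # key names; change them to match your actual profile JSON.
    sample_size = profile['sampleSize']
    failures = []
    for column in profile['columns']:
        if column['name'] not in MONITORED_COLUMNS:
            continue
        completeness = 1 - column['missingValuesCount'] / sample_size
        if completeness < COMPLETENESS_THRESHOLD:
            failures.append(f"{column['name']} is only {completeness:.1%} complete")

    if failures:
        message = 'Data quality validation FAILED:\n' + '\n'.join(failures)
    else:
        message = 'Data quality validation passed for columns: ' + ', '.join(MONITORED_COLUMNS)

    # Publish the result to the SNS topic so subscribers receive an email.
    sns.publish(TopicArn=topicArn, Subject='DataBrew data quality results', Message=message)
    return {'validationPassed': len(failures) == 0, 'message': message}
```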
Updating the Lambda function’s permissions
In this final step of configuring your Lambda function, you update your function’s permissions.
- In the Lambda function editor, choose the Permissions tab.
- For Execution role, choose the role name to navigate to the AWS Identity and Access Management (IAM) console.
- In the Role summary, choose Add inline policy.
- For Service, choose S3.
- For Actions, under List, choose ListBucket.
- For Actions, under Read, choose GetObject.
- In the Resources section, for bucket, choose Add ARN.
- Enter the bucket name you used for your output data in the create profile job step.
- In the modal, choose Add.
- For object, choose Add ARN.
- For bucket name, enter the bucket name you used for your output data in the create profile job step and append the key (for example, taxi-data/profile-out).
- For object name, choose Any. This provides read access to all objects in the chosen path.
- In the modal, choose Add.
- Choose Review policy.
- On the Review policy page, enter a name.
- Choose Create policy.
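Equivalently, the inline policy can be attached with boto3. The role name and policy name below are placeholders; the bucket and prefix match the profile job output location used in this post.

```python
import json

import boto3

iam = boto3.client('iam')

# ListBucket on the output bucket and GetObject on the profile output prefix,
# mirroring the console steps above.
policy_document = {
    'Version': '2012-10-17',
    'Statement': [
        {
            'Effect': 'Allow',
            'Action': 's3:ListBucket',
            'Resource': 'arn:aws:s3:::taxi-data'
        },
        {
            'Effect': 'Allow',
            'Action': 's3:GetObject',
            'Resource': 'arn:aws:s3:::taxi-data/profile-out/*'
        }
    ]
}

# Attach the policy inline to the Lambda execution role (placeholder names).
iam.put_role_policy(
    RoleName='databrew-profile-lambda-role',
    PolicyName='read-profile-output',
    PolicyDocument=json.dumps(policy_document)
)
```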
We return to the Lambda function to add a trigger later, so keep the Lambda service page open in a tab as you continue to the next step, adding an EventBridge rule.
Creating an EventBridge rule for job run completion
EventBridge is a serverless event bus service that we can configure to connect applications. For this post, we configure an EventBridge rule to route DataBrew job completion events to our Lambda function. When our profile job is complete, the event triggers the function to process the results.
Creating the EventBridge rule
To create our rule in EventBridge, complete the following steps:
- On the EventBridge console, in the navigation pane, choose Rules.
- Choose Create rule.
- Enter a name and description for the rule.
- In the Define pattern section, select Event pattern.
- For Event matching pattern, select Pre-defined pattern by service.
- For Service provider, choose AWS.
- For Service name, choose AWS Glue DataBrew.
- For Event type, choose DataBrew Job State Change.
- For Target, choose Lambda function.
- For Function, choose the name of the Lambda function you created in the previous step.
- Choose Create.
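The equivalent rule can also be created with boto3, as in the sketch below. The rule name and Lambda function ARN are placeholders, and the event pattern (source aws.databrew with the DataBrew Job State Change detail type) is our assumption of what mirrors the console selections above.

```python
import json

import boto3

events = boto3.client('events')

# Route DataBrew job state change events to the data quality Lambda function.
events.put_rule(
    Name='databrew-profile-job-state-change',
    EventPattern=json.dumps({
        'source': ['aws.databrew'],
        'detail-type': ['DataBrew Job State Change']
    })
)

events.put_targets(
    Rule='databrew-profile-job-state-change',
    Targets=[{
        'Id': 'data-quality-lambda',
        'Arn': 'arn:aws:lambda:us-east-1:012345678901:function:databrew-profile-validator'
    }]
)
```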
Adding the EventBridge rule as the Lambda function trigger
To add your rule as the function trigger, complete the following steps:
- Navigate back to your Lambda function configuration page from the previous step.
- In the Designer, choose Add trigger.
- For Trigger configuration, choose EventBridge (CloudWatch Events).
- For Rule, choose the EventBridge rule you created in the previous step.
- Choose Add.
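Adding the trigger on the console also grants EventBridge permission to invoke the function. If you scripted the rule instead, a sketch of the equivalent permission grant looks like this; the function name and rule ARN are placeholders.

```python
import boto3

lambda_client = boto3.client('lambda')

# Allow the EventBridge rule to invoke the Lambda function.
lambda_client.add_permission(
    FunctionName='databrew-profile-validator',
    StatementId='AllowEventBridgeInvoke',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn='arn:aws:events:us-east-1:012345678901:rule/databrew-profile-job-state-change'
)
```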
Testing your system
That’s it! We’ve completed all the steps required for this solution to run periodically. To give it an end-to-end test, we can run our profile job once and wait for the resulting email to get our results.
- On the DataBrew console, in the navigation pane, choose Jobs.
- On the Profile jobs tab, select the job that you created. The row in the table should be highlighted.
- Choose Run job.
- In the Run job modal, choose Run job.
A few minutes after the job is complete, you should receive an email notifying you of the results of your business rule validation logic.
Cleaning up
To avoid incurring future charges, delete the resources created during this walkthrough.
Conclusion
In this post, we walked through how to use DataBrew alongside Amazon S3, Lambda, EventBridge, and Amazon SNS to automatically send data quality alerts. We encourage you to extend this solution by customizing the business rule validation to meet your unique business needs.
About the Authors
Romi Boimer is a Sr. Software Development Engineer at AWS and a technical lead for AWS Glue DataBrew. She designs and builds solutions that enable customers to efficiently prepare and manage their data. Romi has a passion for aerial arts; in her spare time, she enjoys fighting gravity and hanging from fabric.
Shilpa Mohan is a Sr. UX designer at AWS and leads the design of AWS Glue DataBrew. With over 13 years of experience across multiple enterprise domains, she currently crafts products for database, analytics, and AI services at AWS. Shilpa is a passionate creator; she spends her time creating anything from content and photographs to crafts.