Hydrate your data lake with SaaS application data using Amazon AppFlow
Organizations today want to make data-driven decisions. The data could lie in multiple source systems, such as line of business applications, log files, connected devices, social media, and many more. As organizations adopt software as a service (SaaS) applications, data becomes increasingly fragmented and trapped in different “data islands.” To make decision-making easier, organizations are building data lakes: centralized repositories that allow you to store all your structured and unstructured data at any scale. You can store your data as is, without having to structure it first, and run different types of analytics—from dashboards and visualizations to big data processing, ad hoc analytics, and machine learning (ML) to guide better decisions.
AWS provides services such as AWS Glue, AWS Lake Formation, AWS Database Migration Service (AWS DMS), and many third-party solutions on AWS Marketplace to integrate data from various source systems into the Amazon Simple Storage Service (Amazon S3) data lake. If you’re using SaaS applications like Salesforce, Marketo, Slack, and ServiceNow to run your business, you may need to integrate data from these sources into your data lake. You likely also want to easily integrate these data sources without writing or managing any code. This is precisely where you can use Amazon AppFlow.
Amazon AppFlow is a fully managed integration service that enables you to securely transfer data between SaaS applications like Salesforce, Marketo, Slack, and ServiceNow and AWS services like Amazon S3 and Amazon Redshift. With Amazon AppFlow, you can run data flows at nearly any scale at the frequency you choose—on a schedule, in response to a business event in real time, or on demand. You can configure data transformations such as data masking and concatenation of fields as well as validate and filter data (omitting records that don’t meet your criteria) to generate rich, ready-to-use data as part of the flow itself, without additional steps. Amazon AppFlow automatically encrypts data in motion, and optionally allows you to restrict data from flowing over the public internet for SaaS applications that are integrated with AWS PrivateLink, reducing exposure to security threats. For a complete list of all the SaaS applications that can be integrated with Amazon AppFlow, see Amazon AppFlow integrations.
In this post, we look at how to integrate data from Salesforce into a data lake and query the data via Amazon Athena. Amazon AppFlow recently announced multiple new capabilities such as availability of APIs and integration with AWS CloudFormation. We take advantage of these new capabilities and deploy the solution using a CloudFormation template.
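As a quick illustration of the API support, the following minimal sketch uses boto3 (the AWS SDK for Python) to list the Amazon AppFlow flows in your account; it assumes your AWS credentials and default Region are already configured:

```python
import boto3

# Create an Amazon AppFlow client using your default credentials and Region
appflow = boto3.client("appflow")

# List the flows that currently exist in the account
response = appflow.list_flows()
for flow in response.get("flows", []):
    print(flow["flowName"], flow.get("flowStatus"))
```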
Solution architecture
The following diagram depicts the architecture of the solution that we deploy using AWS CloudFormation.
As seen in the diagram, we use Amazon AppFlow to integrate data from Salesforce into a data lake on Amazon S3. We then use Athena to query this data with the table definitions residing in the AWS Glue Data Catalog.
Deploy the solution with AWS CloudFormation
We use AWS CloudFormation to deploy the solution components in your AWS account. Choose an AWS Region for deployment where the following services are available:
- Amazon AppFlow
- AWS Glue
- Amazon S3
- Athena
You need to meet the following prerequisites before deploying the solution:
- Have a Salesforce account with credentials authorized to pull data using APIs.
- If you’re deploying the stack in an account using the Lake Formation permission model, validate the following settings:
- The AWS Identity and Access Management (IAM) user used to deploy the stack is added as a data lake administrator under Lake Formation, or has IAM privileges to create databases in the AWS Glue Data Catalog.
- The Data Catalog settings under Lake Formation are configured to use only IAM access control for new databases and new tables in new databases. This makes sure that all access to the newly created databases and tables in the Data Catalog is controlled solely using IAM permissions. The following screenshot shows the Data catalog settings page on the Lake Formation console, where you can set these permissions.
These Lake Formation settings are required so that all permissions to the Data Catalog objects are controlled using IAM only.
Although you need these Lake Formation settings for the CloudFormation stack to deploy properly, in a production setting we recommend you use Lake Formation to govern access to the data in the data lake. For more information about Lake Formation, see What Is AWS Lake Formation?
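If you prefer to check or set these Data Catalog settings programmatically, the following is a minimal sketch using the Lake Formation API with boto3; granting ALL to the IAM_ALLOWED_PRINCIPALS group as the default for new databases and tables is the API equivalent of the console check boxes described above (treat this as an illustration and verify the result on the console):

```python
import boto3

lakeformation = boto3.client("lakeformation")

# Read the current Data Catalog settings for the account
settings = lakeformation.get_data_lake_settings()["DataLakeSettings"]

# Defaulting new databases and tables to ALL permissions for
# IAM_ALLOWED_PRINCIPALS is what "use only IAM access control" configures
iam_only = [{
    "Principal": {"DataLakePrincipalIdentifier": "IAM_ALLOWED_PRINCIPALS"},
    "Permissions": ["ALL"],
}]
settings["CreateDatabaseDefaultPermissions"] = iam_only
settings["CreateTableDefaultPermissions"] = iam_only

# Write the modified settings back
lakeformation.put_data_lake_settings(DataLakeSettings=settings)
```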
We now deploy the solution, which consists of the following components:
- An Amazon AppFlow flow to integrate Salesforce account data into Amazon S3
- An AWS Glue Data Catalog database
- An AWS Glue crawler to crawl the data pulled into Amazon S3 so that it can be queried using Athena
Before you deploy the stack, create the Salesforce connection that the flow uses:
- On the Amazon AppFlow console, on the Connections page, choose Create connection.
- For Connection name, enter a name for your connection.
- Choose Continue.
You’re redirected to the Salesforce login page, where you enter your Salesforce account credentials.
- Enter the appropriate credentials and, in the next step, grant OAuth2 access to the Amazon AppFlow client, after which a new connector profile is set up in your AWS account (you can verify it from code, as sketched below).
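If you want to confirm the new connection before deploying the stack, a minimal sketch with boto3 lists the Salesforce connector profiles in your account; the connection you just created should appear under the name you entered:

```python
import boto3

appflow = boto3.client("appflow")

# List Salesforce connector profiles registered in the account
response = appflow.describe_connector_profiles(connectorType="Salesforce")
for profile in response.get("connectorProfileDetails", []):
    print(profile["connectorProfileName"], profile["connectionMode"])
```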
- To deploy the remaining solution components, choose Launch Stack:
- For Stack name, enter an appropriate name for the CloudFormation stack.
- For Parameters, enter the name of the Salesforce connection you created.
- Choose Next.
- Follow through the CloudFormation stack creation wizard, leaving the rest of the default values unchanged.
- On the final page, select I acknowledge that AWS CloudFormation might create IAM resources with custom names.
- Choose Create stack.
- Wait for the stack status to change to CREATE_COMPLETE.
- On the Outputs tab of the stack, record the name of the S3 bucket.
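You can also read the bucket name from the stack outputs programmatically. The following is a minimal sketch; the stack name appflow-datalake is a placeholder for whatever name you chose:

```python
import boto3

cloudformation = boto3.client("cloudformation")

# "appflow-datalake" is a placeholder; use the stack name you entered earlier
stack = cloudformation.describe_stacks(StackName="appflow-datalake")["Stacks"][0]

# Print every output so you can pick out the S3 bucket name
for output in stack.get("Outputs", []):
    print(output["OutputKey"], "=", output["OutputValue"])
```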
Run the flow
The CloudFormation stack has deployed a flow named SFDCAccount. Open the flow to see the configuration. The flow has been configured to do the following:
- Pull the account object from your Salesforce account into an S3 bucket. The flow pulls certain attributes from the object in Parquet format.
- Mask the last five digits of the phone number associated with the Salesforce account.
- Build a validation on the Account ID field that ignores the record if the value is NULL.
Make sure that all the attributes pulled by the flow are part of your account object in Salesforce. Make any additional changes that you want to the flow and save it.
- Run the flow by choosing Run flow (or start it through the API, as sketched below).
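Because Amazon AppFlow exposes a public API, you can start the same on-demand run from code. The following is a minimal sketch with boto3, using the SFDCAccount flow name deployed by the stack:

```python
import boto3

appflow = boto3.client("appflow")

# Start an on-demand run of the flow deployed by the CloudFormation stack
response = appflow.start_flow(flowName="SFDCAccount")
print("Execution ID:", response.get("executionId"))
```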
- When the flow is complete, navigate to the S3 bucket created by the CloudFormation stack to confirm its contents.
The Salesforce account data is stored in Parquet format in the SFDCData/SFDCAccount/ folder in the S3 bucket.
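To confirm the contents from code instead of the console, a short sketch like the following lists the objects under that prefix; the bucket name is a placeholder for the one you recorded from the stack outputs:

```python
import boto3

s3 = boto3.client("s3")

# "your-appflow-bucket" is a placeholder for the bucket created by the stack
response = s3.list_objects_v2(
    Bucket="your-appflow-bucket",
    Prefix="SFDCData/SFDCAccount/",
)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```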
- On the AWS Glue console, run the crawler AppFlowGlueCrawler.
This crawler has been created by the CloudFormation stack and is configured to crawl the S3 bucket and create a table in the appflowblogdb database in the Data Catalog.
When the crawler is complete, a table named SFDCAccount exists in the appflowblogdb database.
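You can also start and monitor the crawler from code. The following minimal sketch polls until the crawler returns to the READY state and then lists the tables in the appflowblogdb database:

```python
import time
import boto3

glue = boto3.client("glue")

# Start the crawler created by the CloudFormation stack
glue.start_crawler(Name="AppFlowGlueCrawler")

# Wait until the crawler finishes and returns to the READY state
while glue.get_crawler(Name="AppFlowGlueCrawler")["Crawler"]["State"] != "READY":
    time.sleep(30)

# List the tables the crawler created in the appflowblogdb database
for table in glue.get_tables(DatabaseName="appflowblogdb")["TableList"]:
    print(table["Name"])
```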
- On the Athena console, run a simple SELECT query against the new SFDCAccount table (a sketch using the Athena API follows).
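A simple SELECT is enough to verify the data. The following sketch submits such a query through the Athena API with boto3, assuming the appflowblogdb database and sfdcaccount table created by the crawler; the query results location is a placeholder that must point to a bucket you own:

```python
import boto3

athena = boto3.client("athena")

# Run a simple query against the table created by the crawler
response = athena.start_query_execution(
    QueryString="SELECT * FROM sfdcaccount LIMIT 10",
    QueryExecutionContext={"Database": "appflowblogdb"},
    # Placeholder: Athena needs an S3 location it can write query results to
    ResultConfiguration={"OutputLocation": "s3://your-athena-results-bucket/"},
)
print("Query execution ID:", response["QueryExecutionId"])
```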
The output shows the data pulled by the Amazon AppFlow flow into the S3 bucket.
Clean up
When you’re done exploring the solution, complete the following steps to clean up the resources deployed by AWS CloudFormation:
- Empty the S3 bucket created by the CloudFormation stack.
- Delete the CloudFormation stack.
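If you prefer to clean up from code, the following sketch empties the bucket and then deletes the stack; both names are placeholders for the values you used earlier:

```python
import boto3

# Placeholders: the bucket name from the stack outputs and the stack name you chose
bucket_name = "your-appflow-bucket"
stack_name = "appflow-datalake"

# Empty the bucket first; CloudFormation cannot delete a non-empty bucket
boto3.resource("s3").Bucket(bucket_name).objects.all().delete()

# Delete the CloudFormation stack and everything it created
boto3.client("cloudformation").delete_stack(StackName=stack_name)
```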
Conclusion
In this post, we saw how you can easily set up an Amazon AppFlow flow to integrate data from Salesforce into your data lake. Amazon AppFlow allows you to integrate data from many other SaaS applications into your data lake. After the data lands in Amazon S3, you can take it further for downstream processing using services like Amazon EMR and AWS Glue. You can then use the data in the data lake for multiple analytics use cases ranging from dashboards to ad hoc analytics and ML.
About the Authors
Ninad Phatak is a Principal Data Architect at Amazon Development Center India. He specializes in data engineering and data warehousing technologies and helps customers architect their analytics use cases and platforms on AWS.
Vinay Kondapi is Head of Product for Amazon AppFlow. He specializes in application and data integration with SaaS products at AWS.