Manage and process your big data workflows with Amazon MWAA and Amazon EMR on Amazon EKS

Many customers gather large amounts of data generated from different sources, such as IoT devices and clickstream events from websites. To efficiently extract insights from the data, you have to perform various transformations and apply different business logic to your data. These processes require complex workflow management to schedule jobs and handle dependencies between them, along with monitoring to ensure that the transformed data is always accurate and up to date.

One popular orchestration tool for managing workflows is Apache Airflow. Airflow is a platform created by the community to programmatically author, schedule, and monitor workflows. At AWS re:Invent 2020, we announced Amazon Managed Workflows for Apache Airflow (Amazon MWAA), a managed orchestration service for Airflow that makes it easy to build data processing workflows on AWS.

To perform the various data transformations, you can use Apache Spark, a widely used cluster-computing software framework that is open source, fast, and general purpose. One of the most popular cloud-based solutions to process vast amounts of data is Amazon EMR.

You may be looking for a scalable, containerized platform to run your Apache Spark workloads. With the proliferation of Kubernetes, it’s common to migrate your workloads to the platform in order to orchestrate and manage your containerized applications and benefit from the scalability, portability, extensibility, and speed that Kubernetes promises to offer.

Amazon EMR on Amazon EKS and Amazon MWAA remove the cost and overhead of managing your own Airflow environment and let you use Amazon Elastic Kubernetes Service (Amazon EKS) to run your Spark workloads with the EMR runtime for Apache Spark. This post describes the architecture of the environment and walks through a demo that showcases the benefits these new AWS services offer.

In this post, we show you how to build a Spark data processing pipeline that uses Amazon MWAA as a primary workflow management service for scheduling the job. The compute layer is managed by the latest EMR on EKS deployment option, which allows you to configure and run Spark workloads on Amazon EKS.

In our demo, we use the New York Citi Bike dataset, which includes rider demographics and trip data. The data pipeline is triggered on a schedule to analyze ridership as new data comes in. The results provide up-to-date insights into ridership with regard to demographic groups and station utilization. This kind of information helps the city improve Citi Bike to meet the needs of New Yorkers.

Architecture overview

The following diagram shows the high-level architecture of our solution. First, we copy data from the Citi Bike data source Amazon Simple Storage Service (Amazon S3) bucket into the S3 bucket in our AWS account. Then we submit a Spark job through EMR on EKS that creates the aggregations from the raw data. The solution allows us to submit Spark jobs to the Amazon EKS cluster with Amazon Elastic Compute Cloud (Amazon EC2) node groups as well as with AWS Fargate.

[Architecture diagram]

Orchestrate the big data workflow

With Airflow, data engineers define Directed Acyclic Graphs (DAGs). Airflow’s rich scheduling support allows you to trigger this DAG on a monthly basis according to the Citi Bike dataset update frequency.

This DAG includes the following tasks:

  1. PythonOperator downloads an updated Citi Bike dataset from a public repository and uploads it to an S3 bucket.
  2. The EmrContainersStartJobRun operator submits a Spark job to an Amazon EKS cluster through the new Amazon EMR containers API. The Spark job converts the raw CSV files into Parquet format and performs analytics with SparkSQL.
  3. EmrContainersJobRunSensor monitors the Spark job for completion.

For more information about operators, see Amazon EMR Operators.
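
The following is a minimal sketch of what such a DAG can look like. The import path of the plugin's operator and sensor, their constructor argument names, and all resource identifiers are assumptions or placeholders rather than the exact code shipped with the plugin:

from datetime import datetime

from airflow import DAG
from airflow.operators.python_operator import PythonOperator

# Hypothetical import path -- the real classes ship inside
# emr_containers_airflow_plugin.zip; adjust to match the plugin's packaging.
from emr_containers_plugin.operators import EmrContainersStartJobRun
from emr_containers_plugin.sensors import EmrContainersJobRunSensor


def download_tripdata(**context):
    # Placeholder: download the latest Citi Bike archive from the public
    # repository and upload it to the analytics S3 bucket.
    pass


with DAG(
    dag_id="citibike_ridership_analytics_sketch",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@monthly",  # matches the dataset's monthly update cadence
    catchup=False,
) as dag:

    download = PythonOperator(
        task_id="download_tripdata",
        python_callable=download_tripdata,
        provide_context=True,
    )

    submit_job = EmrContainersStartJobRun(
        task_id="start_citibike_ridership_analytics",
        virtual_cluster_id="<EMR virtual cluster ID>",        # assumed argument names
        execution_role_arn="<EMR job execution role ARN>",
        release_label="emr-6.2.0-latest",
        job_driver={
            "sparkSubmitJobDriver": {
                "entryPoint": "s3://<your bucket>/citibike-spark-all.py",
            }
        },
    )

    watch_job = EmrContainersJobRunSensor(
        task_id="watch_citibike_ridership_analytics",
        virtual_cluster_id="<EMR virtual cluster ID>",
        job_id="{{ task_instance.xcom_pull(task_ids='start_citibike_ridership_analytics') }}",
    )

    download >> submit_job >> watch_job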

Submit the Apache Spark job

AWS has recently launched an Airflow plugin for EMR on EKS that you can use with Amazon MWAA by adding it to the custom plugin location or with a self-managed Airflow. The plugin includes an operator and a sensor that interact with the new Amazon EMR containers API, which was introduced as part of the new EMR on EKS deployment option.

The Amazon EKS namespace is registered with an Amazon EMR virtual cluster. To submit a Spark job to the virtual cluster, the Airflow plugin uses the start-job-run command offered by the Amazon EMR containers API.

For each job submitted to a virtual cluster, Amazon EMR creates a container with everything that is required and submits a Spark application to an Amazon EKS cluster through Spark on Kubernetes support.
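
The same API is also available directly through the AWS SDK. The following boto3 sketch shows roughly what happens per job run behind the operator; the virtual cluster ID, role ARN, and script location are placeholders, and the values shown for release label and Spark settings are illustrative only:

import boto3

emr_containers = boto3.client("emr-containers", region_name="us-east-1")

response = emr_containers.start_job_run(
    virtualClusterId="<virtual cluster ID>",            # returned when the namespace is registered
    name="citibike-ridership-analytics",
    executionRoleArn="<EMR job execution role ARN>",
    releaseLabel="emr-6.2.0-latest",
    jobDriver={
        "sparkSubmitJobDriver": {
            "entryPoint": "s3://<your bucket>/citibike-spark-all.py",
            "sparkSubmitParameters": "--conf spark.executor.instances=3 "
                                     "--conf spark.executor.memory=4G",
        }
    },
    configurationOverrides={
        "monitoringConfiguration": {
            "cloudWatchMonitoringConfiguration": {
                "logGroupName": "/emr-containers/jobs",
                "logStreamNamePrefix": "blog",
            }
        }
    },
)

print(response["id"])  # the job run ID that the sensor polls for completion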

The EmrContainersStartJobRun Airflow operator exposes the arguments of the start-job-run command and can override the default Spark properties such as driver memory or number of cores. You can configure these properties in the DAG by specifying the sparkSubmitParameters in the jobDriver or the configuration-overrides argument.
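
For example, instead of (or in addition to) sparkSubmitParameters, default Spark properties can be supplied through the configuration overrides. The following is a hedged sketch that assumes the spark-defaults classification, mirroring how application configuration works elsewhere in Amazon EMR; the values are illustrative only:

# Sketch: supplying Spark defaults through configuration overrides instead of
# sparkSubmitParameters (spark-defaults classification assumed).
CONFIGURATION_OVERRIDES = {
    "applicationConfiguration": [
        {
            "classification": "spark-defaults",
            "properties": {
                "spark.driver.memory": "2G",
                "spark.executor.cores": "2",
                "spark.executor.instances": "3",
            },
        }
    ]
}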

Choose Amazon EKS data plane options

The Amazon EKS data plane supports EC2 node groups as well as Fargate (for more information, see Amazon EKS on AWS Fargate Now Generally Available). In our demo, the Amazon EKS cluster contains an EC2 node group, a Fargate profile, and two Kubernetes namespaces. One of the namespaces is declared in the Fargate profile pod selector and the pods deployed into this namespace are launched on Fargate. The pods deployed into the other namespace are launched on the EC2 node group. We register each Amazon EKS namespace with a virtual cluster.

When you choose a data plane option, you should consider the following:

  • Compute provisioning
  • Storage provisioning
  • Job initialization time

Compute provisioning

With EC2 node groups, you can share EC2 resources like vCPUs and memory between different Spark jobs or with other workloads within the same Amazon EKS cluster.

Running a Spark job on Fargate removes the need to keep EC2 worker nodes running and provisions right-sized capacity as the job is submitted. Unlike with Amazon EC2, each Kubernetes pod is allocated its own virtual machine, so the pod runs on the dedicated resources of that VM. When the job is complete and the pod exits, you're no longer billed for those resources.

For EC2 node groups, you can also use Kubernetes node selectors to run a Spark driver and executor pods on a subset of available nodes such as running nodes in a single Availability Zone.
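
For example, assuming you steer pod placement with Spark on Kubernetes configuration properties rather than a pod template, the node selector can be added to sparkSubmitParameters (the zone value below is a placeholder):

# Pin driver and executor pods to nodes in one Availability Zone via a
# Kubernetes node selector expressed as Spark on Kubernetes properties.
spark_submit_parameters = (
    "--conf spark.kubernetes.node.selector.topology.kubernetes.io/zone=us-east-1a "
    "--conf spark.executor.instances=3"
)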

Storage provisioning

By default, with EC2 node groups Spark uses ephemeral storage for intermediate outputs and data that doesn’t fit in RAM. If you need more storage or need to share data across applications, you can use Kubernetes persistent volumes to mount a volume on Spark pods.
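
As a sketch, Spark on Kubernetes can mount a pre-created persistent volume claim through configuration properties; the claim name and mount path below are placeholders, and the PVC (and its storage class) must already exist in the cluster:

# Mount an existing PersistentVolumeClaim named "spark-scratch" on each executor pod.
# Equivalent driver-side properties use "driver" instead of "executor" in the key.
spark_submit_parameters = (
    "--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-scratch"
    ".options.claimName=spark-scratch "
    "--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-scratch"
    ".mount.path=/var/data/spark-scratch"
)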

With Fargate, local disk space is limited, which suits Spark workloads that don't need a significant amount of storage for shuffle operations.

Job initialization time

Using EC2 node groups allows the Spark jobs to start immediately as you submit them.

Spark jobs on Fargate add several minutes to the startup time. You may choose to run a Spark driver on Amazon EC2 for quicker startup time while running Spark executors on Fargate. In this case, configure the Fargate profile pod selector to include pods that match the Kubernetes label emr-containers.amazonaws.com/component: executor.
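
The following is a hedged boto3 sketch of that split; the profile name, pod execution role, and subnets are placeholders, and you can achieve the same result with eksctl or the console:

import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Only pods carrying the EMR on EKS executor label in this namespace are scheduled
# on Fargate; the driver pod, which lacks the label, stays on the EC2 node group.
eks.create_fargate_profile(
    clusterName="eks-cluster",
    fargateProfileName="spark-executors-on-fargate",
    podExecutionRoleArn="<Fargate pod execution role ARN>",
    subnets=["<private-subnet-id-1>", "<private-subnet-id-2>"],
    selectors=[
        {
            "namespace": "fargate-ns",
            "labels": {"emr-containers.amazonaws.com/component": "executor"},
        }
    ],
)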

Deploy the resources with AWS CloudFormation

This post provides an AWS CloudFormation template for a one-click deployment experience to set up all the necessary resources for our demo.

The following diagram illustrates the infrastructure of our solution.

[Infrastructure diagram]

The Amazon EKS EC2 node group is deployed into two private subnets in a VPC, and a NAT gateway sits in the public subnets. The Fargate profile is configured to launch pods into the same private subnets. An EC2 host instance preconfigured with the relevant tools is launched into one of the private subnets to help you configure the Amazon EKS cluster after the CloudFormation deployment.

The main stack creates an S3 bucket to store the plugins and Python libraries for the Amazon MWAA environment and to store the result files from the Citi Bike analytics workflow. The main stack then calls three nested stacks sequentially to create the following resources:

  • Amazon EKS stack – Creates a VPC with two public and two private subnets, and an Amazon EKS cluster with three worker nodes in the private subnets and a control plane with a public API server.
  • Prep stack – Creates a host EC2 instance in one of the two private subnets, preinstalled with the AWS Command Line Interface (AWS CLI), kubectl, and eksctl, which can only be accessed through Session Manager in AWS Systems Manager. The host EC2 instance also has scripts available to configure the kubectl client and register the Amazon EC2 and Fargate namespaces with EMR virtual clusters.
  • Amazon MWAA stack – Creates an Amazon MWAA environment with the EMR on EKS plugin and the appropriate Boto3 library installed.

To get started, you need to have an AWS account. If you don’t have one, sign up for one before completing the following steps:

  1. Sign in to the AWS Management Console as an AWS Identity and Access Management (IAM) power user, preferably an admin user.
  2. Choose Launch Stack.

This template has been tested in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) Regions. If you want to deploy in a Region other than these three, check service availability.

  3. Choose Next.
  4. For Stack name, enter a name for the stack, for example, mwaa-emr-on-eks-blog.

Don’t use a stack name longer than 20 characters.

  5. You can specify your choice of Amazon EKS cluster name, Amazon EKS node instance type, Fargate namespace, and Amazon MWAA environment name, or use the default values.
  6. Choose Create Stack.

The stack takes about 35 minutes to complete. After the stack is deployed successfully, navigate to the Outputs tab of the stack details in the main stack and save the key-value pairs for reference.


  7. On the Systems Manager console, choose Session Manager in the navigation pane.
  8. Choose Start session.
  9. Select the EC2 host instance named <stack name>-xxxx-PrepStack-xxxx-jumphost.
  10. Choose Start session.

A terminal of the host instance opens up.

  11. Export the AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID environment variables for the user that deployed the CloudFormation stack.

For more information, see Environment variables to configure the AWS CLI.

  12. At the prompt, enter the following command to configure the Kubernetes client (kubectl) and download the certificate from the Amazon EKS control plane for authentication:
    sh /tmp/kubeconfig.sh

You should see the script running with an Amazon EKS node availability status update.


Fargate logging is configured in this step too. You can download the kubeconfig.sh script for reference.

Next, we register an Amazon EKS namespace with an EMR virtual cluster.

  13. Issue a command by entering sh /tmp/emroneks.sh <namespace> <EMR virtual cluster name>. For example:
    sh /tmp/emroneks.sh ec2-ns ec2-vc

This command automates the steps required to set up EMR on EKS. It also generates a DAG file called citibike_all_dag.py and copies it to the dags folder of the S3 bucket provisioned by the CloudFormation stack.

The DAG file is picked up by the Airflow scheduler and displayed in the Airflow UI. Choose the output URL link in the terminal to go to the Amazon MWAA console and open the Airflow UI.


You can download the emroneks.sh script for reference. It takes about 10–15 seconds for the DAG to show up on the Airflow UI; refresh the browser if it doesn’t appear.


At this point, if you go to the Amazon EMR console and choose Virtual clusters, you should see a virtual cluster created accordingly.


You can also submit the job to an Amazon EKS namespace (fargate-ns) backed by Fargate. To do so, go back to the terminal session and enter the command sh /tmp/emroneks-fargate.sh <EMR Fargate virtual cluster name>. For example:

sh /tmp/emroneks-fargate.sh fargate-vc

This command registers the Fargate namespace on Amazon EKS with an EMR virtual cluster and uploads the DAG file to the S3 bucket for Airflow to pick up. You can download emroneks-fargate.sh for reference.

Build the environment manually in your account

Alternatively, you can build this solution manually in your AWS account by following the instructions in this section. You can skip this section and go directly to the next section if you want to start exploring the Airflow UI right away to run the DAG.

This post uses an AWS Cloud9 IDE, but you can use any machine with access to AWS. Use an IAM user or role with the AdministratorAccess policy in your AWS credentials chain. To use AWS Cloud9, complete the following steps:

  1. Set up a workspace.
  2. Create an IAM role with administrator access and either attach the IAM role to the EC2 instance or export your AWS credentials before running the commands.
  3. Set the environment variables that are used throughout this guide: the EKS cluster name, EMR virtual cluster names, and Kubernetes namespaces:
    export AWS_ACCESS_KEY_ID=<Your AWS access key>
    export AWS_SECRET_ACCESS_KEY=<Your AWS secret access key>
    export region=us-east-1
    export eks_cluster_name=eks-cluster
    export virtual_cluster_name_ec2=ec2-vc
    export virtual_cluster_name_fargate=fargate-vc
    export eks_fargate_namespace=fargate-ns
    export eks_ec2_namespace=ec2-ns
    

Now you’re ready to set up EMR on EKS.

  4. Install the AWS CLI (already preinstalled on AWS Cloud9) and kubectl:
    sudo curl --silent --location -o /usr/local/bin/kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.17.11/2020-09-18/bin/linux/amd64/kubectl
    sudo chmod +x /usr/local/bin/kubectl
    sudo pip install --upgrade awscli && hash -r
  5. Install eksctl:
    curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
    sudo mv /tmp/eksctl /usr/local/bin
  6. Create an Amazon EKS cluster with an EC2 node group, Fargate profile, and OIDC provider (this process takes about 15 minutes):
    cat << EOF > cluster-config.yaml
    ---
    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: ${eks_cluster_name}
      region: ${region}
    iam:
      withOIDC: true
    managedNodeGroups:
      - name: ng
        instanceType: m5.2xlarge
        desiredCapacity: 2
    fargateProfiles:
      - name: citibike-fargate
        selectors:
          - namespace: ${eks_fargate_namespace}
    EOF
    eksctl create cluster -f cluster-config.yaml

To make access easier in the manual steps, the nodes are placed in public subnets, but you can enable private networking in the cluster config to make them private.

  7. Create two namespaces:
    kubectl create ns ${eks_ec2_namespace}
    kubectl create ns ${eks_fargate_namespace}
  8. Create a Kubernetes role, bind the role to a Kubernetes user, and map the Kubernetes user to the service-linked role AWSServiceRoleForAmazonEMRContainers:
    eksctl create iamidentitymapping --cluster ${eks_cluster_name} --namespace ${eks_ec2_namespace} --service-name emr-containers --region ${region}
    eksctl create iamidentitymapping --cluster ${eks_cluster_name} --namespace ${eks_fargate_namespace} --service-name emr-containers --region ${region}
  9. Create a job execution role for Amazon EMR:
    wget https://aws-bigdata-blog.s3.amazonaws.com/artifacts/managing-bigdata-workflows-mwaa-emr-on-eks/json/emr-job-execution-policy.json
    wget https://aws-bigdata-blog.s3.amazonaws.com/artifacts/managing-bigdata-workflows-mwaa-emr-on-eks/json/emr-assume-policy.json
    aws iam create-role --role-name EMROnEKSExecutionRole --assume-role-policy-document file://emr-assume-policy.json
    aws iam put-role-policy --role-name EMROnEKSExecutionRole --policy-name Permissions-Policy-For-EMR-EKS --policy-document file://emr-job-execution-policy.json
  10. Update the trust policy for both namespaces:
    aws emr-containers update-role-trust-policy \
      --cluster-name ${eks_cluster_name} \
      --namespace ${eks_ec2_namespace} \
      --role-name EMROnEKSExecutionRole \
      --region ${region}
    aws emr-containers update-role-trust-policy \
      --cluster-name ${eks_cluster_name} \
      --namespace ${eks_fargate_namespace} \
      --role-name EMROnEKSExecutionRole \
      --region ${region}
  11. Create two virtual clusters:
    aws emr-containers create-virtual-cluster \
      --name ${virtual_cluster_name_ec2} \
      --region ${region} \
      --container-provider '{ "id": "'"$eks_cluster_name"'", "type": "EKS", "info": { "eksInfo": { "namespace": "'"$eks_ec2_namespace"'" } } }'
    aws emr-containers create-virtual-cluster \
      --name ${virtual_cluster_name_fargate} \
      --region ${region} \
      --container-provider '{ "id": "'"$eks_cluster_name"'", "type": "EKS", "info": { "eksInfo": { "namespace": "'"$eks_fargate_namespace"'" } } }'
  12. On the Amazon S3 console, create a new bucket called airflow-bucket-<your-account-id>-my-mwaa-env.

Make sure that the bucket has Block Public Access enabled.

  13. In AWS Cloud9, export the name of the new bucket:
    export account_id=`aws sts get-caller-identity --region $region --output text | awk '{print $1}'`
    export airflow_bucket=airflow-bucket-${account_id}-my-mwaa-env
  14. Populate the bucket with the Airflow custom operator plugin, the requirements.txt for dependencies to be installed on Airflow worker nodes, the dags folder with two DAGs, and the Spark application code.

Before we copy the DAG, we replace the placeholder IDs in the DAG template file with the actual bucket name and virtual cluster IDs for Fargate and EC2 namespaces.

export emr_execution_role_arn=arn:aws:iam::${account_id}:role/EMROnEKSExecutionRole
export virtual_cluster_id_ec2=`aws emr-containers list-virtual-clusters --region $region --output text | grep $eks_cluster_name -a1 | grep -i -w RUNNING | grep -w $virtual_cluster_name_ec2 | awk '{print $4}'`
export virtual_cluster_id_fargate=`aws emr-containers list-virtual-clusters --region $region --output text | grep $eks_cluster_name -a1 | grep -i -w RUNNING | grep -w $virtual_cluster_name_fargate | awk '{print $4}'`

aws s3 cp s3://aws-bigdata-blog/artifacts/managing-bigdata-workflows-mwaa-emr-on-eks/mwaa/requirements.txt s3://${airflow_bucket}/
aws s3 cp s3://aws-bigdata-blog/artifacts/managing-bigdata-workflows-mwaa-emr-on-eks/dag/citibike-spark-all.py s3://${airflow_bucket}/
aws s3 cp s3://aws-bigdata-blog/artifacts/managing-bigdata-workflows-mwaa-emr-on-eks/dag/citibike_all_dag.py citibike_all_dag.py.template
aws s3 cp s3://aws-bigdata-blog/artifacts/managing-bigdata-workflows-mwaa-emr-on-eks/mwaa/emr_containers_airflow_plugin.zip s3://${airflow_bucket}/

sudo sed -e s#AIRFLOW_BUCKET#$airflow_bucket# -e s#VIRTUAL_CLUSTER_ID#$virtual_cluster_id_ec2# -e s#EMR_EXECUTION_ROLE_ARN#$emr_execution_role_arn# -e s#Citibike_Ridership_Analytics#Citibike_Ridership_Analytics_EC2# citibike_all_dag.py.template > citibike_all_ec2_dag.py
sudo sed -e s#AIRFLOW_BUCKET#$airflow_bucket# -e s#VIRTUAL_CLUSTER_ID#$virtual_cluster_id_fargate# -e s#EMR_EXECUTION_ROLE_ARN#$emr_execution_role_arn# -e s#Citibike_Ridership_Analytics#Citibike_Ridership_Analytics_Fargate# citibike_all_dag.py.template > citibike_all_fargate_dag.py

aws s3 cp citibike_all_fargate_dag.py s3://${airflow_bucket}/dags/
aws s3 cp citibike_all_ec2_dag.py s3://${airflow_bucket}/dags/

At this point, your bucket should be populated with everything it needs.

  15. On the Amazon MWAA console, choose Create environment.
  16. For Name, enter my-mwaa-env.
  17. For S3 bucket, enter s3://<your_airflow_bucket_name>.
  18. For DAGs folder, enter s3://<your_airflow_bucket_name>/dags.
  19. For Plugins file, enter s3://<your_airflow_bucket_name>/emr_containers_airflow_plugin.zip.
  20. For Requirements file, enter s3://<your_airflow_bucket_name>/requirements.txt.
  21. Choose Next.
  22. On the Configure advanced settings page, under Networking, choose the VPC of the Amazon EKS cluster.

To find the VPC, navigate to the Amazon EKS console, choose your cluster, and then choose Configuration and Networking.

  23. Under Subnets, select the private subnets.
  24. Under Web server access, select Public Network.
  25. For Execution Role and Security Group, select Create New.
  26. Keep the remaining values at their defaults and choose Create new environment.

For more details, see Create an Amazon MWAA environment.

As part of the Amazon MWAA environment, an IAM role is created. You need to add permissions to this IAM role to access the public tripdata bucket and to invoke jobs on EMR on EKS. You can find the role on the Edit environment page under Permissions and Execution role.

  27. Replace the bucket name placeholder and add the following statements to the policy attached to the role:
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject*"
      ],
      "Resource": [
        "arn:aws:s3:::tripdata",
        "arn:aws:s3:::tripdata/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::<your bucket name>",
        "arn:aws:s3:::<your bucket name>/*"
      ]
    },
    {
      "Action": [
        "emr-containers:StartJobRun",
        "emr-containers:ListJobRuns",
        "emr-containers:DescribeJobRun",
        "emr-containers:CancelJobRun"
      ],
      "Resource": "*",
      "Effect": "Allow"
    }


Run the Citi Bike ridership analytics DAG on Airflow

On the Airflow UI, you can see two DAGs, Citibike_Ridership_Analytics and Citibike_Ridership_Analytics_Fargate, which were copied to the S3 bucket. Switch each DAG on or off by choosing On or Off.


Choose the DAG name and then choose Graph View to visualize the job workflow. Citi Bike publishes its monthly datasets in .zip format to a public S3 bucket (s3://tripdata), so Airflow spins up 12 parallel tasks to copy and unzip the files into the csv folder of the S3 bucket (each task handles one month of data), then kicks off a PySpark task that converts the CSV files into the Parquet columnar storage format (saved in the parquet folder). Finally, a task named start_citibike_ridership_analytics uses SparkSQL to query the dataset and saves the results to the results folder.
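
As an illustration of how the fan-out portion of such a DAG might be expressed (the task IDs and callables below are hypothetical; the real code is in citibike_all_dag.py):

from datetime import datetime

from airflow import DAG
from airflow.operators.python_operator import PythonOperator


def copy_and_unzip_month(month, **context):
    # Placeholder: copy the monthly archive from s3://tripdata, unzip it,
    # and upload the CSV to the csv/ prefix of the analytics bucket.
    pass


with DAG(
    dag_id="citibike_fan_out_sketch",
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:

    # Stands in for the EMR on EKS job submission task that runs after all copies finish.
    start_analytics = PythonOperator(
        task_id="start_citibike_ridership_analytics",
        python_callable=lambda: None,
    )

    # One copy-and-unzip task per month of 2020, all running in parallel.
    for month in range(1, 13):
        download = PythonOperator(
            task_id=f"copy_unzip_2020{month:02d}",
            python_callable=copy_and_unzip_month,
            op_kwargs={"month": f"{month:02d}"},
        )
        download >> start_analytics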

You can download the PySpark script citibike-spark-all.py for your reference.

Choose the Code tab to see the sample source code of the DAG. You can also download the DAG citibike_all_dag.py for reference.


By default, no schedule is set for the DAG, so we need to trigger it manually. Choose Trigger DAG to start the job flow. Switch to Tree View and monitor progress by choosing Refresh. The task squares change color to indicate different statuses (light green for running, dark green for success). You should see all squares in dark green when the job completes successfully.


While the start_citibike_ridership_analytics task is running, you can go back to the host instance terminal and enter the command watch kubectl get pod --namespace ec2-ns to see the Spark driver and executor pods spin up to process the data.


You can also tail the log of job progress with the following command:

kubectl logs -c spark-kubernetes-driver -n ec2-ns -f <spark driver pod name from above command>


These logs are also sent to the Amazon CloudWatch log group named /emr-containers/jobs. To view them, go to the CloudWatch console and choose Log groups in the navigation pane. Find /emr-containers/jobs in the list and choose it to see the detailed logs produced by this job run.


You can change the log group name by modifying the logGroupName of the CONFIGURATION_OVERRIDES section in the JOB_DRIVER definition (see the following code snippet of the DAG citibike_all_dag.py). You have to resubmit the DAG (by copying the modified DAG to the dags folder of the S3 bucket) for Airflow to pick it up.

JOB_DRIVER = {
    "sparkSubmitJobDriver": {
        "entryPoint": "s3://" + afbucket + "/citibike-spark-all.py",
        "entryPointArguments": [bucket],
        "sparkSubmitParameters": "--conf spark.executor.instances=3 --conf "
                                 "spark.executor.memory=4G --conf spark.driver.memory=2G --conf spark.executor.cores=2 "
                                 "--conf spark.sql.shuffle.partitions=60 --conf spark.dynamicAllocation.enabled=false"
    }
}

CONFIGURATION_OVERRIDES = {
    "monitoringConfiguration": {
        "cloudWatchMonitoringConfiguration": {
            "logGroupName": "/emr-containers/jobs",
            "logStreamNamePrefix": "blog"
        },
        "persistentAppUI": "ENABLED",
        "s3MonitoringConfiguration": {
            "logUri": "s3://" + afbucket + "/joblogs"
        }
    }
}

Finally, to view the analytics results, go to the Amazon S3 console and choose the bucket that was provisioned by CloudFormation, usually with the name pattern airflow-bucket-xxxxx-<stack name>-xxxxx. Go to the citibike folder, and then into the results subfolder. Choose the ridership subfolder to see a CSV file with the name pattern part-xxxxx.csv. This is the query result of total trips by month in 2020. You can see that March and April have the lowest numbers, when the city was hit hardest by the COVID-19 pandemic.


To view the Spark history server UI, navigate to the Virtual clusters section on the Amazon EMR console, choose the job, and choose View logs.


This launches the Spark history server web UI, which shows jobs, stages, Spark event logs, and other details. The Spark history server UI is available during job runs and is retained for 30 days after job creation.


Clean up the resources deployed by CloudFormation stack

You may want to clean up the demo environment and any resources you deployed when you're done. On the AWS CloudFormation console, select the stack and choose Delete. Make sure that the delete operation is successful and all the resources are removed.

This action also deletes the S3 bucket and any data in it. If you want to retain the data for future use, make a copy of the bucket before you delete it. However, the virtual clusters and related resources created by the scripts must be deleted with the commands in Step 3 of the following section.

Clean up the resources deployed by the manual procedure

  1. On the Amazon MWAA console, delete the environment you created (my-mwaa-env).
  2. On the Amazon S3 console, empty and delete the bucket you created (airflow-bucket-<your-account-id>-my-mwaa-env).
  3. In your terminal, run the following commands to delete the resources created by the manual steps:

aws iam delete-role-policy --role-name EMROnEKSExecutionRole --policy-name Permissions-Policy-For-EMR-EKS

aws iam delete-role --role-name EMROnEKSExecutionRole

aws emr-containers delete-virtual-cluster --id ${virtual_cluster_id_ec2} --region $region

aws emr-containers delete-virtual-cluster --id ${virtual_cluster_id_fargate} --region $region

eksctl delete cluster -f cluster-config.yaml

Conclusion

In this post, we showed how to orchestrate an ETL pipeline using Amazon MWAA with EMR on EKS. We created an Airflow DAG that uses a custom Airflow operator to trigger scheduled Spark jobs that process the data.

You can use the provided CloudFormation template or the manual procedure to get started running your Spark jobs on Amazon EKS today.


About the Authors

James Sun is a Senior Solutions Architect with Amazon Web Services. James has several years of experience in information technology. Prior to AWS, he held several senior technical positions at MapR, HP, NetApp, Yahoo, and EMC. He holds a PhD from Stanford University.

Dima Breydo is a Senior Solutions Architect with Amazon Web Services. He helps startups architect solutions in the cloud. He is passionate about container-based solutions and big data technologies.

Alon Gendler is a Senior Startup Solutions Architect with Amazon Web Services. He works with AWS customers to help them architect secure, resilient, scalable, and high-performance applications in the cloud.