This is a guest post by a third-party author. John Preston is an experienced solution architect who enjoys development and who has spent time working on and open sourcing the automation of AWS architecture deployments, including the ECS ComposeX open source project. In this post, John talks about the motivation for this project, and how to build your infrastructure and deploy your services to AWS services using the Docker Compose file format.

Doing for Amazon ECS and Docker Compose what AWS SAM does for serverless applications on AWS Lambda

Docker Compose helps developers perform local integration testing between their microservices by connecting them logically to each other in a standardized document, but taking that into a cloud platform often requires a lot of time and effort. For example, developers would need to learn how Amazon Elastic Container Service (Amazon ECS) works, its task and service definitions, AWS Identity and Access Management (IAM), networking, and so on, which can mean getting cloud engineers involved to build and provision all of these resources.

In an era in which everything aims to be automated for the integration and deployment of applications, ECS ComposeX is an attempt to help developers abstract away all the complexity behind Infrastructure as Code (IaC) and deploy their services, along with the other AWS resources they require, into the AWS ecosystem. ECS ComposeX is an open source project created to help solve that problem: using AWS services to deploy microservices onto ECS. To test the project, I needed to continuously deploy various infrastructure configurations to confirm that its features work. Although each run only cost a few dollars, that adds up for an open source project I was working on alone, so I applied for AWS promotional credits for open source projects. The form was easy to fill in: I simply explained why I was requesting credits and which services I planned to use and wanted the credits to cover. That covers the financial side, but where I am most grateful to AWS is for this blogging opportunity to share my project with you.

What does ECS ComposeX do differently?

Where ECS ComposeX distinguishes itself from other tools is by embedding security for each service individually, so that developers only have to connect resources logically together, in the same way they would use links between microservices in their Docker Compose definition. Each microservice must be explicitly declared as a consumer of a resource to get access to it; otherwise it won’t be able to access the resource or other microservices. This is done simply with AWS IAM policies or security group ingress rules, where applicable. In a future release, ECS ComposeX will allow using AWS App Mesh for service-to-service communication. This gives cloud engineers peace of mind that the platform’s attack surface is limited in distributed environments, as isolation is achieved for each microservice individually.

That simplified way to define access between services and resources helps establish a shared-responsibility model between application engineers and cloud engineers. Application engineers must know what their application does and how its services interface with each other and with external services. This gives a sense of ownership to the maintainers of the Docker Compose file that defines the application stack’s services and resources, along with resource access and permissions.

Where would you use ECS ComposeX?

One of the biggest automation challenges these days is standardizing CI/CD pipelines and providing ephemeral environments that are identical to production environments. Developers have grown in maturity over the years, implementing tests for each new feature they develop and leveraging testing techniques such as Test Driven Development (TDD) or Behavior Driven Development (BDD). Continuous Integration pipelines and tests are streamlined, and the tooling makes it easy to run tests against the application itself.

The challenge remains, especially in distributed environments, to be able to perform integration testing. Docker Compose has helped developers bridge these gaps, but too often this only applies to their local environments, which rarely fully reflect what the cloud environment is like, and tests that pass on one’s laptop are simply not an acceptable release criterion.

Thus we need a solution to deploy an entire application stack as if it were production, run the application tests against it, and get the results. This is where software such as ECS ComposeX can help simplify and reduce the complexity of getting highly parallel deployments of ephemeral development environments.

Under the hood

I think AWS CloudFormation has always been the obvious choice for IaC on AWS, and templates I wrote more than five years ago still work today. Furthermore, the AWS CloudFormation registry brings opportunities to open up CloudFormation as an IaC service that spans beyond the scope of AWS services alone.

Amazon ECS is a robust Docker orchestration service built to deploy applications into AWS, and the features added to it over the past couple of years make it easier to deploy applications with performance, control, monitoring, logging, and security boundaries, all closely integrated with AWS constructs to enable architecture best practices.

To generate all the CloudFormation templates from the Docker Compose definition, I am using Python and an open source library called Troposphere.

I wanted to use simple, cloud-native tools, which is why choosing AWS CloudFormation and Amazon ECS was important; this allows people to tune their templates as they see fit and get support from all the AWS community and from AWS directly with their support offerings.

Getting started with ECS ComposeX

At the time of writing, ECS ComposeX and its examples are presented as a CLI tool that can be executed from a laptop or from a CI/CD pipeline, within something like AWS CodeBuild or CircleCI; however, because ECS ComposeX is a Python package, the objective is to allow embedding it as a Python library into another application, such as a Lambda function. To allow that, ECS ComposeX will be made available as a Lambda layer that anyone will be able to use in their Lambda functions.

Use case

The idea is to use ECS ComposeX to generate all the infrastructure needed for an integration environment, so in addition to deploying the services onto Amazon ECS, we also want to create the VPC that will be needed. Let’s walk through using it from the command line and then integrating it into AWS CodeBuild.

Installing ECS ComposeX

# Best to create a venv or to install for your user only
python3 -m venv venv
source venv/bin/activate
pip install ecs_composex
## Or, to install for your user only
pip install ecs_composex --user

You will need valid AWS credentials set in order to run the command, perform API calls, and upload the generated files to Amazon Simple Storage Service (Amazon S3).

I use awsume, which allows me to switch from one account to another:

# This will set all environment variables up
$(awsume -s Lambda)
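
Before generating anything, it can help to confirm that the session resolves to the expected account; this is just a sanity check using a standard AWS CLI call, and the profile name above is only the one from my example:

# Verify which account and role the current credentials resolve to
aws sts get-caller-identity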

Docker Compose file

Starting with the original Docker Compose file:

services:
  app01:
    environment:
      NAME: DISPATCH
    image: 373709687836.dkr.ecr.eu-west-1.amazonaws.com/blog-app-01@sha256:0bf30cce6c4a58a9d494cd2dcada2c102a9d92e449059ca3d2d7d8c15980cc55
    ports:
      - 80:80
    links:
      - app02
  app02:
    environment:
      NAME: backend
    ports:
      - 8080
    image: 373709687836.dkr.ecr.eu-west-1.amazonaws.com/blog-app-03@sha256:e30331fe53304b5fd7ecce973d8e2a1cf91030f362c5df45c5c063d127d138e9

Adding configuration

ECS ComposeX uses the configs section supported by Docker Compose to extend the configuration for services. For example, let’s define that app01 is public, accessible via an AWS Application Load Balancer, and that all services should otherwise be registered with AWS Cloud Map in an Amazon Virtual Private Cloud (Amazon VPC) namespace:

configs:
  app01:
    network:
      is_public: true
      use_alb: true
      use_nlb: false
  composex:
    network:
      use_cloudmap: true

Also, as often required for billing and automation purposes, we might want to define AWS tags on our resources so that we can track owners, points of contact, and so on. With ECS ComposeX, tagging is easily achieved by defining an x-tags section in the Docker Compose file:

x-tags:
  - name: costcentre
    value: LambdaMyAws
  - name: owner
    value: JohnPreston
  - name: mail
    value: john@lambda-my-aws.io

Adding an Amazon RDS cluster

We know that our applications will need a database, and we have chosen Amazon Aurora for that. Following the ECS ComposeX syntax reference for Amazon Relational Database Service (Amazon RDS), add the following to the Docker Compose file:

x-rds:
  dbA:
    Properties:
      Engine: aurora-mysql
      EngineVersion: 5.7.12
    Settings: {}
    Services:
      - name: app02
        access: RW

Adding SQS queues

Next, do the same to add two Amazon Simple Queue Service (Amazon SQS) queues. One of the queues, DLQ, is designated as the dead letter queue for the other, Queue01:

x-sqs:
  DLQ:
    Properties: {}
    Services:
      - access: RWMessages
        name: app02
  Queue01:
    Properties:
      RedrivePolicy:
        deadLetterTargetArn: DLQ
        maxReceiveCount: 1
    Services:
      - access: RWMessages
        name: app01
    Settings:
      EnvNames:
        - APP_QUEUE
        - AppQueue

Putting it all together:

---
# Docker compose file all together
configs:
  app01:
    network:
      is_public: true
      use_alb: true
      use_nlb: false
  composex:
    network:
      use_cloudmap: true
services:
  app01:
    environment:
      NAME: DISPATCH
    image: 373709687836.dkr.ecr.eu-west-1.amazonaws.com/blog-app-01@sha256:0bf30cce6c4a58a9d494cd2dcada2c102a9d92e449059ca3d2d7d8c15980cc55
    ports:
      - 80:80
    links:
      - app02
  app02:
    environment:
      NAME: backend
    ports:
      - 8080
    image: 373709687836.dkr.ecr.eu-west-1.amazonaws.com/blog-app-03@sha256:e30331fe53304b5fd7ecce973d8e2a1cf91030f362c5df45c5c063d127d138e9
x-rds:
  dbA:
    Properties:
      Engine: aurora-mysql
      EngineVersion: 5.7.12
    Settings: {}
    Services:
      - name: app02
        access: RW
x-sqs:
  DLQ:
    Properties: {}
    Services:
      - access: RWMessages
        name: app02
  Queue01:
    Properties:
      RedrivePolicy:
        deadLetterTargetArn: DLQ
        maxReceiveCount: 1
    Services:
      - access: RWMessages
        name: app01
    Settings:
      EnvNames:
        - APP_QUEUE
        - AppQueue
x-tags:
  - name: costcentre
    value: LambdaMyAws
  - name: owner
    value: JohnPreston
  - name: mail
    value: john@lambda-my-aws.io

The credentials are ready and the Docker Compose file contains the services and extra settings to indicate how the services connect to each other:

  • app01 will be publicly available via an application load balancer.
    • A security group (SG) for app01 will be attached to the container.
    • That SG will allow the application load balancer to send traffic to it.
  • app02 will be private in the VPC, with no inbound access from anywhere.
  • app01 has a link to app02, so inbound traffic from app01 to app02 will be allowed.
  • SQS queue DLQ: access granted to app02.
  • SQS queue Queue01: access granted to app01.
  • Amazon Aurora cluster dbA, with access granted to service app02. ECS ComposeX:
    • Creates the database credentials in AWS Secrets Manager.
    • Provides ingress access to app02.
    • Grants the execution role of app02 access to the secret.

At that point, you might think that with your existing CloudFormation templates or your Terraform modules you could build all of that, and you would be correct; however, you would have to write those templates or modules independently and then deploy the services yourself.

By using the Docker Compose format, developers do not face a big learning curve and can quickly deploy the infrastructure.

# Creating a folder for the output files, to keep a local copy of what is uploaded to S3
mkdir outputs
ecs_composex -f docker_compose.yml -d outputs -o demo.yml --create-vpc --create-cluster --single-nat
aws cloudformation create-stack --capabilities CAPABILITY_IAM CAPABILITY_AUTO_EXPAND --template-body file://outputs/demo.yml --stack-name demo

And that’s it. Now sit back and relax whilst AWS CloudFormation does its magic deploying the infrastructure.
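
If you would rather wait on the command line than in the console, the standard AWS CLI calls below will block until the stack settles and then print its outputs; the stack name is simply the one used in the create-stack call above:

# Wait until the stack reaches CREATE_COMPLETE (the command fails if the stack rolls back)
aws cloudformation wait stack-create-complete --stack-name demo
# List the outputs of the deployed stack
aws cloudformation describe-stacks --stack-name demo --query "Stacks[0].Outputs"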

Putting it to play in a pipeline

I have demonstrated that using a Docker Compose file and ECS ComposeX lets us do one-off deployments of the infrastructure. The next step is to automate this within the deployment pipeline. Using the Docker Compose file as the definition of the environment, we can configure triggers in our deployment pipeline that monitor for changes to that file and create a new deployment of the environment.

Stage 1: Detect file change

The usual suspect: change detection. Here we detect a change on the master branch of our repository to trigger the pipeline.

Stage 2: Generate the CFN template files

Using AWS CodeBuild, we install ecs_composex from PyPI and execute it against our Docker Compose file. We then gather the artifacts and upload them, encrypted, into Amazon S3. Download an example: buildspec.yml.
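
The linked buildspec.yml is the reference; as a rough sketch, and assuming the same flags as the earlier local invocation, the build phase boils down to the following commands (the file and folder names here mirror the one-off example and are otherwise assumptions):

# Install ECS ComposeX from PyPI inside the build container
pip install ecs_composex
# Generate the CloudFormation templates from the compose file pulled in by the pipeline
mkdir -p outputs
ecs_composex -f docker_compose.yml -d outputs -o demo.yml --create-vpc --create-cluster --single-nat
# The outputs folder is then declared as the CodeBuild artifact and encrypted in S3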

Stage 3: Deploy to a dev environment

AWS CodePipeline switches accounts and, using CloudFormation with pre-established settings, deploys an ephemeral environment. Stage 3.1, which isn’t shown here, would be to perform tests against the new platform; this is where you could run Cucumber or any other tests you would like in order to establish that the services are all functional in their environment and, consequently, that the environment works.

Stage 4: Destroy the dev environment

We performed the tests and do not need this environment anymore, so let’s destroy it.
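
In CLI terms, and assuming the dev stack carries the same name as in the earlier one-off deployment, the teardown boils down to a single CloudFormation call:

# Delete the ephemeral stack and wait until the teardown completes
aws cloudformation delete-stack --stack-name demo
aws cloudformation wait stack-delete-complete --stack-name demo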

Stage 5: Approval to deploy to production

Because many deployments to production require business approval, we add a manual approval step, the only one in the pipeline in fact, to allow moving on to the next stage.

Stage 6: Deploy to production

Using the same templates and the same settings, we deploy our services with their updated Docker images and any new configuration.

 

AWS CodeBuild and ECS ComposeX

The template for the above pipeline can be found on GitHub. The only thing done manually in this entire pipeline was to create the Amazon Simple Notification Service (Amazon SNS) subscription outside the template.
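
For reference, subscribing to an SNS topic is a one-liner with the AWS CLI; the topic ARN and email address below are placeholders, not values from the actual pipeline:

# Subscribe an email address to the pipeline notification topic (placeholder ARN and address)
aws sns subscribe --topic-arn arn:aws:sns:eu-west-1:123456789012:pipeline-notifications --protocol email --notification-endpoint you@example.com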

Note that in production accounts, it is common not to have the underlying network infrastructure in a single stack with the storage and the services all tied together. The coupling still needs to happen, though, and ECS ComposeX relies on getting the right information to pass on to the rest of the stacks.

And this is exactly what happened in my example. In the production account, I created the VPC using ecs_composex-vpc. You do not have to use this to create your VPC; it simply saves me time building one. In the repository storing the Docker Compose file, I have stored the VPC subnet IDs and the other inputs I need in order to create everything else successfully.

This is only a snapshot of how the CD part of the deployment works. There are many, many ways to think about and approach CI/CD pipelines, and you can find an example of a complete CI/CD pipeline implementation in my blog post “CICD Pipeline for multiple services on AWS ECS with ECS ComposeX.” In the future I plan to implement a discovery feature in ECS ComposeX, allowing users to do discovery based on tags to map subnets, VPCs, and so on.

What’s next

In this post I have explained the challenges with automating architecture deployments that developers face when writing applications, and how they motivated me to create ECS ComposeX to help solve some of them. I continue adding new features to ECS ComposeX and welcome your feedback and contributions; you can find the project on GitHub.


John Preston

John Preston’s early career quickly took him into working with AWS, and he rapidly grew passionate about best practices and automating architecture deployment. With a drive for automation, AWS, and open source, he started to work on ECS ComposeX to contribute back to these communities to help with AWS services adoption and to simplify implementation of best practices.

The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.