Simplifying Kubernetes configurations using AWS Lambda

In this blog post, we explain how to create a multi-stage Dockerfile that packages eksctl, kubectl, and the tooling needed to manage the aws-auth ConfigMap. This allows you to call Kubernetes APIs to create and manage resources through a unified control plane. You interact with the Kubernetes API using Python, and the ConfigMap is generated from a Jinja2 template. The result simplifies the user experience: you can manage a Kubernetes cluster without installing multiple tools on your local developer machine. It also removes the need for additional domain-specific language knowledge and reduces the dependencies and packages installed on a local machine.

The problem

In today’s Kubernetes ecosystem, features are siloed across multiple tool sets and platforms, increasing the complexity for customers deciding between technologies. Developers face an ever-increasing demand to learn new domain-specific languages rather than focusing on their end products. The solution described in this post can be applied to many Kubernetes configurations; here, we explore one use case: updating the Amazon Elastic Kubernetes Service (Amazon EKS) aws-auth ConfigMap.

Use case

Adding users and roles to the existing Kubernetes configmap.yml at scale:

Method 1:

Develop a script (for example, in Python) that generates a configmap.yml from a predefined template, such as Jinja2, and then apply it to Kubernetes with kubectl (a sketch of such a script follows the command below):

kubectl apply -f <configmap.yml>
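As a rough illustration of Method 1, here is a minimal sketch, assuming a template shaped like the aws-auth.yaml.jinja shown later in this post; the file paths, variable names, and example ARN are illustrative only:

import jinja2

# Role mappings to render into the ConfigMap (example values only).
mappings = [
    {
        "arn": "arn:aws:iam::111122223333:role/dev-role",
        "username": "dev-user",
        "groups": ["system:masters"],
    }
]

# Render the Jinja2 template into a ConfigMap manifest.
env = jinja2.Environment(loader=jinja2.FileSystemLoader("app/templates"))
rendered = env.get_template("aws-auth.yaml.jinja").render(role_mappings=mappings)

with open("configmap.yml", "w") as f:
    f.write(rendered)

# Then apply it manually: kubectl apply -f configmap.yml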

Method 2:

Use eksctl to add one identity mapping at a time with the following command:

eksctl create iamidentitymapping \
--cluster <cluster-name> \
--arn <role-arn> \
--username <username> \
--group <group-name>

Issues

  1. Both methods require manual intervention.
  2. Currently there is no Terraform or AWS CloudFormation support for modifying or updating the underlying configuration of Amazon EKS clusters.
  3. Teams must create, and grant human access to, a privileged AWS Identity and Access Management (IAM) user or role in order to update cluster permissions.
  4. These privileged users create a bottleneck for updating the cluster.
  5. There is potential for human error to misconfigure cluster permissions.

Recommended solution

Let’s walk through how we can simplify this tool set into a single API call designed for the environment, using open source tools such as kubectl, eksctl, the AWS Command Line Interface (AWS CLI), Python, Jinja2, and custom Docker container images.

The solution uses container images for AWS Lambda, built as multi-stage Docker builds on a lightweight operating system image such as Alpine, which reduces the attack surface by including only what is needed to run the code. This method also allows the Lambda builds to be declared as infrastructure as code (IaC) and therefore version controlled, as compared to using an AWS Lambda layer.

This Lambda function automates what you would otherwise do manually by running the following command against a live Kubernetes cluster.

kubectl edit configmap -n kube-system aws-auth

Figure: A JSON payload invokes AWS Lambda, which updates the aws-auth ConfigMap on an Amazon EKS cluster inside a VPC.

Prerequisites

  • Docker Desktop installed and running locally for packaging the container image.
  • AWS CLI locally installed for programmatic interaction with AWS.
  • The following AWS resources are required. Refer to the GitHub repository for all code samples.

AWS resources:

  1. AWS IAM resources:
    • Lambda role
    • Lambda permissions for Amazon EKS
  2. Amazon Elastic Container Registry (Amazon ECR)
  3. Amazon EKS cluster
  4. Lambda role authorized for Amazon EKS administration

AWS CLI commands for creating the prerequisites

1a. Create Lambda role:

aws iam create-role \
--role-name <role-name> \
--assume-role-policy-document file://iam/lambda-trust-policy.json

1b. Create IAM policy:

aws iam create-policy \
--policy-name <role-name>-policy \
--policy-document file://iam/lambda-role-permission.json

Note: If you receive the error, “An error occurred (MalformedPolicyDocument) when calling the CreatePolicy operation: The policy failed legacy parsing,” then update the lambda-role-permission.json with your account IDs.

Add basic Lambda execution role:

aws iam attach-role-policy \
--role-name <role-name> \
--policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

Note: View the code within the GitHub repository. AWS best practices recommend reducing the IAM policy to meet your company’s requirements. These permissions are for demonstration only and are not production ready.

2a. Amazon Elastic Container Registry:

aws ecr create-repository \
--repository-name <repository-name>

Authorization to push Docker images to Amazon ECR:

aws ecr get-login-password \
--region <region> | docker login \
--username AWS \
--password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com

3a. Create Elastic Kubernetes Cluster:

eksctl create cluster \
--name demo-eks-cluster \
--version 1.20 \
--nodegroup-name demo-managed-node-group \
--node-type t3.medium \
--nodes 2 \
--region <region> \
--enable-ssm

3b. Authorizing the Lambda role to administer the Amazon EKS cluster:

kubectl edit -n kube-system configmap/aws-auth

3c. Add the following entry under mapRoles:

- rolearn: <lambda-role-arn>
  username: admin
  groups:
    - system:masters

GitHub repository contents

git clone https://github.com/aws-samples/kubernetes-configurations-apis-lambda

File directory layout

The file directory layout is constructed as follows, with four directories and eight files:

.
├── Dockerfile
├── LICENSE
├── README.md
├── app
│   ├── app.py
│   └── templates
│       └── aws-auth.yaml.jinja
├── events
│   └── example-event.json
└── iam
    ├── lambda_role_permission.json
    └── lambda_trust_policy.json
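The templates/aws-auth.yaml.jinja file drives the ConfigMap rendering. Refer to the repository for the actual template; a hypothetical shape, to show the idea, might look like this:

# Hypothetical sketch only; the real template lives in the repository.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    {%- for m in role_mappings %}
    - rolearn: {{ m.arn }}
      username: {{ m.username }}
      groups:
    {%- for g in m.groups %}
        - {{ g }}
    {%- endfor %}
    {%- endfor %}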

Dockerfile

The Dockerfile layout is as follows:

  • Lines 1–6: Declare global arguments for all build stages; customize these based on your needs.
  • Lines 8–15: Create a common base stage (compiler) with the libraries required by the toolset.
  • Lines 17–29: Create a builder stage on top of the compiler stage and install the build libraries needed to compile the Python dependencies on Alpine.
  • Lines 34–66: Start the final stage from a fresh copy of the compiler stage and install awscliv2, eksctl, and kubectl.
  • Line 68: Copy the built Python dependencies from the builder stage into the final stage.
  • Lines 70–71: Configure the image to run the Python handler through the Lambda runtime interface client, which calls Kubernetes APIs to configure the aws-auth ConfigMap.
01. ARG DIGEST="sha256:027ffb620da90fc79e1b62843b846400ac50b9bc8d87c53d7ba6d6b92b6f2b1d"
02. ARG DISTRO_VERSION="3.13"
03. ARG FUNCTION_DIR="/app"
04. ARG GLIBC_VER="2.31-r0"
05. ARG RUNTIME_VERSION="3.9.4"
06. ARG USER="ekscontainer"
07.
08. FROM python:${RUNTIME_VERSION}-alpine${DISTRO_VERSION}@${DIGEST} AS compiler
09.
10. RUN apk update && \
11.     apk upgrade && \
12.     apk add --no-cache \
13.     libstdc++ \
14.     binutils \
15.     curl
16.
17. FROM compiler AS builder
18. ARG FUNCTION_DIR
19. WORKDIR ${FUNCTION_DIR}
20.
21. RUN apk add --no-cache \
22.     build-base \
23.     libtool \
24.     autoconf \
25.     automake \
26.     libexecinfo-dev \
27.     make \
28.     cmake \
29.     libcurl
30.
31. COPY app ${FUNCTION_DIR}
32. RUN python3 -m pip install --target ${FUNCTION_DIR} -r ${FUNCTION_DIR}/requirements.txt
33.
34. FROM compiler
35. ARG FUNCTION_DIR
36. ARG GLIBC_VER
37. ARG USER
38. WORKDIR $FUNCTION_DIR
39.
40. RUN curl -sL https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub -o /etc/apk/keys/sgerrand.rsa.pub && \
41.     curl -sLO https://github.com/sgerrand/alpine-pkg-glibc/releases/download/${GLIBC_VER}/glibc-${GLIBC_VER}.apk && \
42.     curl -sLO https://github.com/sgerrand/alpine-pkg-glibc/releases/download/${GLIBC_VER}/glibc-bin-${GLIBC_VER}.apk && \
43.     curl -sLO https://github.com/sgerrand/alpine-pkg-glibc/releases/download/${GLIBC_VER}/glibc-i18n-${GLIBC_VER}.apk && \
44.     apk add --no-cache \
45.     glibc-${GLIBC_VER}.apk \
46.     glibc-bin-${GLIBC_VER}.apk \
47.     glibc-i18n-${GLIBC_VER}.apk && \
48.     /usr/glibc-compat/bin/localedef -i en_US -f UTF-8 en_US.UTF-8 && \
49.     curl -sL https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip -o awscliv2.zip && \
50.     unzip awscliv2.zip && \
51.     ./aws/install && \
52.     curl -sL "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp && \
53.     mv /tmp/eksctl /usr/local/bin && \
54.     curl -sLO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && \
55.     mv ./kubectl /usr/local/bin/kubectl && \
56.     chmod a+x /usr/local/bin/kubectl && \
57.     adduser -D ${USER} && \
58.     rm -rf \
59.     awscliv2.zip \
60.     aws \
61.     /usr/local/aws-cli/v2/*/dist/aws_completer \
62.     /usr/local/aws-cli/v2/*/dist/awscli/data/ac.index \
63.     /usr/local/aws-cli/v2/*/dist/awscli/examples \
64.     glibc-*.apk && \
65.     apk --no-cache del && \
66.     rm -rf /var/cache/apk/*
67.
68. COPY --from=builder ${FUNCTION_DIR} ${FUNCTION_DIR}
69.
70. CMD [ "app.handler" ]
71. ENTRYPOINT ["/usr/local/bin/python", "-m", "awslambdaric"]

API call made

The API call made to the Lambda function has the following format.

Allowed RequestTypes are:

  • Create/Update
  • Delete
{ "RequestType" : "Create", "ResourceProperties" : { "ClusterName": "", "RoleMappings": [ { "arn": "", "username": "system:node:{{EC2PrivateDNSName}}", "groups": [ "system:bootstrappers", "system:nodes" ] } ... ] }
}
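To make the flow concrete, here is a minimal sketch of how a handler might process this event. This is not the repository’s exact app.py; the /tmp kubeconfig path, template lookup, and return shape are assumptions:

import subprocess
import jinja2

KUBECONFIG = "/tmp/kubeconfig"  # /tmp is the only writable path in Lambda

def handler(event, context):
    props = event["ResourceProperties"]

    # Write cluster credentials with the bundled AWS CLI.
    subprocess.run(
        ["aws", "eks", "update-kubeconfig",
         "--name", props["ClusterName"], "--kubeconfig", KUBECONFIG],
        check=True,
    )

    if event["RequestType"] in ("Create", "Update"):
        # Render the aws-auth ConfigMap from the Jinja2 template.
        env = jinja2.Environment(loader=jinja2.FileSystemLoader("templates"))
        manifest = env.get_template("aws-auth.yaml.jinja").render(
            role_mappings=props["RoleMappings"]
        )
        # Apply the rendered manifest with the bundled kubectl binary.
        subprocess.run(
            ["kubectl", "--kubeconfig", KUBECONFIG, "apply", "-f", "-"],
            input=manifest.encode(),
            check=True,
        )
    # "Delete" handling (removing the mappings) is omitted for brevity.

    return {"Status": "SUCCESS"}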

Actions

Create

To deploy, build, tag, and push the container image, and then create the function, as described in the Lambda documentation on creating container images for Lambda:

docker build -t blog-example:1.0 .
docker tag blog-example:1.0 <account-id>.dkr.ecr.<region>.amazonaws.com/<repository-name>:latest
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/<repository-name>:latest
aws lambda create-function \
--function-name <function-name> \
--package-type Image \
--code ImageUri=<account-id>.dkr.ecr.<region>.amazonaws.com/<repository-name>:latest \
--role <lambda-role-arn>
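Once the function exists, one way to test it (a suggested step, using the sample event shipped in the events directory) is to invoke it from the AWS CLI; the --cli-binary-format flag lets AWS CLI v2 accept a raw JSON payload:

aws lambda invoke \
--function-name <function-name> \
--payload file://events/example-event.json \
--cli-binary-format raw-in-base64-out \
response.json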

Verify

To verify the updates to the configmap, run the following command:

kubectl edit configmap -n kube-system aws-auth

and verify that the additional role mappings are added to your configmap.
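If you prefer a read-only check that does not open an editor, the same data can be printed with:

kubectl get configmap -n kube-system aws-auth -o yaml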

Clean up

To clean up, run the following commands.

  1. To delete the Lambda function:
aws lambda delete-function \
--function-name <function-name>
  2. To delete the Amazon ECR repository and its images:
aws ecr delete-repository \
--repository-name <repository-name> --force
  3. To delete the Lambda IAM role:
aws iam delete-role \
--role-name <role-name>
  4. To delete the IAM policy:
aws iam delete-policy \
--policy-arn <policy-arn>
  5. To delete the Amazon EKS cluster:
eksctl delete cluster \
--name demo-eks-cluster \
--region <region>

Summary

You now have a way to update your Amazon EKS clusters dynamically using a Lambda function rather than installing kubectl or eksctl on a local machine. Additionally, the container image builds are declared as infrastructure as code and kept under version control.