Open source mobile core network implementation on Amazon Elastic Kubernetes Service

As introduced in the Amazon Web Services (AWS) whitepapers Carrier-grade Mobile Packet Core Network on AWS and 5G Network Evolution with AWS, implementing a 4G Evolved Packet Core (EPC) and 5G Core (5GC) on AWS can bring significant value and benefits, such as scalability, flexibility, and programmable orchestration, as well as automation of the underlying infrastructure layer.

This blog post focuses on practical implementation steps for creating a 4G core network using the open source project Open5gs.

In addition to showing the benefit of easy installation steps, we introduce how the following AWS services can help the mobile packet network operate efficiently in the cloud environment: Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Route 53 (DNS service), Amazon DocumentDB, Amazon Elastic Container Registry (Amazon ECR), AWS CloudFormation, Amazon CloudWatch, and AWS Lambda.

This generalized example of an open source-based 4G core network implementation provides guidance for mobile network function developers. It can also serve as a reference for developers of orchestration, service assurance, and Operation Support System (OSS) solutions that require a working example of a mobile packet core network running on AWS.

Time to read: About 10-15 minutes
Time to complete: About 45-60 minutes
Cost to complete (estimated): $489 (for one month, based on On-Demand instance cost)
Learning level: Advanced (300)
Services used: AWS CloudFormation, Amazon Elastic Kubernetes Service, Amazon DocumentDB, AWS Lambda, Amazon CloudWatch

Solution overview

In this implementation, we have chosen Open5gs as a sample mobile packet core application. Open5gs is an open source project that provides 4G and 5G mobile packet core network functionality for building a private LTE/5G network under the GNU AGPL 3.0 license. Currently, it supports 3GPP Release 16, providing 5G Core (AMF, SMF+PGW-c, UPF+PGW-u, PCF, UDR, UDM, AUSF, NRF) network functions and Evolved Packet Core (MME, SGW-c, SGW-u, HSS, and PCRF) network functions.

Among the components in Open5gs, only the network function applications in the following table are used for the 4G EPC network demonstration, with the 3GPP logical interfaces shown in the diagram. Note that although the Network Repository Function (NRF) is a 5G-only network function, it is included here because the SMF and UPF, which play the roles of PGW-c and PGW-u in the Open5gs project, require it.

Network Function | Role
MME | Mobility Management Entity
HSS | Home Subscriber Server
PCRF | Policy and Charging Rules Function
SGW-c | Serving Gateway Control Plane
SGW-u | Serving Gateway User Plane
SMF+PGW-c | Session Management Function + PDN Gateway Control Plane
UPF+PGW-u | User Plane Function + PDN Gateway User Plane
NRF | Network Repository Function (only used for NF registration of the 5G functions)
Web-UI | GUI to configure subscribers and their profiles for HSS/PCRF

Workflow diagram outlining the network functions and roles.

If we use container-based network functions on Kubernetes (K8s), we can generally standardize the deployment process of these network functions into the flow of VPC creation → EKS cluster and worker node creation → Helm deployment → CNF configuration, as in the following diagram. This flow can be automated with various automation tools and scenarios.

Diagram illustrating the deployment process of the network functions.

In this example, we use AWS CloudFormation to create an Amazon Virtual Private Cloud (VPC), an Amazon EKS cluster, and two worker node groups (one for the 3GPP control plane, the other for the 3GPP user plane). Importantly, when we deploy these types of open source EPC/5GC on EKS, we have to leverage the Multus CNI plugin, because the network functions mostly use multiple network interfaces to serve the different protocols on separate networks. As guided in AWS GitHub, we can automate this process through an AWS Lambda function and an Amazon CloudWatch Event rule. The bottom line is that two AWS CloudFormation templates create the following resources:

  • Infrastructure creation template
    • EpcVpc: A VPC that will be used for the deployment.
    • PublicSubnet1/2: These subnets host the bastion host, which has public internet access and is used to run kubectl commands. They also host the NAT gateway that provides internet access for the private subnets.
    • PrivateSubnetAz1/2: Subnets for the EKS control-plane in AZ1 and AZ2.
    • MultusSubnet1Az1: The first subnet that Multus will use to create secondary interfaces in the EPC control plane pods.
    • MultusSubnet2Az1: The second subnet that Multus will use to create secondary interfaces in the EPC user plane pods.
    • EksCluster: EKS cluster that will host network functions.
    • DocumentDBCluster: Stores subscriber profiles. Open5gs natively uses MongoDB for HSS and PCRF; in this implementation, Amazon DocumentDB is used because of its MongoDB compatibility.
    • Route53 Private Hosted Zones: For the discovery of service interfaces, such as the S6a, Gx, S11, and S5-c/u IP addresses, Amazon Route 53 serves as one central DNS.
  • EKS worker node group creation template
    • Worker node group for control plane network functions (such as MME, SGW-c, and SMF), attached to the additional control plane subnet.
    • Worker node group for user plane network functions (such as SGW-u and UPF), attached to the additional control plane and user plane subnets.
    • Lambda function for attaching the additional Multus subnets to the worker node group instances.
    • CloudWatch Event Rule that monitors instance scale-up and scale-down events and triggers the Lambda hook to attach the additional Multus networks to worker node groups.

Diagram of the EKS worker node group creation template.

Additionally, two controllers have been developed and introduced to automate further steps.

  • DNS update controller: Because we use Amazon Route 53 to resolve the service IPs assigned to the Multus interfaces, we created a controller that automatically registers each service IP in the respective Route 53 private hosted zone. Each EPC service interface uses a separate DNS private hosted zone, created by the open5gs-infra CFN template.
  • Multus IP update controller: The other controller associates the Multus secondary IPs with the EC2 instance on which the pod is running. It listens for pods with designated annotations, looks up their secondary IPs, and then calls the Amazon EC2 API to associate each IP on the pod's Multus interface with the respective ENI of the host instance. It also disassociates the IP from the host ENI when the pod is deleted. (A minimal sketch of the underlying EC2 call follows this list.)
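For illustration, the following is the shape of the EC2 call such a controller makes, expressed as the equivalent AWS CLI commands; the ENI ID and IP address are hypothetical placeholders.

    # Associate a pod's Multus secondary IP with the host instance's ENI
    # (eni-0abc1234567890def and 10.0.4.25 are hypothetical values)
    aws ec2 assign-private-ip-addresses \
        --network-interface-id eni-0abc1234567890def \
        --private-ip-addresses 10.0.4.25 \
        --allow-reassignment
    # On pod deletion, the reverse operation releases the IP from the host ENI
    aws ec2 unassign-private-ip-addresses \
        --network-interface-id eni-0abc1234567890def \
        --private-ip-addresses 10.0.4.25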

After a successful deployment of Open5gs, the functionality of the 4G core network can be tested with a tester or simulator. In this article, we have used the srsLTE simulator as an example, but it can be chosen according to the user's preference.

Walkthrough

Summary of installation steps:

  1. Run the CloudFormation for infra creation (open5gs-infra.yaml).
  2. Bastion host configuration and K8s ConfigMap update.
  3. DocumentDB initialization.
  4. CoreDNS ConfigMap update to use Route 53 for 3GPP service interfaces.
  5. Run the CloudFormation for Multus worker node group creation (open5gs-worker.yaml).
  6. DNS controller and Multus-IP update controller deployment for the automation.
  7. Run shell script for cluster initialization (setting up namespace, etc.).
  8. Helm installation for all network functions.

Refer to the GitHub repo throughout this tutorial.

Prerequisites

For this walkthrough, you should have the following prerequisites:

  1. An AWS account.
  2. Download the GitHub repo to your local machine to build the images.
  3. Compile the Docker images of Open5gs and the DNS/Multus-IP controllers, and then upload them to your Amazon ECR (illustrative commands follow this list).
    • Container images: Dockerfiles for the application components are in the Dockerfiles sub-folder of the GitHub repo, with one folder per processor architecture (ARM-Architecture and x86-Architecture). The ARM-based files, in particular, can be used with AWS Graviton2 instances, which can deliver the best price performance.
    • Note that the Dockerfiles for the Open5gs components build from the master branch because of a glitch that occurs when Open5gs v2.0.22 is deployed using containers. The commit that was used is 41fd851; alternatively, you can use any version higher than v2.0.22.
  4. Basic understanding of AWS services, such as CloudFormation, VPC, and EKS.
  5. Please be mindful that some services used in this example, such as EKS and DocumentDB, incur service charges.
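As one possible build-and-push sequence for step 3, the following sketch covers a single image; the region, account ID, repository name, and Dockerfile path are placeholders that you should replace with your own values:

    # Authenticate Docker to your ECR registry (region and account ID are placeholders)
    aws ecr get-login-password --region us-west-2 | \
        docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-west-2.amazonaws.com
    # Build, tag, and push an image (repository name and Dockerfile path are hypothetical)
    docker build -t open5gs-aio:latest -f Dockerfiles/x86-Architecture/<component>/Dockerfile .
    docker tag open5gs-aio:latest 123456789012.dkr.ecr.us-west-2.amazonaws.com/open5gs-aio:latest
    docker push 123456789012.dkr.ecr.us-west-2.amazonaws.com/open5gs-aio:latest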

Detailed implementation steps

You can refer to the service documentation topics for basic procedures or more information.

Run the CloudFormation for infra creation (open5gs-infra.yaml)

  1. Log in to the AWS Console and open the CloudFormation service.
  2. Run the infra template (a CLI alternative follows).
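If you prefer the AWS CLI to the console, a stack creation command of the following shape can be used; the stack name shown is illustrative, and IAM capabilities are required because the template creates IAM roles:

    # Create the infrastructure stack (stack name is illustrative)
    aws cloudformation create-stack \
        --stack-name Open5gsInfra \
        --template-body file://open5gs-infra.yaml \
        --capabilities CAPABILITY_NAMED_IAM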

Bastion host configuration and K8s ConfigMap update

  1. Install kubectl as outlined in the user guide.
  2. Install Helm version 3 as well (a sample installation sketch follows this list).
  3. Configure AWS credentials on the instance as outlined in the user guide.
  4. Update the kubeconfig at the bastion host to communicate with the created EKS cluster:
    aws eks update-kubeconfig --name eks-Open5gsInfra
  5. Apply the following ConfigMap update so that the Lambda SAR application can handle the worker node group's automatic joining:
    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: aws-auth
      namespace: kube-system
    data:
      mapRoles: |
        - rolearn: /**ARN of EKSAdminRoleForLambda (can be found in Output of infra stack)**/
          username: ops-user
          groups:
            - system:masters
    EOF
  6. Cloning the Git repo onto this bastion host will help later when executing the Helm installation.
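As one possible way to satisfy steps 1 and 2, the following commands install kubectl and Helm v3 on a Linux bastion host; the kubectl version shown is illustrative and should match your EKS cluster version:

    # Install kubectl (version is illustrative; match it to your cluster)
    curl -LO "https://dl.k8s.io/release/v1.21.0/bin/linux/amd64/kubectl"
    chmod +x kubectl && sudo mv kubectl /usr/local/bin/
    # Install Helm v3 via the official installer script
    curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash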

DocumentDB initialization

  1. Log in to the AWS Console and open the DocumentDB service.
  2. The DocumentDB cluster needs to be initialized before Open5gs can use it. This is done by creating an "open5gs" database in the cluster, which can be done from the bastion host (see the sketch after this list). More information on how to install the Mongo client can be found in the documentation. To create a database, refer to the basics guide.
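A minimal sketch of this initialization from the bastion host follows; the cluster endpoint and credentials are placeholders that you can find in the infra stack outputs:

    # Download the Amazon DocumentDB CA bundle
    wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem
    # Connect with the mongo client (endpoint and credentials are placeholders)
    mongo --ssl --host <docdb-cluster-endpoint>:27017 \
        --sslCAFile rds-combined-ca-bundle.pem \
        --username <user> --password <password>
    # In the mongo shell: switching to a new database and writing to it creates it
    > use open5gs
    > db.subscribers.insertOne({})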

CoreDNS ConfigMap update for 3GPP service interfaces in Route 53

  1. Update the cluster CoreDNS ConfigMap with the Route 53 zones that were created by the CloudFormation template. Below is a sample entry that needs to be added (replace the zone IDs with yours). Note that the CoreDNS pods need to be restarted for the Route 53 configuration to take effect (see the commands after the sample).
    s6a.hss.open5gs.service {
        route53 s6a.hss.open5gs.service.:Z04391521Q1O3218R9ICQ
    }
    s6a.mme.open5gs.service {
        route53 s6a.mme.open5gs.service.:Z021624132KA5KGP80CIJ
    }
    gx.pcrf.open5gs.service {
        route53 gx.pcrf.open5gs.service.:Z001178610405PSM3LXE2
    }
    s11.sgwc.open5gs.service {
        route53 s11.sgwc.open5gs.service.:Z0012154326705JM1A2HA
    }
    sx.sgwu.open5gs.service {
        route53 sx.sgwu.open5gs.service.:Z02156012ZREZXLVAQGCF
    }
    s5.smf.open5gs.service {
        route53 s5.smf.open5gs.service.:Z021666966S8THLJAKYF
    }
    sx.upf.open5gs.service {
        route53 sx.upf.open5gs.service.:Z04729463PFYEQGVMLX4R
    }
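One way to apply the change and restart CoreDNS, assuming the default coredns deployment name in kube-system:

    # Edit the CoreDNS ConfigMap and add the zone entries above
    kubectl -n kube-system edit configmap coredns
    # Restart the CoreDNS pods so the new Route 53 zones take effect
    kubectl -n kube-system rollout restart deployment coredns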

Run the CloudFormation for worker node group creation (open5gs-worker.yaml)

  1. Log in to the AWS Console and open the CloudFormation service.
  2. Run the worker node group template. At this point, you need to specify the stack name you used for infrastructure creation so that it can be properly referenced (a CLI sketch follows this list).
  3. Optional: If the worker node groups don't join the EKS cluster, manually update the aws-auth ConfigMap so that the EKS cluster control plane can register the worker nodes. (This step is usually not required if step 4 of the bastion host configuration was done properly.)
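The equivalent CLI call might look like the following; the stack names are illustrative, and the parameter key that carries the infra stack name is an assumption, so check the template's Parameters section for the actual key:

    # Create the worker node group stack (stack names and parameter key are placeholders)
    aws cloudformation create-stack \
        --stack-name Open5gsWorker \
        --template-body file://open5gs-worker.yaml \
        --parameters ParameterKey=InfraStackName,ParameterValue=Open5gsInfra \
        --capabilities CAPABILITY_NAMED_IAM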

Staging environment

  1. Edit the controllers/deployments/aws-secondary-int-controller-deployment.yaml and controllers/deployments/svc-watcher-route53-deployment.yaml files to point to the ECR repos of your controller images, which you pushed in the prerequisite step (see the excerpt after this list).
  2. Run ./cluster_initializer.sh (this must be done in the root folder of the repo) to install the prerequisite Kubernetes resources, such as the Open5gs namespace, the Multus daemonset, the Multus network attachment definitions, and the service discovery and secondary interface controllers. The service discovery and secondary interface controllers are installed in the kube-system namespace. You must run this script before installing Open5gs via the Helm chart.
  3. Install CloudWatch Container Insights for container monitoring and log collection. For installation details, refer to the setup guide.
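For illustration, the image field you edit in each deployment manifest looks roughly like the excerpt below; the account ID, region, repository name, tag, and container name are all placeholders:

    # controllers/deployments/svc-watcher-route53-deployment.yaml (excerpt; values are placeholders)
    spec:
      containers:
        - name: svc-watcher-route53
          # Replace with your own ECR image URI and tag
          image: 123456789012.dkr.ecr.us-west-2.amazonaws.com/svc-watcher-route53:latest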

Helm deployment

  1. Edit the image repos to point to your ECR images in values.yaml (open5gs.image.repository, open5gs.image.tag, webui.image.repository, webui.image.tag); an illustrative excerpt follows this list.
  2. Install Helm chart with the following command:
    helm -n open5gs install -f values.yaml epc-core ./
  3. Wait for all the pods to reach the Running state; this can take around 5-10 minutes. During this time, some pods will restart more than once, which is expected behavior while the Route 53 records and Multus IPs are being updated.
    >> kubectl get -n open5gs pods
    NAME                                       READY   STATUS    RESTARTS   AGE
    open5gs-hss-deployment-54f7b8b47f-rpvvv    1/1     Running   3          5m23s
    open5gs-mme-deployment-657bdc78fc-hstkb    1/1     Running   3          5m24s
    open5gs-nrf-deployment-b5d6b7755-cc275     1/1     Running   0          5m23s
    open5gs-pcrf-deployment-f9bc9b648-jhktg    1/1     Running   3          5m24s
    open5gs-sgwc-deployment-7f4d4b7f86-xcvw5   1/1     Running   0          5m23s
    open5gs-sgwu-deployment-59656b6b8-z2ht9    1/1     Running   0          5m23s
    open5gs-smf-deployment-74659f5849-jnr7z    1/1     Running   3          5m25s
    open5gs-upf-deployment-5f9f944b49-zcq5l    1/1     Running   0          5m24s
    open5gs-webui-6679c7c99f-vhkw6             1/1     Running   0          5m25s

    >> kubectl logs -n open5gs open5gs-mme-deployment-657bdc78fc-hstkb
    Open5GS daemon v2.1.1-5-gefd1780
    01/07 08:04:52.849: [app] INFO: Configuration: '/open5gs/config-map/mme.yaml' (../lib/app/ogs-init.c:129)
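As referenced in step 1, a values.yaml excerpt of roughly this shape is what you would edit; the repository URIs and tags are placeholders for your own ECR images:

    # values.yaml (excerpt; repository URIs and tags are placeholders)
    open5gs:
      image:
        repository: 123456789012.dkr.ecr.us-west-2.amazonaws.com/open5gs-aio
        tag: v2.1.1
    webui:
      image:
        repository: 123456789012.dkr.ecr.us-west-2.amazonaws.com/open5gs-webui
        tag: v2.1.1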

Screenshot of the map view of resources within the Container Insights within CloudWatch.

Screenshot displaying log records.

Verifying the whole setup

  1. To test the environment, we can use any LTE UE/eNB emulator from open source or AWS Partners.
  2. In the case of srsLTE, we can verify the following result against the EPC core network created on EKS. (Note that the simulator's subscriber profile (IMSI, OPc, K value), MCC, MNC, and APN configuration must match what is configured in the core network on EKS; an illustrative excerpt follows this list.)
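For reference, the subscriber keys live in the [usim] section of the srsLTE UE configuration (ue.conf); all values below are illustrative placeholders that must match the profile created via the Web-UI:

    [usim]
    mode = soft
    algo = milenage
    opc  = <OPc value matching the subscriber profile>
    k    = <K value matching the subscriber profile>
    imsi = 001010000000001
    imei = 353490069873319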

Screenshot of output when testing the setup.

Screenshot of output when testing the setup in the emulator.

Clean up

To avoid incurring future charges, delete all the resources created using the CloudFormation service. Go back to the AWS Console, open the CloudFormation service, and delete the stacks one by one, the worker node stack first and then the infra stack (or use the CLI, as sketched below).
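A CLI sketch of the cleanup, assuming the illustrative stack names used earlier:

    # Delete the worker node stack first, then the infra stack
    aws cloudformation delete-stack --stack-name Open5gsWorker
    aws cloudformation wait stack-delete-complete --stack-name Open5gsWorker
    aws cloudformation delete-stack --stack-name Open5gsInfra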

Conclusion

In this blog post, we've shown the benefit and power of using AWS to implement a mobile packet core network by demonstrating how easily the environment can be set up without any hardware, separate database, or underlying infrastructure preparation. This standardized process can be further automated with AWS Cloud Development Kit (AWS CDK) and AWS CodePipeline, or with third-party tools and an orchestrator through API integration.