Enabling parallel file systems in the cloud with Amazon EC2 (Part I: BeeGFS)
This post was authored by AWS Solutions Architects Ray Zaman, David Desroches, and Ameer Hakme.
In this blog series, you will discover how to build and manage your own Parallel Virtual File System (PVFS) on AWS. In this post you will learn how to deploy the popular open source parallel file system, BeeGFS, using AWS D3en and I3en EC2 instances. We will also provide a CloudFormation template to automate this BeeGFS deployment.
A PVFS is a type of distributed file system that distributes file data across multiple servers and provides concurrent data access to multiple execution tasks of an application. A PVFS focuses on high-performance access to large datasets. It consists of server processes and a client library, which allows the file system to be mounted and used with standard utilities. PVFS on Linux originated in the 1990s, and today several projects are available, including Lustre, GlusterFS, and BeeGFS. Workloads such as shared storage for video transcoding and export, batch processing jobs, high-frequency online transaction processing (OLTP) systems, and scratch storage for high performance computing (HPC) benefit from the high throughput and performance a PVFS provides.
Implementing a PVFS can be complex and expensive. There are many variables to take into account when designing a PVFS cluster, including the number of nodes, node size (CPU, memory), cluster size, storage characteristics (size, performance), and network bandwidth. Because the correct configuration is difficult to estimate, systems procured for on-premises data centers are typically oversized, resulting in additional costs and underutilized resources. In addition, the hardware procurement process is lengthy, and installing and maintaining the hardware adds further overhead.
AWS makes it easy to run and fully manage your parallel file systems by allowing you to choose from a variety of Amazon Elastic Compute Cloud (Amazon EC2) instances. EC2 instances are available on demand and allow you to scale your workload as needed. AWS storage-optimized EC2 instances offer up to 60 TB of NVMe SSD storage or up to 336 TB of local HDD storage per instance. With storage-optimized instances, you can easily deploy a PVFS to support workloads requiring high-performance access to large datasets. You can test and iterate on different instances to find the optimal size for your workloads.
D3en instances leverage 2nd-generation Intel Xeon Scalable processors (Cascade Lake) and provide a sustained all-core frequency of up to 3.1 GHz. These instances provide up to 336 TB of local HDD storage (the highest local storage capacity in EC2), up to 6.2 GiB/s of disk throughput, and up to 75 Gbps of network bandwidth.
I3en instances are powered by 1st or 2nd generation Intel® Xeon® Scalable (Skylake or Cascade Lake) processors with 3.1 GHz sustained all-core turbo performance. These instances provide up to 60 TB of NVMe storage, up to 16 GB/s of sequential disk throughput, and up to 100 Gbps of network bandwidth.
BeeGFS, originally released by ThinkParQ in 2014, is an open-source, software-defined PVFS that runs on Linux. You can scale the size and performance of the BeeGFS file system by configuring the number of servers and disks in the cluster, up to thousands of nodes.
BeeGFS architecture
D3en instances offer HDD storage while I3en instances offer NVMe SSD storage. This diversity allows you to create tiers of storage based on performance requirements. In the example presented in this post, you will use four d3en.8xlarge (32 vCPU, 128 GB memory, 16 x 14 TB HDD, 50 Gbps network) and two i3en.12xlarge (48 vCPU, 384 GB memory, 4 x 7.5 TB NVMe SSD) instances to create two storage tiers. You may choose different sizes and quantities to meet your needs. The i3en instances, with SSD, will be configured as tier 1, and the d3en instances, with HDD, will be configured as tier 2. One disk from each instance will be formatted as ext4 and used for metadata, while the remaining disks will be formatted as XFS and used for storage. You may choose to separate metadata and storage onto different hosts for workloads where these must scale independently. The arrays will be configured as RAID 0, since this provides maximum performance. Software replication or other RAID levels can be employed for higher durability.
You will deploy all instances within a single VPC, in the same Availability Zone and subnet, to minimize latency. Security groups must be configured to allow the following ports (a sample AWS CLI sketch follows the list):
- Management service (beegfs-mgmtd): 8008
- Metadata service (beegfs-meta): 8005
- Storage service (beegfs-storage): 8003
- Client service (beegfs-client): 8004
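As an illustration, the AWS CLI can open these ports on a security group. This is a sketch only: the security group ID and CIDR range are placeholders for your own values, and it opens both TCP and UDP since BeeGFS services use both protocols on these ports.
# sg-0123456789abcdef0 and 10.0.0.0/24 are placeholders - substitute your
# security group ID and your cluster subnet's CIDR range
for PORT in 8008 8005 8003 8004; do
  for PROTO in tcp udp; do
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol $PROTO --port $PORT --cidr 10.0.0.0/24
  done
done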
You will use the Debian Quick Start Amazon Machine Image (AMI) as it supports BeeGFS. You can enable Amazon CloudWatch to capture metrics.
How to deploy the BeeGFS architecture
Follow the steps below to create the PVFS described above. For automated deployment, use the CloudFormation template located at AWS Samples.
1. Use the AWS Management Console or CLI to deploy one d3en.8xlarge instance into the VPC described above (a CLI sketch follows).
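If you prefer the CLI, a run-instances call along the following lines launches the instance. The AMI, key pair, subnet, and security group IDs below are placeholders, not real values:
# Substitute your Debian 10 AMI ID, key pair, subnet, and security group
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type d3en.8xlarge \
  --key-name my-key-pair \
  --subnet-id subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --count 1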
2. Log in to the instance and update the system:
sudo apt update
sudo apt upgrade
3. Install the XFS utilities and load the kernel module:
sudo apt-get -y install xfsprogs
sudo modprobe -v xfs
Format the first disk as ext4, since it will hold metadata; the remaining disks are formatted as XFS for storage. The disks will appear as "nvme???" devices, which on the d3en instances actually represent the HDD drives exposed through the NVMe interface.
4. View a listing of available disks:
sudo lsblk
5. Format hard disks:
sudo mkfs -t ext4 /dev/nvme0n1
sudo mkfs -t xfs /dev/nvme1n1
- Repeat this command for disks nvme2n1 through nvme15n1
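Rather than repeating the command fourteen times, a short shell loop (a sketch, assuming the device naming shown above) formats the remaining disks:
# Format /dev/nvme2n1 through /dev/nvme15n1 as XFS
for i in $(seq 2 15); do
  sudo mkfs -t xfs /dev/nvme${i}n1
done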
6. Create file system mount points:
sudo mkdir /disk00
sudo mkdir /disk01
- Repeat this command for disks disk02 through disk15
7. Mount the filesystems:
sudo mount /dev/nvme0n1 /disk00
sudo mount /dev/nvme1n1 /disk01
- Repeat this command for disks disk02 through disk15
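Steps 6 and 7 can likewise be collapsed into a loop (a sketch, relying on the disk number matching the NVMe device number as shown above). Note that these mounts will not survive a reboot unless you also add corresponding entries to /etc/fstab:
# Create and mount /disk02 through /disk15
for i in $(seq 2 15); do
  d=$(printf '%02d' $i)
  sudo mkdir /disk${d}
  sudo mount /dev/nvme${i}n1 /disk${d}
done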
Repeat steps 1 through 7 on the remaining nodes. Remember to account for the smaller number of disks on the i3en.12xlarge instances, or for whatever other instance sizes you chose.
8. Add the BeeGFS repository to each node:
sudo apt-get -y install gnupg
wget https://www.beegfs.io/release/beegfs_7.2.3/dists/beegfs-deb10.list
sudo cp beegfs-deb10.list /etc/apt/sources.list.d/
sudo wget -q https://www.beegfs.io/release/latest-stable/gpg/DEB-GPG-KEY-beegfs -O- | sudo apt-key add -
sudo apt update
9. Install BeeGFS management (node 1 only):
sudo apt-get -y install beegfs-mgmtd
sudo mkdir /beegfs-mgmt
sudo /opt/beegfs/sbin/beegfs-setup-mgmtd -p /beegfs-mgmt/beegfs/beegfs_mgmtd
10. Install BeeGFS metadata and storage (all nodes):
sudo apt-get -y install beegfs-meta beegfs-storage beegfs-client beegfs-helperd beegfs-utils
# -s is a unique ID based on the node (change this for each node!); -m is the hostname of the management server
sudo /opt/beegfs/sbin/beegfs-setup-meta -p /disk00/beegfs/beegfs_meta -s 1 -m ip-XXX-XXX-XXX-XXX
# Change -s to the node ID and -i to (nodeID)0(disk); -m is the hostname of the management server
sudo /opt/beegfs/sbin/beegfs-setup-storage -p /disk01/beegfs_storage -s 1 -i 101 -m ip-XXX-XXX-XXX-XXX
sudo /opt/beegfs/sbin/beegfs-setup-storage -p /disk02/beegfs_storage -s 1 -i 102 -m ip-XXX-XXX-XXX-XXX
- Repeat this last command for the remaining disks disk03 through disk15
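The remaining targets can also be registered with a loop. This sketch is for node 1; adjust -s and the target ID prefix on each node, and substitute your management server's hostname:
# Register /disk03 through /disk15 as storage targets 103-115 on node 1
for d in $(seq -w 3 15); do
  sudo /opt/beegfs/sbin/beegfs-setup-storage \
    -p /disk${d}/beegfs_storage -s 1 -i 1${d} -m ip-XXX-XXX-XXX-XXX
done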
11. Start the services:
# Only on node 1
sudo systemctl start beegfs-mgmtd
# On all servers
sudo systemctl start beegfs-meta
sudo systemctl start beegfs-storage
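If you want the services to come back automatically after a reboot, you can also enable them in systemd (run the mgmtd line on node 1 only):
# Only on node 1
sudo systemctl enable beegfs-mgmtd
# On all servers
sudo systemctl enable beegfs-meta beegfs-storage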
At this point, your BeeGFS cluster is running and ready for use by a client system. The client system requires the BeeGFS client software in order to mount the cluster.
12. Deploy an m5n.2xlarge instance into the same subnet as the PVFS cluster.
13. Log in to the instance, install, and configure the client:
sudo apt update
sudo apt upgrade
sudo apt-get -y install gnupg
#Need linux sources for client compilation
sudo apt-get -y install linux-source
sudo apt-get -y install linux-headers-4.19.0-14-all
wget https://www.beegfs.io/release/beegfs_7.2.3/dists/beegfs-deb10.list
sudo cp beegfs-deb10.list /etc/apt/sources.list.d/
sudo wget -q https://www.beegfs.io/release/latest-stable/gpg/DEB-GPG-KEY-beegfs -O- | sudo apt-key add -
sudo apt update
sudo apt-get -y install beegfs-client beegfs-helperd beegfs-utils
sudo /opt/beegfs/sbin/beegfs-setup-client -m ip-XXX-XXX-XXX-XXX # use the hostname/IP address of the management node
sudo systemctl start beegfs-helperd
sudo systemctl start beegfs-client
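Before moving on, you can confirm that the client sees the cluster; these are standard beegfs-utils checks:
# List the metadata and storage nodes registered with the management service
sudo beegfs-ctl --listnodes --nodetype=meta
sudo beegfs-ctl --listnodes --nodetype=storage
# The file system is mounted at /mnt/beegfs by default
df -h /mnt/beegfs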
14. Create the storage pools:
sudo beegfs-ctl --addstoragepool --desc="tier1" --targets=501,502,503,601,602,603
sudo beegfs-ctl --addstoragepool --desc="tier2" --targets=101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,201,202,203,204,205,206,207,208,209,210,
211,212,213,214,215,301,302,303,304,305,306,307,308,309,310,311,312,313,314,315,401,402,403,404,405,406,407,
408,409,410,411,412,413,414,415sudo beegfs-ctl --liststoragepools
Pool ID Pool Description                      Targets                      Buddy Groups
======= ================== ============================ ============================
      1            Default
      2              tier1      501,502,503,601,602,603
      3              tier2 101,102,103,104,105,106,107,
                           108,109,110,111,112,113,114,
                           115,201,202,203,204,205,206,
                           207,208,209,210,211,212,213,
                           214,215,301,302,303,304,305,
                           306,307,308,309,310,311,312,
                           313,314,315,401,402,403,404,
                           405,406,407,408,409,410,411,
                           412,413,414,415
15. Create a directory for each tier on the client and map it to the corresponding storage pool (tier1 is pool ID 2, tier2 is pool ID 3; the directories must exist before you set the pattern):
sudo mkdir /mnt/beegfs/tier1 /mnt/beegfs/tier2
sudo beegfs-ctl --setpattern --storagepoolid=2 /mnt/beegfs/tier1
sudo beegfs-ctl --setpattern --storagepoolid=3 /mnt/beegfs/tier2
The BeeGFS PVFS is now ready to be used by the client system.
How to test your new BeeGFS PVFS
BeeGFS provides StorageBench to evaluate the performance of BeeGFS on the storage targets. This benchmark measures the streaming throughput of the underlying file system and devices independent of the network performance. To simulate client I/O, the benchmark generates read/write operations locally on the servers, without any client communication.
You can benchmark specific targets, or all targets together, using the "servers" parameter. A "read" or "write" parameter sets the type of test to perform, and the "threads" parameter should be set to the number of storage devices on each server.
Try the following commands to test performance:
Write test (1x d3en):
sudo beegfs-ctl --storagebench --servers=1 --write --blocksize=512K --size=20G --threads=15
Write test (4x d3en):
sudo beegfs-ctl --storagebench --alltargets --write --blocksize=512K --size=20G --threads=15
Read test (4x d3en):
sudo beegfs-ctl --storagebench --servers=1,2,3,4 --read --blocksize=512K --size=20G --threads=15
Write test (1x i3en):
sudo beegfs-ctl --storagebench --servers=5 --write --blocksize=512K --size=20G --threads=3
Read test (2x i3en):
sudo beegfs-ctl --storagebench --servers=5,6 --read --blocksize=512K --size=20G --threads=3
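StorageBench runs asynchronously, so you will want to poll for results and clean up the generated test files when you are done:
# Check benchmark status and results for all targets
sudo beegfs-ctl --storagebench --alltargets --status
# Remove the benchmark files once finished
sudo beegfs-ctl --storagebench --alltargets --cleanup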
StorageBench is a great way to gauge the potential performance of a given environment by removing variables like network throughput and latency, but you may also want to test in a more real-world fashion. For this, tools like fio can generate mixed read/write workloads against files on the client's BeeGFS mount point.
First, define which directory maps to which storage pool (tier) by setting a pattern:
sudo beegfs-ctl --setpattern --storagepoolid=2 /mnt/beegfs/tier1
sudo beegfs-ctl --setpattern --storagepoolid=3 /mnt/beegfs/tier2
You can see how a file gets striped across the various disks in a pool by adding a file and running the command:
sudo beegfs-ctl --getentryinfo /mnt/beegfs/tier1/myfile.bin
Install fio:
sudo apt-get install -y fio
Now you can run a fio test against one of the tiers. This example command runs eight jobs performing a 75/25 random read/write workload against a 10 GB file:
sudo fio --numjobs=8 --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/mnt/beegfs/tier1/test --bs=512k --iodepth=64 --size=10G --readwrite=randrw --rwmixread=75
Cleaning up
To avoid ongoing charges for resources you created, you should:
- Terminate all seven EC2 instances.
- Delete any security groups you created.
- Delete any subnets you created.
- Delete the VPC you created.
Conclusion
In this blog post, we demonstrated how to build and manage your own BeeGFS Parallel Virtual File System on AWS. In this example, you created two storage tiers using i3en and d3en instances: the i3en instances served as the SSD-backed first tier and the d3en instances as the HDD-backed second tier. By using two different tiers, you can optimize performance to meet your application requirements.
Amazon EC2 storage-optimized instances make it easy to deploy the BeeGFS Parallel Virtual File System. Using combinations of SSD and HDD storage available on the I3en and D3en instance types, you can achieve the capacity and performance needed to run the most demanding workloads. Read more about the D3en and I3en instances.