Building serverless applications with streaming data: Part 2
Part 1 introduces the Alleycat application that allows bike racers to compete with each other virtually on home exercise bikes. I explain the application’s functionality, show how to deploy it to your AWS account, and provide an architectural review.
This series is about building serverless solutions in streaming data workloads. These are traditionally challenging to build, since data can be streamed from thousands or even millions of devices continuously.
In the example scenario, there are 40,000 users and up to 1,000 competitors may race at any given time. The workload must continuously ingest and buffer this data, then process and analyze the information to provide analytics and leaderboard content for the frontend application.
In this post, I focus on data ingestion. I compare the two different methods used in Alleycat, and discuss other approaches available. This post refers to Amazon Kinesis Data Streams, the AWS SDK, and AWS IoT Core in the solutions.
To set up the example, visit the GitHub repo and follow the instructions in the README.md file. Note that this walkthrough uses services that are not covered by the AWS Free Tier and incur cost.
Using AWS IoT Core to ingest streaming data
AWS IoT Core enables publish-subscribe capabilities for large numbers of client applications. Clients can send data to the backend using the AWS IoT Device SDK, which uses the MQTT standard for IoT messaging. After processing, the backend can publish aggregation and status messages back to the frontend via AWS IoT Core. This service fans out the messages to clients using topics.
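To illustrate the publish-subscribe model, here is a minimal sketch using the AWS IoT Device SDK for JavaScript. The endpoint, certificate paths, client ID, and the ‘alleycat-subscribe’ topic name are placeholders rather than the Alleycat configuration:

```javascript
const awsIot = require('aws-iot-device-sdk')

// Placeholder certificate paths and endpoint - not the Alleycat configuration
const device = awsIot.device({
  keyPath: 'private.pem.key',
  certPath: 'certificate.pem.crt',
  caPath: 'AmazonRootCA1.pem',
  clientId: 'alleycat-example-client',
  host: 'your-iot-endpoint.iot.us-east-1.amazonaws.com'
})

device.on('connect', () => {
  // Subscribe to a topic to receive messages that the backend publishes
  device.subscribe('alleycat-subscribe')
})

device.on('message', (topic, payload) => {
  console.log('Received on', topic, JSON.parse(payload.toString()))
})
```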
When using this approach, note the Quality of Service (QoS) options available. By default, the SDK uses QoS level 0, which means the device does not confirm the message is received. This is intended for workloads that can lose messages occasionally without impacting performance. In Alleycat, if performance metrics are occasionally lost, this is unlikely to impact the overall end user experience.
For workloads requiring higher reliability, use QoS level 1, which causes the SDK to resend the message until an acknowledgement is received. While there is no additional charge for using QoS level 1, the retries generally increase the number of messages sent, which increases the overall cost. You are not charged for the PUBACK acknowledgement message – for more details, see AWS IoT Core pricing.
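Continuing the sketch above, the QoS level is passed as an option on each publish call:

```javascript
// QoS 1: the SDK resends the message until AWS IoT Core returns a PUBACK
device.publish('alleycat-publish', JSON.stringify({ racerId: 0, cadence: 80 }), { qos: 1 }, (err) => {
  if (err) console.error('Publish failed: ', err)
})
```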
Frontend
In this scenario, the Alleycat frontend application is running on a physical exercise bike. The user selects a racer ID and exercise class and chooses Start Race to join the current virtual race for that class.
Every second, the frontend sends a message containing the cadence and resistance metrics and the current second in the race for the local racer. This message is created as a JSON object in the Home.vue component and sent to the ‘alleycat-publish’ topic:
```javascript
const message = {
  uuid: uuidv4(),
  event: this.event,
  deviceTimestamp: Date.now(),
  second: this.currentSecond,
  raceId: RACE_ID,
  name: this.racer.name,
  racerId: this.racer.id,
  classId: this.selectedClassId,
  cadence: this.racer.getCurrentCadence(),
  resistance: this.racer.getCurrentResistance()
}
```
The IoT.vue component contains the logic for this integration and uses the AWS IoT Device SDK to send and receive messages. On startup, the frontend connects to AWS IoT Core and publishes the messages using an MQTT client:
```javascript
bus.$on('publish', (data) => {
  console.log('Publish: ', data)
  mqttClient.publish(topics.publish, JSON.stringify(data))
})
```
The SDK automatically attempts to retry in the event of a network disconnection and exposes an error handler to allow custom logic if other errors occur.
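For example, handlers for these events could look like the following sketch, assuming mqttClient is the device SDK client created in the IoT.vue component:

```javascript
// Emitted when the SDK loses the connection and begins its automatic retry
mqttClient.on('reconnect', () => console.log('Reconnecting to AWS IoT Core'))

// Add custom logic for errors that the automatic retry does not resolve
mqttClient.on('error', (err) => console.error('MQTT error: ', err))
```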
Backend
The resources used in the backend are defined using the AWS Serverless Application Model (AWS SAM) and configured in the core setup templates.
Messages are published to topics in AWS IoT Core, which act as channels of interest. The message broker uses topic names and topic filters to route messages between publishers and subscribers. Incoming messages are routed using rules. Alleycat’s IoT rule routes all incoming messages to a Kinesis stream:
```yaml
IotTopicRule:
  Type: AWS::IoT::TopicRule
  Properties:
    RuleName: 'alleycatIngest'
    TopicRulePayload:
      RuleDisabled: 'false'
      Sql: "SELECT * FROM 'alleycat-publish'"
      Actions:
        - Kinesis:
            StreamName: 'alleycat'
            PartitionKey: "${timestamp()}"
            RoleArn: !GetAtt IoTKinesisRole.Arn

IoTKinesisRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - iot.amazonaws.com
          Action:
            - 'sts:AssumeRole'
    Path: /
    Policies:
      - PolicyName: IoTKinesisPutPolicy
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action: 'kinesis:PutRecord'
              Resource: !GetAtt KinesisStream.Arn
```
Using the AWS::IoT::TopicRule resource, you can optionally define an error action. This allows you to store messages in a durable location, such as an Amazon S3 bucket, if an error occurs. Errors can occur if a rule does not have permission to access a destination or if throttling occurs in a target.
Rules can route matching messages to up to 10 targets. For debugging purposes, you can also enable Amazon CloudWatch Logs, which can help troubleshoot failed message deliveries. The AWS IoT Core message broker allows up to 20,000 publish requests per second – if you need a higher limit for your workload, submit a request to AWS Support.
Using the AWS SDK to ingest streaming data
The Alleycat frontend creates traffic for a single user but there is also a simulator application that can generate messages for up to 1,000 riders. Instead of routing messages using an MQTT client, the simulator uses the AWS SDK to put messages directly into the Kinesis data stream.
The SDK provides a service interface object for Kinesis and two API methods for putting messages into streams: putRecord and putRecords. The first option accepts only a single message but the second enables batching of up to 500 messages per request. This is the preferred option for adding multiple messages, compared with calling putRecord multiple times.
The putRecords API takes parameters as a JSON array of messages:
```javascript
const params = {
  StreamName: 'alley-cat',
  Records: [
    {
      "Data": '{"event":"update","deviceTimestamp":1620824038331,"second":3,"raceId":5402746,"name":"Hayden","racerId":0,"classId":1,"cadence":79.8,"resistance":79}',
      "PartitionKey": "1620824038331"
    },
    {
      "Data": '{"event":"update","deviceTimestamp":1620824038331,"second":3,"raceId":5402746,"name":"Hubert","racerId":1,"classId":1,"cadence":60.4,"resistance":60.6}',
      "PartitionKey": "1620824038331"
    }
  ]
}
```
The SDK automatically base64 encodes the Data attribute, which in this case is the JSON string output from JSON.stringify. In the JavaScript SDK, the putRecords API can return a promise, allowing the code to await the operation:
```javascript
const result = await kinesis.putRecords(params).promise()
```
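Since each putRecords request accepts at most 500 records, producers with larger backlogs must split them into batches. The following is a minimal sketch rather than part of the Alleycat simulator; the sendAll function name and the partition key choice are illustrative:

```javascript
const AWS = require('aws-sdk')
const kinesis = new AWS.Kinesis()

// Send an arbitrary number of messages in batches of up to 500 records
async function sendAll(messages) {
  for (let i = 0; i < messages.length; i += 500) {
    const Records = messages.slice(i, i + 500).map((m) => ({
      Data: JSON.stringify(m),
      PartitionKey: String(m.deviceTimestamp)
    }))
    const result = await kinesis.putRecords({ StreamName: 'alley-cat', Records }).promise()
    // putRecords is not atomic - check FailedRecordCount and retry failed records if needed
    if (result.FailedRecordCount > 0) {
      console.warn(`${result.FailedRecordCount} records failed`)
    }
  }
}
```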
Shards and partition keys
Kinesis data streams consist of one or more shards, which are sequences of data records with a fixed capacity. Each shard supports writes of up to 1,000 records per second, up to a maximum total data write rate of 1 MB per second. The total capacity of a stream is the sum of the capacities of its shards.
When you send messages to a stream, the partitionKey attribute determines which shard each message is routed to. The example application configures a Kinesis data stream with a single shard, so the partitionKey attribute has no effect – all messages are routed to the same shard. However, many production applications have more than one shard and use the partitionKey to assign messages to shards.
The Kinesis service hashes the partitionKey to route each record to a shard: partitionKey values from data producers are hashed with an MD5 function and mapped to individual shards.
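Conceptually, the mapping can be sketched as follows: the MD5 hash of the partition key is a 128-bit integer, and each shard owns a contiguous range of those values. The range shown here is a placeholder; in practice, shard hash key ranges are returned by the DescribeStream API:

```javascript
const crypto = require('crypto')

// 128-bit MD5 hash of the partition key, interpreted as an integer
function hashKey(partitionKey) {
  return BigInt('0x' + crypto.createHash('md5').update(partitionKey).digest('hex'))
}

// Placeholder hash key range - a single-shard stream owns the entire range
const shardRange = { start: 0n, end: 2n ** 128n - 1n }

const key = hashKey('1620824038331')
const routedToShard = key >= shardRange.start && key <= shardRange.end
```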
While you cannot designate a specific shard ID in a message, you can influence shard assignment through your choice of partitionKey (a sketch after this list illustrates each option):
- Random: Using a randomized value results in a random hash, so messages are sent to different shards. This effectively load balances messages across all available shards.
- Time-based: A timestamp value may cause groups of messages to be sent to a single shard if they arrive at the same time, since identical timestamps result in identical hashes.
- Application-specific: If Alleycat used the classID as a partitionKey, racers in each class would always be routed to the same shard. This could be useful for downstream aggregation logic but would limit the message throughput per classID to the capacity of a single shard.
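The following sketch illustrates the three options. The choosePartitionKey helper and publishMetric function are hypothetical, and the example assumes the AWS SDK for JavaScript (v2) used elsewhere in this post:

```javascript
const AWS = require('aws-sdk')
const { randomUUID } = require('crypto')

const kinesis = new AWS.Kinesis()

// Hypothetical helper illustrating the three partition key strategies described above
function choosePartitionKey(strategy, message) {
  if (strategy === 'random') return randomUUID()                        // even spread across shards
  if (strategy === 'time-based') return String(message.deviceTimestamp) // same timestamp -> same shard
  return String(message.classId)                                        // application-specific: one shard per class
}

async function publishMetric(message) {
  await kinesis.putRecord({
    StreamName: 'alleycat',    // assumes the stream created by the example application
    Data: JSON.stringify(message),
    PartitionKey: choosePartitionKey('application-specific', message)
  }).promise()
}
```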
Optimizing capacity in a shard
Each shard can ingest data at a rate of 1 MB per second or 1,000 records per second, whichever limit is reached first. Since the maximum payload size is 1 MB, this could equate to one 1 MB message per second. If a payload is larger, you must divide it into smaller pieces to avoid an error. For 1,000 messages, each payload must be under 1 KB on average to fit within the allowed capacity.
The combination of these two limits can result in different capacity profiles for a shard:
- The data payloads are evenly sized and use the 1 MB per second capacity.
- Data payload sizes vary, so the number of messages that can be packed into 1 MB varies per second.
- There are a large number of very small messages, consuming all 1,000 messages per second. However, the total data capacity used is significantly less than 1 MB.
In the Alleycat application, the average payload size is around 170 bytes. When producing 1,000 messages a second, the workload uses only about 20% of the 1 MB per second limit. Since Kinesis pricing is based on 25 KB PUT payload units, messages that are much smaller than 25 KB are less cost-efficient. Compare these two messaging patterns for the Alleycat application:
- In the default mode, a smaller message is published once per second. This minimizes latency but results in a higher overall messaging cost.
- The client application batches outgoing messages and sends them to Kinesis every 5 seconds. This results in lower cost and better packing of messages, but introduces additional latency (see the sketch after this list).
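As an illustration of the second pattern, a client could buffer outgoing messages and flush them with a single putRecords call on a timer. This sketch is not part of the Alleycat code; the buffer, the 5-second flush interval, and the partition key are assumptions:

```javascript
const AWS = require('aws-sdk')
const kinesis = new AWS.Kinesis()

const buffer = []

// Called once per second with the latest metrics message
function enqueue(message) {
  buffer.push({
    Data: JSON.stringify(message),
    PartitionKey: String(message.deviceTimestamp)
  })
}

// Flush the buffer every 5 seconds as a single batched request
setInterval(async () => {
  if (buffer.length === 0) return
  const Records = buffer.splice(0, 500)   // putRecords accepts up to 500 records per call
  await kinesis.putRecords({ StreamName: 'alleycat', Records }).promise()
}, 5000)
```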
There is a tradeoff between cost and latency when optimizing a shard’s capacity, and the decision depends upon the needs of your workload. If the client buffers messages, this adds latency on the client side. This is acceptable in many workloads that collect metrics for archival or asynchronous reporting purposes. However, for low-latency applications like Alleycat, sending messages as soon as they are available provides a better experience for the application user.
Conclusion
This post focuses on ingesting data into Kinesis Data Streams. I explain the two approaches used by the Alleycat frontend and the simulator application and highlight other approaches that you can use. I show how messages are routed to shards using partition keys. Finally, I explore additional factors to consider when ingesting data, to improve efficiency and reduce cost.
Part 3 will cover using Amazon Kinesis Data Firehose for transforming, aggregating, and loading streaming data into data stores. This is used to provide the historical, second-by-second leaderboard for the frontend application.
For more serverless learning resources, visit Serverless Land.