AWS Feed
Choosing between AWS services for streaming data workloads
Traditionally, streaming data can be complex to manage due to the large amounts of data arriving from many separate sources. Managing fluctuations in traffic and durably persisting messages as they arrive is a non-trivial task. With a serverless approach, AWS provides a number of services that help manage large volumes of messages, alleviating much of the infrastructure burden.
In this blog post, I compare several AWS services and how to choose between these options in streaming data workloads.
Comparing Amazon Kinesis Data Streams with Amazon SQS queues
While you can use both services to decouple data producers and consumers, each is suited to different types of workload. Amazon SQS is primarily used as a message queue to store messages durably between distributed services. Amazon Kinesis is primarily intended to manage streaming big data.
Kinesis supports ordering of records and the ability for multiple consumers to read messages from the same stream concurrently. It also allows consumers to replay messages retained in the stream (24 hours by default, configurable up to 365 days). Scaling in Kinesis is based upon shards and you must reshard to scale a data stream up or down.
With SQS, consumers pull messages from a queue, and each received message is hidden from other consumers until processed successfully (known as a visibility timeout). Once a message is processed, it’s deleted from the queue. A queue may have multiple consumers, but they all receive separate batches of messages. Standard queues do not provide an ordering guarantee, but scaling in SQS is automatic.
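The visibility-timeout behavior can be sketched as a toy in-memory model. This is an illustration only; the `SimpleQueue` class is invented for this sketch and is not an AWS API:

```javascript
// Minimal in-memory model of SQS visibility-timeout semantics.
// SimpleQueue is a hypothetical name, not part of any AWS SDK.
class SimpleQueue {
  constructor(visibilityTimeoutMs) {
    this.visibilityTimeoutMs = visibilityTimeoutMs;
    this.messages = []; // { body, invisibleUntil }
  }
  send(body) {
    this.messages.push({ body, invisibleUntil: 0 });
  }
  // Receiving a message hides it from other consumers until the timeout expires
  receive(now) {
    const msg = this.messages.find((m) => m.invisibleUntil <= now);
    if (!msg) return null;
    msg.invisibleUntil = now + this.visibilityTimeoutMs;
    return msg;
  }
  // Successful processing must end with an explicit delete
  delete(msg) {
    this.messages = this.messages.filter((m) => m !== msg);
  }
}
```

If a consumer crashes before deleting the message, the visibility timeout lapses and the message becomes available to other consumers again, which is how SQS achieves at-least-once delivery.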
| | Amazon Kinesis Data Streams | Amazon SQS |
| --- | --- | --- |
| Ordering guarantee | Yes, by shard | No for standard queues; FIFO queues support ordering by group ID |
| Scaling | Resharding required to provision throughput | Automatic for standard queues; up to 30,000 messages per second for FIFO queues (more details) |
| Exactly-once delivery | No | No for standard queues; yes for FIFO queues |
| Consumer model | Multiple concurrent | Single consumer |
| Configurable message delay | No | Up to 15 minutes |
| Ability to replay messages | Yes | No |
| Encryption | Yes | Yes |
| Payload maximum | 1 MB per record | 256 KB per message |
| Message retention period | 24 hours (default) to 365 days (additional charges apply) | 1 minute to 14 days; 4 days is the default |
| Pricing model | Per shard hour plus PUT payload unit per million; additional charges for some features | No minimum; $0.24-$0.595 per million messages, depending on Region and queue type |
| AWS Free Tier included | No | Yes, 1 million messages per month – see details |
| Typical use cases | Real-time metrics/reporting; real-time data analytics; log and data feed processing; stream processing | Application integration; asynchronous processing; batching messages/smoothing throughput |
| Integration with Kinesis Data Analytics | Yes | No |
| Integration with Kinesis Data Firehose | Yes | No |
While some functionality of both services is similar, Kinesis is often a better fit for many streaming workloads. Kinesis has a broader range of options for ingesting large amounts of data, such as the Kinesis Producer Library and Kinesis Aggregation Library. You can also use the PutRecords API to send up to 500 records (up to 5 MiB in total) per request.
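Because the PutRecords API caps each request at 500 records and 5 MiB, producers typically split their records into conforming batches before calling it. A minimal sketch of that client-side batching logic is below; the `batchRecords` helper is invented for illustration and is not part of any AWS SDK:

```javascript
// Split records into batches that respect the PutRecords API limits:
// at most 500 records and at most 5 MiB per request.
const MAX_RECORDS_PER_REQUEST = 500;
const MAX_BYTES_PER_REQUEST = 5 * 1024 * 1024;

function batchRecords(records) {
  const batches = [];
  let current = [];
  let currentBytes = 0;

  for (const record of records) {
    // Approximate the on-the-wire size as data plus partition key
    const size =
      Buffer.byteLength(record.Data) + Buffer.byteLength(record.PartitionKey);
    const full =
      current.length === MAX_RECORDS_PER_REQUEST ||
      currentBytes + size > MAX_BYTES_PER_REQUEST;
    if (full && current.length > 0) {
      batches.push(current);
      current = [];
      currentBytes = 0;
    }
    current.push(record);
    currentBytes += size;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}
```

Each resulting batch could then be sent as the `Records` parameter of a single PutRecords call.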
Additionally, Kinesis has powerful integrations not available to SQS. Amazon Kinesis Data Analytics allows you to transform and analyze streaming data with Apache Flink, and you can use streaming SQL to run windowed queries on live data in near-real time. Amazon Kinesis Data Firehose can also act as a consumer for Amazon Kinesis Data Streams, an integration not available to SQS queues.
Choosing between Kinesis Data Streams and Kinesis Data Firehose
Both of these services are part of Kinesis, but they have different capabilities and target use cases. Kinesis Data Firehose is a fully managed service that can ingest gigabytes of data from a variety of producers. When Kinesis Data Streams is the source, it automatically scales up or down to match the volume of data. It can optionally process records as they arrive with AWS Lambda and deliver batches of records to services like Amazon S3 or Amazon Redshift. Here’s how Kinesis Data Firehose compares with Kinesis Data Streams:
| | Kinesis Data Streams | Kinesis Data Firehose |
| --- | --- | --- |
| Scaling | Resharding required | Automatic |
| Supports compression | No | Yes (GZIP, ZIP, and SNAPPY) |
| Latency | ~200 ms per consumer (~70 ms if using enhanced fan-out) | Seconds (depends on buffer size configuration); minimum buffer window is 60 seconds |
| Retention | 1–365 days | None |
| Message replay | Yes | No |
| Quotas | See quotas | See quotas |
| Ingestion capacity | Determined by number of shards (1,000 records or 1 MB/s per shard) | No limit if source is Kinesis Data Streams; otherwise see quota page |
| Producer types | AWS SDK or AWS CLI; Kinesis Producer Library; Kinesis Agent; Amazon CloudWatch; Amazon EventBridge; AWS IoT Core | AWS SDK or AWS CLI; Kinesis Producer Library; Kinesis Agent; Amazon EventBridge |
| Number of consumers | Multiple, sharing 2 MB per second per shard throughput | One per delivery stream |
| Consumers | AWS Lambda; Kinesis Data Analytics; Kinesis Data Firehose; Kinesis Client Library | – |
| Pricing | Hourly charge plus data volume; some features have additional charges – see pricing | Based on data volume, format conversion, and VPC delivery – see pricing |
The needs of your workload determine the choice between the two services. To prepare and load data into a data lake or data store, Kinesis Data Firehose is usually the better choice. If you need low latency delivery of records and the ability to replay data, choose Kinesis Data Streams.
Using Kinesis Data Firehose to prepare and load data
Kinesis Data Firehose buffers data based on two buffer hints. You can configure a time-based buffer from 1-15 minutes and a volume-based buffer from 1-128 MB. Whichever limit is reached first causes the service to flush the buffer. These are called hints because the service can adjust the settings if data delivery falls behind writing to the stream. The service raises the buffer settings dynamically to allow the service to catch up.
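The "whichever limit is reached first" rule can be sketched as a small decision function. This is a simplified model for illustration; the `shouldFlush` helper is invented, though the `SizeInMBs` and `IntervalInSeconds` field names follow the shape of the BufferingHints structure in the Firehose API:

```javascript
// Simplified model of the Firehose buffer-flush rule: flush when either
// the volume hint or the time hint is reached, whichever comes first.
// shouldFlush is a hypothetical helper, not a Firehose API.
function shouldFlush(buffer, hints, nowMs) {
  const sizeMB = buffer.bytes / (1024 * 1024);
  const ageSeconds = (nowMs - buffer.firstRecordAtMs) / 1000;
  return sizeMB >= hints.SizeInMBs || ageSeconds >= hints.IntervalInSeconds;
}
```

For example, with hints of 5 MB and 60 seconds, a burst of traffic flushes on volume well before the window closes, while a trickle of records is flushed every 60 seconds regardless of size.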
This is the flow of data in Kinesis Data Firehose from a data source through to a destination, including optional settings for a delivery stream:
- The service continuously loads data from the data source as it arrives.
- The data transformation Lambda function processes individual records and returns these to the service.
- Transformed records are delivered to the destination once the buffer size or buffer window is reached.
- Any records that could not be delivered to the destination are written to an intermediate S3 bucket.
- Any records that cannot be transformed by the Lambda function are written to an intermediate S3 bucket.
- Optionally, the original, untransformed records are written to an S3 bucket.
Data transformation using a Lambda function
The data transformation process enables you to modify the contents of individual records. Kinesis Data Firehose synchronously invokes the Lambda function with a batch of records. Your custom code modifies the records and then returns an array of transformed records.
The incoming payload provides the data attribute in base64 encoded format. Once the transformation is complete, the returned array must include the following attributes per record:
- recordId: This must match the incoming recordId to enable the service to map the new data to the record.
- result: “Ok”, “Dropped”, or “ProcessingFailed”. Dropped means that your logic has intentionally removed the record whereas ProcessingFailed indicates that an error has occurred.
- data: The transformed data must be base64 encoded.
The returned array must be the same length as the incoming array. The Alleycat example application uses the following code in the data transformation function to add a calculated field to the record:
```javascript
exports.handler = async (event) => {
  const output = event.records.map((record) => {
    // Extract JSON record from base64 data
    const buffer = Buffer.from(record.data, 'base64').toString()
    const jsonRecord = JSON.parse(buffer)

    // Add the calculated field
    jsonRecord.output = ((jsonRecord.cadence + 35) * (jsonRecord.resistance + 65)) / 100

    // Convert back to base64 + add a newline
    const dataBuffer = Buffer.from(JSON.stringify(jsonRecord) + '\n', 'utf8').toString('base64')

    return {
      recordId: record.recordId,
      result: 'Ok',
      data: dataBuffer
    }
  })
  console.log(`Output records: ${output.length}`)
  return { records: output }
}
```
Comparing scaling and throughput with Kinesis Data Streams and Kinesis Data Firehose
Kinesis Data Firehose manages scaling automatically. If the data source is a Kinesis Data Stream, there is no limit to the amount of data the service can ingest. If the data source is a direct put using the PutRecordBatch API, there are soft limits of up to 500,000 records per second, depending upon the Region. See the Kinesis Data Firehose quota page for more information.
Kinesis Data Firehose invokes a Lambda transformation function synchronously and scales up the function as the number of records in the stream grows. When the destination is S3, Amazon Redshift, or the Amazon Elasticsearch Service, Kinesis Data Firehose allows up to five outstanding Lambda invocations per shard. When the destination is Splunk, the quota is 10 outstanding Lambda invocations per shard.
With Kinesis Data Firehose, the buffer hints are the main controls for influencing the rate of data delivery. You can decide between more frequent delivery of small batches of messages or less frequent delivery of larger batches. This can impact the PUT cost when using a destination like S3. However, this service is not intended to provide real-time data delivery due to the transformation and batching processes.
With Kinesis Data Streams, the number of shards in a stream determines the ingestion capacity. Each shard supports ingesting up to 1,000 messages or 1 MB per second of data. Unlike Kinesis Data Firehose, this service does not allow you to transform records before delivery to a consumer.
Data Streams has additional capabilities for increasing throughput and reducing the latency of data delivery. The service invokes Lambda consumers every second with a configurable batch size of messages. If the consumers are falling behind data production in the stream, you can increase the parallelization factor. By default, this is set to 1, meaning that each shard has a single instance of a Lambda function it invokes. You can increase this up to 10 so that multiple instances of the consumer function process additional batches of messages.
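The effect of the parallelization factor on consumer throughput is simple arithmetic: the maximum number of concurrent Lambda invocations is the shard count multiplied by the factor. A back-of-the-envelope helper, invented here for illustration:

```javascript
// Maximum concurrent Lambda consumer invocations for a Kinesis event source:
// one invocation per shard, multiplied by the event source mapping's
// parallelization factor (valid values are 1-10).
// maxConcurrentInvocations is a hypothetical helper, not an AWS API.
function maxConcurrentInvocations(shardCount, parallelizationFactor = 1) {
  if (parallelizationFactor < 1 || parallelizationFactor > 10) {
    throw new RangeError('parallelizationFactor must be between 1 and 10');
  }
  return shardCount * parallelizationFactor;
}
```

For example, a four-shard stream supports up to 4 concurrent consumer invocations by default, and up to 40 with the factor raised to its maximum of 10.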
Data Streams consumers use a pull model over HTTP to fetch batches of records, operating in serial. A stream with five standard consumers averages 200 ms of latency each, taking up to 1 second in total. You can improve the overall latency by using enhanced fan-out (EFO). EFO consumers use a push model over HTTP/2 and are independent of each other.
With EFO, all five consumers in the previous example receive batches of messages in parallel using dedicated throughput. The overall latency averages 70 ms and typically data delivery speed is improved by up to 65%. Note that there is an additional charge for this feature.
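The latency figures above follow from the two delivery models: standard consumers are polled in serial, so per-consumer latencies add up, while EFO consumers receive pushes in parallel over dedicated throughput. A rough model, with helpers invented for illustration:

```javascript
// Rough latency model for the example above. Standard consumers poll in
// serial (~200 ms each), so total delivery time grows with consumer count.
// EFO consumers receive pushed batches in parallel (~70 ms each), so total
// delivery time is independent of consumer count. Both helpers and the
// default latency figures are illustrative approximations.
function standardDeliveryLatencyMs(consumerCount, perConsumerMs = 200) {
  return consumerCount * perConsumerMs; // consumers share throughput, in serial
}

function efoDeliveryLatencyMs(perConsumerMs = 70) {
  return perConsumerMs; // dedicated throughput per consumer, in parallel
}
```

With five consumers, the standard model totals roughly 1,000 ms while the EFO model stays at roughly 70 ms, which is where the improvement quoted above comes from.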
Conclusion
This blog post compares different AWS services for handling streaming data. I compare the features of SQS and Kinesis Data Streams, showing how ordering, ingestion throughput, and multiple consumers often make Kinesis the better choice for streaming workloads.
I compare Data Streams and Kinesis Data Firehose and show how Kinesis Data Firehose is the better option for many data loading operations. I show how the data transformation process works and the overall workflow of a Kinesis Data Firehose stream. Finally, I compare the scaling and throughput options for these two services.
For more serverless learning resources, visit Serverless Land.