You can now use sort and z-order compaction to improve Apache Iceberg query performance in Amazon S3 Tables and general purpose S3 buckets.
You typically use Iceberg to manage large-scale analytical datasets in Amazon Simple Storage Service (Amazon S3) with AWS Glue Data Catalog or with S3 Tables. Iceberg tables support use cases such as concurrent streaming and batch ingestion, schema evolution, and time travel. When working with high-ingest or frequently updated datasets, data lakes can accumulate many small files that impact the cost and performance of your queries. You’ve shared that optimizing Iceberg data layout is operationally complex and often requires developing and maintaining custom pipelines. Although the default binpack strategy with managed compaction provides notable performance improvements, introducing sort and z-order compaction options for both S3 and S3 Tables delivers even greater gains for queries filtering across one or more dimensions.
Two new compaction strategies: Sort and z-order
To help organize your data more efficiently, Amazon S3 now supports two new compaction strategies, sort and z-order, in addition to the default binpack compaction. These advanced strategies are available for both fully managed S3 Tables and Iceberg tables in general purpose S3 buckets through AWS Glue Data Catalog optimizations.
Sort compaction organizes files based on a user-defined column order. When your tables have a defined sort order, S3 Tables compaction now uses it to cluster similar values together during the compaction process. This makes query execution more efficient by reducing the number of files scanned. For example, if your table’s sort order is defined on state and zip_code, queries that filter on those columns scan fewer files, improving latency and reducing query engine cost.
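To illustrate why this clustering matters, here is a small, self-contained Python sketch, a toy model with made-up file statistics rather than S3 Tables code, of how a query engine prunes files using per-file min/max column bounds:

```python
# Toy model of min/max-based file pruning, the mechanism that lets
# sorted layouts skip files. All file names and bounds are invented.

def files_to_scan(files, value):
    """Return the files whose [min, max] range could contain `value`."""
    return [name for name, lo, hi in files if lo <= value <= hi]

# Unsorted layout: every file's bounds span most of the key space.
unsorted = [
    ("f1", "Anjali", "Tom"),
    ("f2", "Anjali", "Tom"),
    ("f3", "Anjali", "Tom"),
]

# Sorted layout: the same rows rewritten so the bounds do not overlap.
sorted_layout = [
    ("f1", "Anjali", "Kelly"),
    ("f2", "Liza", "Quinn"),
    ("f3", "Rita", "Tom"),
]

print(len(files_to_scan(unsorted, "Olivia")))       # all 3 files match
print(len(files_to_scan(sorted_layout, "Olivia")))  # only 1 file matches
```

A filter on the sort column touches every file in the unsorted layout, because each file’s bounds cover nearly the whole key range, but only one file in the sorted layout.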
Z-order compaction goes a step further by enabling efficient file pruning across multiple dimensions. It interleaves the binary representations of values from multiple columns into a single scalar that can be sorted, making this strategy particularly useful for spatial or multidimensional queries. For example, if your workloads include queries that simultaneously filter by pickup_location, dropoff_location, and fare_amount, z-order compaction can reduce the total number of files scanned compared to traditional sort-based layouts.
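The interleaving idea itself is easy to sketch. Here is a minimal Python illustration of a two-column Morton (z-order) key; this is a toy model of the general technique, not the actual S3 Tables implementation:

```python
def z_order_key(x, y, bits=8):
    """Interleave the bits of two small non-negative ints into one scalar
    (a Morton code): x fills the even bit positions, y the odd ones."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)        # even positions from x
        key |= ((y >> i) & 1) << (2 * i + 1)    # odd positions from y
    return key

# Points that are close in both dimensions get nearby keys, so sorting
# by the key clusters them into the same files.
points = [(200, 200), (1, 1), (7, 7), (0, 0)]
print(sorted(points, key=lambda p: z_order_key(*p)))
```

Sorting rows by such a key, then cutting the sorted sequence into files, gives each file tight min/max bounds on every interleaved column at once, which is what enables multidimensional pruning.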
S3 Tables use your Iceberg table metadata to determine the current sort order. If a table has a defined sort order, no additional configuration is needed to activate sort compaction; it’s automatically applied during ongoing maintenance. To use z-order, you need to update the table maintenance configuration using the S3 Tables API and set the strategy to z-order. For Iceberg tables in general purpose S3 buckets, you can configure AWS Glue Data Catalog to use sort or z-order compaction during optimization by updating the compaction settings.
Only new data written after enabling sort or z-order is affected. Existing compacted files remain unchanged unless you explicitly rewrite them by increasing the target file size in the table maintenance settings or by rewriting the data using standard Iceberg tools. This behavior is designed to give you control over when and how much data is reorganized, balancing cost and performance.
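For example, building on the put-table-maintenance-configuration call used later in this post, you could encourage a rewrite of existing files by raising the target file size. The targetFileSizeMB settings key below is my assumption; verify the exact settings shape against the S3 Tables API reference before using it.

```shell
# Assumption: targetFileSizeMB is the settings key controlling the
# compaction target file size; check the S3 Tables API reference.
aws s3tables put-table-maintenance-configuration \
  --table-bucket-arn ${S3TABLE_BUCKET_ARN} \
  --namespace testnamespace \
  --name testtable \
  --type icebergCompaction \
  --value "status=enabled,settings={icebergCompaction={strategy=sort,targetFileSizeMB=512}}"
```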
Let’s see it in action
I’ll walk you through a simplified example using Apache Spark and the AWS Command Line Interface (AWS CLI). I have a Spark cluster installed and an S3 table bucket, with a table named testtable in a testnamespace. I temporarily disabled compaction while I added data to the table.
After adding data, I check the file structure of the table.
spark.sql("""
SELECT
substring_index(file_path, '/', -1) as file_name,
record_count,
file_size_in_bytes,
CAST(UNHEX(hex(lower_bounds[2])) AS STRING) as lower_bound_name,
CAST(UNHEX(hex(upper_bounds[2])) AS STRING) as upper_bound_name
FROM ice_catalog.testnamespace.testtable.files
ORDER BY file_name
""").show(20, false)
+--------------------------------------------------------------+------------+------------------+----------------+----------------+
|file_name |record_count|file_size_in_bytes|lower_bound_name|upper_bound_name|
+--------------------------------------------------------------+------------+------------------+----------------+----------------+
|00000-0-66a9c843-5a5c-407f-8da4-4da91c7f6ae2-0-00001.parquet |1 |837 |Quinn |Quinn |
|00000-1-b7fa2021-7f75-4aaf-9a24-9bdbb5dc08c9-0-00001.parquet |1 |824 |Tom |Tom |
|00000-10-00a96923-a8f4-41ba-a683-576490518561-0-00001.parquet |1 |838 |Ilene |Ilene |
|00000-104-2db9509d-245c-44d6-9055-8e97d4e44b01-0-00001.parquet|1000000 |4031668 |Anjali |Tom |
|00000-11-27f76097-28b2-42bc-b746-4359df83d8a1-0-00001.parquet |1 |838 |Henry |Henry |
|00000-114-6ff661ca-ba93-4238-8eab-7c5259c9ca08-0-00001.parquet|1000000 |4031788 |Anjali |Tom |
|00000-12-fd6798c0-9b5b-424f-af70-11775bf2a452-0-00001.parquet |1 |852 |Georgie |Georgie |
|00000-124-76090ac6-ae6b-4f4e-9284-b8a09f849360-0-00001.parquet|1000000 |4031740 |Anjali |Tom |
|00000-13-cb0dd5d0-4e28-47f5-9cc3-b8d2a71f5292-0-00001.parquet |1 |845 |Olivia |Olivia |
|00000-134-bf6ea649-7a0b-4833-8448-60faa5ebfdcd-0-00001.parquet|1000000 |4031718 |Anjali |Tom |
|00000-14-c7a02039-fc93-42e3-87b4-2dd5676d5b09-0-00001.parquet |1 |838 |Sarah |Sarah |
|00000-144-9b6d00c0-d4cf-4835-8286-ebfe2401e47a-0-00001.parquet|1000000 |4031663 |Anjali |Tom |
|00000-15-8138298d-923b-44f7-9bd6-90d9c0e9e4ed-0-00001.parquet |1 |831 |Brad |Brad |
|00000-155-9dea2d4f-fc98-418d-a504-6226eb0a5135-0-00001.parquet|1000000 |4031676 |Anjali |Tom |
|00000-16-ed37cf2d-4306-4036-98de-727c1fe4e0f9-0-00001.parquet |1 |830 |Brad |Brad |
|00000-166-b67929dc-f9c1-4579-b955-0d6ef6c604b2-0-00001.parquet|1000000 |4031729 |Anjali |Tom |
|00000-17-1011820e-ee25-4f7a-bd73-2843fb1c3150-0-00001.parquet |1 |830 |Noah |Noah |
|00000-177-14a9db71-56bb-4325-93b6-737136f5118d-0-00001.parquet|1000000 |4031778 |Anjali |Tom |
|00000-18-89cbb849-876a-441a-9ab0-8535b05cd222-0-00001.parquet |1 |838 |David |David |
|00000-188-6dc3dcca-ddc0-405e-aa0f-7de8637f993b-0-00001.parquet|1000000 |4031727 |Anjali |Tom |
+--------------------------------------------------------------+------------+------------------+----------------+----------------+
only showing top 20 rows
I observe that the table is made of multiple small files and that the upper and lower bounds of the new files overlap: the data is clearly unsorted.
I set the table sort order.
spark.sql("ALTER TABLE ice_catalog.testnamespace.testtable WRITE ORDERED BY name ASC")
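Sort orders can also span multiple columns. For the state and zip_code example from earlier in this post, a hypothetical table with those columns would use a statement like the following (the column names are illustrative; my demo table does not have them):

```sql
-- Hypothetical: assumes the table has state and zip_code columns.
ALTER TABLE ice_catalog.testnamespace.testtable
WRITE ORDERED BY state ASC, zip_code ASC
```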
I enable table compaction (it’s enabled by default; I disabled it at the start of this demo).
aws s3tables put-table-maintenance-configuration --table-bucket-arn ${S3TABLE_BUCKET_ARN} --namespace testnamespace --name testtable --type icebergCompaction --value "status=enabled,settings={icebergCompaction={strategy=sort}}"
Then, I wait for the next compaction job to trigger. These run throughout the day, when there are enough small files. I can check the compaction status with the following command.
aws s3tables get-table-maintenance-job-status --table-bucket-arn ${S3TABLE_BUCKET_ARN} --namespace testnamespace --name testtable
When the compaction is done, I inspect the files that make up my table one more time. I see that the data was compacted to two files, and the upper and lower bounds show that the data was sorted across these two files.
spark.sql("""
SELECT
substring_index(file_path, '/', -1) as file_name,
record_count,
file_size_in_bytes,
CAST(UNHEX(hex(lower_bounds[2])) AS STRING) as lower_bound_name,
CAST(UNHEX(hex(upper_bounds[2])) AS STRING) as upper_bound_name
FROM ice_catalog.testnamespace.testtable.files
ORDER BY file_name
""").show(20, false)
+------------------------------------------------------------+------------+------------------+----------------+----------------+
|file_name |record_count|file_size_in_bytes|lower_bound_name|upper_bound_name|
+------------------------------------------------------------+------------+------------------+----------------+----------------+
|00000-4-51c7a4a8-194b-45c5-a815-a8c0e16e2115-0-00001.parquet|13195713 |50034921 |Anjali |Kelly |
|00001-5-51c7a4a8-194b-45c5-a815-a8c0e16e2115-0-00001.parquet|10804307 |40964156 |Liza |Tom |
+------------------------------------------------------------+------------+------------------+----------------+----------------+
There are fewer files, they have larger sizes, and there is a better clustering across the specified sort column.
To use z-order, I follow the same steps, but I set strategy=z-order in the maintenance configuration.
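Concretely, that is the same command as before with only the strategy value changed:

```shell
aws s3tables put-table-maintenance-configuration \
  --table-bucket-arn ${S3TABLE_BUCKET_ARN} \
  --namespace testnamespace \
  --name testtable \
  --type icebergCompaction \
  --value "status=enabled,settings={icebergCompaction={strategy=z-order}}"
```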
Regional availability
Sort and z-order compaction are now available in all AWS Regions where Amazon S3 Tables are supported, and for general purpose S3 buckets where optimization with AWS Glue Data Catalog is available. There is no additional charge for S3 Tables beyond existing usage and maintenance fees. For Data Catalog optimizations, compute charges apply during compaction.
With these changes, queries that filter on the sort or z-order columns benefit from faster scan times and reduced engine costs. In my experience, I observed performance improvements of threefold or more when switching from binpack to sort or z-order, depending on data layout and query patterns. Let us know what gains you see on your own data.
To learn more, visit the Amazon S3 Tables product page or review the S3 Tables maintenance documentation. You can also start testing the new strategies on your own tables today using the S3 Tables API or AWS Glue optimizations.