Wow, KubeCon + CloudNativeCon Europe 2025 in London was a fantastic time! Our Azure Storage team was thrilled to exchange insights and success stories at this vibrant community event. If you missed it, don’t worry! We are here to share what we showed off at the con: how we are enhancing performance, cost-efficiency, and AI capabilities for your workloads on Azure.

Optimize your open-source databases with Azure Disks

Open-source databases such as PostgreSQL, MariaDB, and MySQL are among the most commonly deployed stateful workloads on Kubernetes. For scenarios that demand extremely low latency and high input/output operations per second (IOPS), such as running these databases for transactional workloads, Azure Container Storage enables you to tap into local ephemeral non-volatile memory express (NVMe) drives within your node pool. This provides sub-millisecond latency and up to half a million IOPS, making it a strong fit for performance-critical use cases. In our upcoming v1.3.0 update, we have made significant optimizations specifically for databases.
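As a rough sketch, provisioning a local NVMe-backed storage pool with Azure Container Storage looks like the following (the pool name and namespace here are illustrative; verify the schema against the Azure Container Storage documentation for your installed version):

```yaml
# Illustrative StoragePool for Azure Container Storage backed by
# the node pool's local ephemeral NVMe drives.
apiVersion: containerstorage.azure.com/v1
kind: StoragePool
metadata:
  name: ephemeraldisk-nvme   # example name
  namespace: acstor          # namespace used by Azure Container Storage
spec:
  poolType:
    ephemeralDisk:
      diskType: nvme         # use local NVMe rather than temp SSD
```

Persistent volume claims can then reference the storage class generated for the pool (typically named after it, for example acstor-ephemeraldisk-nvme) to place database data files on local NVMe.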

Compared to the previous v1.2.0 version, you can expect up to a 5 times increase in transactions per second (TPS) for PostgreSQL and MySQL deployments. If you’re looking for the best balance of durability, performance, and cost for storage, Premium SSD v2 disks remain our recommended default for database workloads. Premium SSD v2 offers a flexible pricing model that charges per gigabyte and includes a generous baseline of IOPS and throughput at no additional cost. You can dynamically scale IOPS and throughput as needed, allowing you to fine-tune performance while optimizing for price-efficiency.
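To illustrate, a Premium SSD v2 persistent volume on AKS can be requested through a storage class like the one below. The IOPS and throughput figures are arbitrary examples, and the parameter names follow the Azure Disk CSI driver; double-check them against the driver's documentation for your version:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: premium-ssd-v2        # example name
provisioner: disk.csi.azure.com
parameters:
  skuName: PremiumV2_LRS      # Premium SSD v2
  DiskIOPSReadWrite: "4000"   # provisioned IOPS (adjustable later)
  DiskMBpsReadWrite: "250"    # provisioned throughput in MB/s
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```

Because IOPS and throughput are provisioned independently of capacity, you can raise or lower them on an existing disk as the database's load profile changes.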

At KubeCon, we demonstrated how developers can readily take advantage of local NVMe and Premium SSD v2 disks to build highly available, performant PostgreSQL deployments. If you want to follow along yourself, check out the newly republished PostgreSQL on AKS documentation below!
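As one hedged example of what such a deployment can look like, the CloudNativePG operator (a common way to run highly available PostgreSQL on Kubernetes) lets you pin the cluster's volumes to a specific storage class; the cluster and storage class names below are assumptions for illustration:

```yaml
# Minimal CloudNativePG cluster sketch; names are hypothetical.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-ha                      # example cluster name
spec:
  instances: 3                     # one primary plus two replicas for HA
  storage:
    size: 100Gi
    storageClass: premium-ssd-v2   # hypothetical storage class name
```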

Accelerate your AI workflows with Azure Blob Storage

Building an AI workflow demands scalable storage to host massive amounts of data, whether it’s raw sensor logs, high-resolution images, or multi-terabyte model checkpoints. Azure Blob Storage and BlobFuse2 with the Container Storage Interface (CSI) driver provide a seamless way to store and retrieve this data at scale. With BlobFuse2, you can mount blob storage as a persistent volume and work with it like a local file system. With the latest version of BlobFuse2, 2.4.1, you can:

  • Speed up model training and inference: BlobFuse2’s enhanced streaming support reduces latency for both initial and repeated reads. Using BlobFuse2 to load large datasets or fine-tuned model weights directly from blob storage onto local NVMe drives on GPU SKUs keeps your AI workflow fed with data efficiently.
  • Simplify data preprocessing: AI workflows often require frequent transformations—such as normalizing images or tokenizing text. By using BlobFuse2’s file-based access, data scientists can preprocess and store results directly in blob storage, keeping pipelines efficient.
  • Ensure data integrity at scale: When handling petabytes of streaming data, integrity checks matter. BlobFuse2 now includes improved CRC64 validation for data stored on local disk, ensuring reliable reads and writes, even when working with distributed AI clusters.
  • Parallel access of massive datasets: BlobFuse2 now performs parallel downloads and uploads, significantly decreasing the time required to access large datasets stored in blobs. This speeds up data processing and helps keep GPU resources fully utilized during training.
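To make the mounting step concrete, here is a sketch of a storage class that surfaces a blob container through BlobFuse2 via the Blob CSI driver (the mount options are illustrative; consult the Blob CSI driver documentation for the options supported by your driver version):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: blobfuse2-datasets     # example name
provisioner: blob.csi.azure.com
parameters:
  skuName: Standard_LRS
  protocol: fuse2              # mount with BlobFuse2
mountOptions:
  - -o allow_other             # let non-root containers read the mount
reclaimPolicy: Retain          # keep training data if the PV is released
```

Pods that claim a volume from this class see the blob container as a directory tree, so training and preprocessing code can read and write it with ordinary file APIs.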

Scale your stateful workloads with Azure Files

Continuous Integration and Continuous Delivery/Deployment (CI/CD) pipelines, a popular class of stateful workloads, need shared persistent volumes to host repository artifacts, and Azure Premium Files is the storage of choice for them on Azure. These artifacts are stored as many small files, which incurs heavy metadata operations on a file share. To speed up CI/CD workflows, the Azure Files team recently announced the general availability of metadata caching for premium SMB file shares. This new capability reduces metadata latency by up to 50 percent, benefiting metadata-intensive workloads that typically host many small files in a single share. At KubeCon, we showcased how metadata caching can accelerate repetitive build processes on GitHub; refer to the repo and try it out yourself.
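For reference, a premium SMB file share for build artifacts can be requested with a storage class along these lines. The mount options are illustrative tuning for small-file workloads, and metadata caching itself is a service-side feature that requires no manifest changes:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: premium-smb-artifacts  # example name
provisioner: file.csi.azure.com
parameters:
  skuName: Premium_LRS         # premium (SSD-backed) file share
  protocol: smb
mountOptions:
  - mfsymlinks                 # emulate symlinks, common in build trees
  - actimeo=30                 # cache attributes client-side for 30s
allowVolumeExpansion: true
```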

For less performance-demanding stateful workloads, Standard Files with the new Provisioned v2 billing model offers better cost predictability and control for shared persistent volumes. The Provisioned v2 model shifts from usage-based to provisioned billing, allowing you to specify the storage, IOPS, and throughput you need at greater scale: a file share can now grow from 32 GiB up to 256 TiB, 50,000 IOPS, and 5 GiB/sec of throughput as your applications demand.
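A minimal sketch of requesting a standard file share on the Provisioned v2 model, assuming the StandardV2_LRS SKU name exposed by the Azure Files CSI driver (verify the parameter against the driver documentation for your version):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-files-v2    # example name
provisioner: file.csi.azure.com
parameters:
  skuName: StandardV2_LRS    # Provisioned v2 billing model
allowVolumeExpansion: true   # grow the share as the workload demands
```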

KubeCon + CloudNativeCon was a wonderful opportunity to directly interact with developers and learn from our customers. As always, thanks to our customers and partners for contributing to the event’s value and significance, and we look forward to seeing you again in November for KubeCon North America!

The post Learn more about what’s new with Microsoft Azure Storage at KubeCon Europe 2025 appeared first on Microsoft Azure Blog.