CES 2026 showcases the arrival of the NVIDIA Rubin platform, and Azure’s proven readiness to deploy it. Microsoft’s long-range datacenter strategy was engineered for moments exactly like this: NVIDIA’s next-generation systems slot directly into infrastructure that anticipated their power, thermal, memory, and networking requirements years ahead of the industry. Our long-term collaboration with NVIDIA ensures Rubin fits directly into Azure’s forward platform design.

Building with purpose for the future

Azure’s AI datacenters are engineered for the future of accelerated computing, enabling seamless integration of NVIDIA Vera Rubin NVL72 racks across Azure’s largest next-generation AI superfactories, from the current Fairwater sites in Wisconsin and Atlanta to future locations.

The newest NVIDIA AI infrastructure requires significant upgrades in power, cooling, and performance optimization. Azure’s experience with the Fairwater sites, and with multiple upgrade cycles over the years, demonstrates an ability to flexibly enhance and expand AI infrastructure in step with each advance in technology.

Azure’s proven experience delivering scale and performance

Microsoft has years of market-proven experience designing and deploying scalable AI infrastructure that evolves with every major advance in AI technology. In lockstep with each successive generation of NVIDIA’s accelerated compute platforms, Microsoft rapidly integrates NVIDIA’s innovations and delivers them at scale. Our early, large-scale deployments of NVIDIA Ampere and Hopper GPUs, connected via NVIDIA Quantum-2 InfiniBand networking, were instrumental in bringing models like GPT-3.5 to life, while other clusters set supercomputing performance records. These deployments demonstrate that we can bring next-generation systems online faster, and with higher real-world performance, than the rest of the industry.

We unveiled the first and largest implementations of both the NVIDIA GB200 NVL72 and NVIDIA GB300 NVL72 platforms, architecting racks into single supercomputers that train AI models dramatically faster and helping Azure remain a top choice for customers seeking advanced AI capabilities.

Azure’s systems approach

Azure is engineered so that compute, networking, storage, software, and infrastructure work together as one integrated platform. This is how Microsoft builds a durable advantage into Azure and delivers cost and performance breakthroughs that compound over time.

Maximizing GPU utilization requires optimization across every layer. Beyond early adoption of NVIDIA’s new accelerated compute platforms, Azure’s advantages come from the surrounding platform: high-throughput Blob storage, proximity placement and region-scale design shaped by real production patterns, and orchestration layers like CycleCloud and AKS tuned for low-overhead scheduling at massive cluster scale.
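
To make "low-overhead scheduling at massive cluster scale" concrete, here is a minimal, hypothetical sketch of topology-aware gang scheduling: a job is placed only once whole NVLink-connected racks are free, so its collective operations never cross a slower network domain. The `Rack` type and `gang_schedule` function are illustrative assumptions, not Azure or CycleCloud APIs.

```python
from dataclasses import dataclass

@dataclass
class Rack:
    """One NVL72-style scale-up domain: all GPUs here share an NVLink fabric."""
    name: str
    total_gpus: int = 72
    free_gpus: int = 72

def gang_schedule(racks: list[Rack], gpus_needed: int) -> list[Rack] | None:
    """Place a job on whole racks only, so its collectives stay on NVLink.

    Returns the racks reserved for the job, or None if it must keep waiting.
    """
    rack_size = 72
    racks_needed = -(-gpus_needed // rack_size)  # ceiling division
    idle = [r for r in racks if r.free_gpus == r.total_gpus]
    if len(idle) < racks_needed:
        return None  # not enough contiguous capacity; the job stays queued
    placement = idle[:racks_needed]
    for rack in placement:
        rack.free_gpus = 0  # reserve the entire scale-up domain
    return placement

# Example: a 144-GPU job needs two whole racks; rack-c is partially busy.
racks = [Rack("rack-a"), Rack("rack-b"), Rack("rack-c", free_gpus=40)]
placed = gang_schedule(racks, 144)
print([r.name for r in placed])  # ['rack-a', 'rack-b']
```

Production orchestrators layer preemption, fault domains, and richer topology hints on top of this basic whole-domain placement idea; the trade it expresses, accepting some queueing delay in exchange for predictable step times, is the same.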

Azure Boost and other offload engines clear I/O, network, and storage bottlenecks so models scale smoothly. Faster storage feeds larger clusters, stronger networking sustains them, and optimized orchestration keeps end-to-end performance steady. First-party innovations reinforce the loop: liquid-cooling Heat Exchanger Units maintain tight thermals, Azure hardware security module (HSM) silicon offloads security work, and Azure Cobalt delivers exceptional performance and efficiency for general-purpose compute and AI-adjacent tasks. Together, these integrations ensure the entire system scales efficiently, so GPU investments deliver maximum value.
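
The "faster storage feeds larger clusters" loop reduces to a simple capacity check: aggregate data demand from the GPUs must stay below aggregate storage throughput, or GPUs stall on input. The sketch below illustrates that check with placeholder numbers; none of the figures are Azure specifications.

```python
def cluster_is_fed(num_gpus: int,
                   samples_per_gpu_per_s: float,
                   bytes_per_sample: float,
                   storage_gb_per_s: float) -> bool:
    """True if aggregate storage throughput can keep every GPU supplied with data."""
    demand_gb_per_s = num_gpus * samples_per_gpu_per_s * bytes_per_sample / 1e9
    return demand_gb_per_s <= storage_gb_per_s

# Placeholder numbers: 4,608 GPUs each consuming 50 samples/s of 2 MB each
# demand roughly 461 GB/s of input data.
print(cluster_is_fed(4608, 50, 2e6, 400))  # False: GPUs stall on input
print(cluster_is_fed(4608, 50, 2e6, 600))  # True: the cluster stays fed
```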

This systems approach is what makes Azure ready for the Rubin platform. We are not simply standing up new systems; we are delivering an end-to-end platform already shaped by the requirements Rubin brings.

Operating the NVIDIA Rubin platform

NVIDIA Vera Rubin Superchips will deliver 50 petaflops (PF) of NVFP4 inference performance per chip and 3.6 exaflops (EF) of NVFP4 per rack, a fivefold jump over NVIDIA GB200 NVL72 rack systems.
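
As a quick sanity check on these figures, assuming the NVL72 designation still denotes 72 GPU packages per rack:

```latex
% Rack-level NVFP4 throughput implied by the per-chip figure
50~\text{PF/chip} \times 72~\text{chips} = 3{,}600~\text{PF} = 3.6~\text{EF}
```

The fivefold comparison implies a GB200 NVL72 baseline of roughly 0.72 EF of NVFP4, which appears to correspond to the dense (non-sparse) FP4 figure for that rack rather than the sparsity-boosted one.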

Azure has already incorporated the core architectural assumptions Rubin requires:

  • NVIDIA NVLink evolution: The sixth-generation NVIDIA NVLink fabric expected in Vera Rubin NVL72 systems reaches roughly 260 TB/s of scale-up bandwidth (see the bandwidth sketch after this list), and Azure’s rack architecture has already been redesigned to operate with those bandwidth and topology advantages.
  • High-performance scale-out networking: The Rubin AI infrastructure relies on ultra-fast NVIDIA ConnectX-9 1,600 Gb/s networking, delivered by Azure’s network infrastructure, which has been purpose-built to support large-scale AI workloads.
  • HBM4/HBM4e thermal and density planning: The Rubin memory stack demands tighter thermal windows and higher rack densities; Azure’s cooling, power envelopes, and rack geometries have already been upgraded to handle those constraints.
  • SOCAMM2-driven memory expansion: Rubin Superchips use a new memory expansion architecture; Azure’s platform has already integrated and validated similar memory-extension behaviors to keep models fed at scale.
  • Reticle-sized GPU scaling and multi-die packaging: Rubin moves to massively larger GPU footprints and multi-die layouts. Azure’s supply chain, mechanical design, and orchestration layers have been pre-tuned for these physical and logical scaling characteristics.
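
A sketch of where the scale-up and scale-out figures above come from, under the assumption that sixth-generation NVLink doubles NVLink 5’s 1.8 TB/s of per-GPU bandwidth:

```latex
% Scale-up: aggregate NVLink bandwidth across an NVL72 rack
72~\text{GPUs} \times 3.6~\text{TB/s per GPU} \approx 260~\text{TB/s}

% Scale-out: ConnectX-9 per-endpoint throughput expressed in bytes
1{,}600~\text{Gb/s} \div 8~\text{bits/byte} = 200~\text{GB/s}
```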

Azure’s approach to designing for next-generation accelerated compute platforms like Rubin has been proven over several years, with significant milestones:

  • Operated the world’s largest commercial InfiniBand deployments across multiple GPU generations.
  • Built reliability layers and congestion-management techniques that unlock higher cluster utilization and larger job sizes than competitors, reflected in our ability to publish industry-leading large-scale benchmarks, such as multi-rack MLPerf runs that competitors have not replicated.
  • Co-designed AI datacenters with Grace Blackwell and Vera Rubin from the ground up to maximize performance and performance per dollar at the cluster level.

Design principles that differentiate Azure

  • Pod exchange architecture: To enable fast servicing, Azure’s GPU server trays are designed to be quickly swappable without requiring extensive rewiring, improving uptime.
  • Cooling abstraction layer: Rubin’s multi-die, high-bandwidth components require substantial thermal headroom that Fairwater already accommodates, avoiding expensive retrofit cycles.
  • Next-gen power design: Vera Rubin NVL72 racks demand increased watt density; Azure’s multi-year power redesign (liquid-cooling loop revisions, CDU scaling, and high-amperage busways) ensures immediate deployability.
  • AI superfactory modularity: Unlike other hyperscalers, Microsoft builds regional supercomputers rather than singular megasites, enabling a more predictable global rollout of new SKUs.

How co-design leads to user benefits

The NVIDIA Rubin platform marks a major step forward in accelerated computing, and Azure’s AI datacenters and superfactories are already engineered to take full advantage. Years of co-design with NVIDIA across interconnects, memory systems, thermals, packaging, and rack-scale architecture mean Rubin integrates directly into Azure’s platform without rework. Rubin’s core assumptions are already reflected in our networking, power, cooling, orchestration, and pod-exchange design principles. This alignment gives customers immediate benefits: faster deployment, faster scaling, and faster impact as they build the next era of large-scale AI.
