NVIDIA’s announcements at GTC 2025 — including the launch of the Blackwell platform and new AI inference tools — mark a major inflection point for digital infrastructure. For hyperscalers, colocation providers, and large-scale public and private sector organisations, the message is clear: AI workloads are outpacing the capacity of traditional data centres.
With Blackwell racks demanding up to 140kW each, and new infrastructure ideally designed for 150kW, liquid cooling and high-density power delivery are now essential. Power, thermal design, and deployment agility are no longer optional; they are mission-critical. The organisations that move fastest will be best placed to lead in the AI era.
More Power, More Cooling
NVIDIA’s new Blackwell-based racks represent a major leap in power demand. A standard DGX GB200 NVL72 rack — housing 72 GPUs (36 Grace-Blackwell Superchips) — draws 120–140kW per rack, depending on workload intensity and cooling efficiency. That’s 3–6x more power than previous AI racks (A100/H100 at ~25–40kW) and up to 10x more than traditional CPU/GPU racks (5–15kW).
To future-proof for peak-density workloads, infrastructure should be designed to support up to 150kW per rack.
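To make the jump concrete, here is a back-of-envelope sketch of what those densities mean for a facility power budget. All figures are the estimates quoted above, not vendor specifications:

```python
# Back-of-envelope rack-density arithmetic using the figures quoted above.
# All numbers are this article's published estimates, not vendor specifications.

DESIGN_KW_PER_RACK = 150      # recommended design headroom per rack
BLACKWELL_KW = (120, 140)     # DGX GB200 NVL72 draw range
LEGACY_AI_KW = (25, 40)       # A100/H100-era AI racks
TRADITIONAL_KW = 15           # upper end of conventional CPU/GPU racks

# Density multiple vs previous-generation AI racks: roughly 3x to 6x
low = BLACKWELL_KW[0] / LEGACY_AI_KW[1]   # 120 / 40 = 3.0
high = BLACKWELL_KW[1] / LEGACY_AI_KW[0]  # 140 / 25 = 5.6
print(f"vs A100/H100 racks: {low:.1f}x to {high:.1f}x")

# The same 1 MW data hall holds far fewer racks at AI densities.
HALL_KW = 1_000
print(f"1 MW hall: {HALL_KW // DESIGN_KW_PER_RACK} racks at {DESIGN_KW_PER_RACK} kW "
      f"vs {HALL_KW // TRADITIONAL_KW} racks at {TRADITIONAL_KW} kW")
```

A one-megawatt hall that once held around 66 conventional racks supports only six at the 150kW design point, which is why power and cooling, not floor space, now set the limits.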
Faster AI, Lower Cost per Model
With the introduction of NVIDIA Inference Microservices (NIMs), deploying AI models is now significantly faster and more cost-effective. NIMs are pre-packaged, containerised microservices that deliver high-performance inference on leading foundation models via standard APIs. However, unlocking their full potential requires infrastructure optimised for high-density compute, low latency, and scalable GPU performance.
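As an illustration of why NIMs shorten deployment, here is a minimal sketch of querying a running NIM container over its OpenAI-compatible API. The port, model name, and prompt are assumptions for the example and depend on which NIM you deploy:

```python
# A minimal sketch of calling a NIM inference endpoint.
# Assumes a NIM container is already running locally on port 8000 and
# exposing its standard OpenAI-compatible chat completions API; the
# model name below is illustrative.
import requests

response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "meta/llama-3.1-8b-instruct",  # whichever model the NIM serves
        "messages": [
            {"role": "user", "content": "Summarise GTC 2025 in one sentence."}
        ],
        "max_tokens": 128,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Because the interface is a standard API rather than a bespoke serving stack, the same client code works across models; the infrastructure underneath still determines throughput, latency, and cost per inference.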
Liquid Cooling Becomes Essential
The heat generated by Blackwell-class GPUs makes liquid cooling a necessity. Cooling solutions such as direct-to-chip and rear-door heat exchangers are now the gold standard for sustainable, high-performance AI infrastructure.
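To see why liquid is the practical medium at these densities, consider a rough heat-balance sketch. The water coolant and the 10K temperature rise across the rack are illustrative assumptions, not a loop design:

```python
# Back-of-envelope sketch: coolant flow needed to carry away the heat of a
# Blackwell-class rack. Values are illustrative assumptions, not a design.

rack_heat_w = 140_000          # 140 kW rack, per the figures above
delta_t = 10                   # coolant temperature rise across the rack, K

# Water: Q = m_dot * cp * dT  ->  m_dot = Q / (cp * dT)
cp_water = 4186                # specific heat of water, J/(kg*K)
water_kg_s = rack_heat_w / (cp_water * delta_t)
print(f"water: ~{water_kg_s:.1f} kg/s (~{water_kg_s * 60:.0f} L/min per rack)")

# Air at the same 10 K rise, for comparison:
cp_air, rho_air = 1005, 1.2    # J/(kg*K) and kg/m^3
air_kg_s = rack_heat_w / (cp_air * delta_t)
print(f"air:   ~{air_kg_s / rho_air:.1f} m^3/s per rack")
```

Roughly 200 litres per minute of water is well within reach of a direct-to-chip loop, whereas moving the same heat with air would mean ducting more than 11 cubic metres of airflow per second through a single rack. That gap is what makes liquid cooling a necessity rather than an option.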
Hyperscale Cloud Providers
Next-generation AI services are arriving quickly — but to deploy them effectively, hyperscalers must prioritise infrastructure that supports high-density workloads. Efficiency, scalability, and sustainability will directly impact service availability and economics.
Colocation Providers
Colocation operators must evolve fast. Facilities that offer AI-ready environments — including higher rack densities, liquid cooling capabilities, and flexible power configurations — will become the preferred choice for AI deployments. Commercial flexibility and sustainability credentials will further differentiate leaders.
Enterprise and Public Sector Organisations
Organisations looking to scale AI must assess whether their on-premises environments or current data centre providers can support the performance, density, and sustainability demands of modern AI workloads. The question is no longer if — but how fast your infrastructure can scale.
The demand for AI-ready data centres is surging. As workloads continue to grow in complexity and consumption, the gap between AI-capable and AI-constrained infrastructure will only widen.
Those who delay investment risk becoming capacity-constrained, cost-exposed, and competitively disadvantaged.
NVIDIA’s GTC 2025 wasn’t just a technology showcase; it was a wake-up call. The AI age is here, and it’s redefining the limits of infrastructure performance and sustainability.
If you're reassessing your power, cooling, or deployment strategy — or looking to align your infrastructure roadmap with the AI leaders — NEXTDC is here to help.
Contact the NEXTDC team to unpack the latest from NVIDIA GTC 2025 and explore how our high-density, liquid-cooled data centre solutions can power your AI ambitions.
As an NVIDIA DGX-Certified Provider, we deliver high-density, liquid-cooled, AI-ready environments that help you move faster, scale smarter, and stay ahead.
Why NEXTDC? Because your AI future deserves a foundation built for what’s next.