Power. Cooling. AI Readiness: Why NVIDIA GTC 2025 Signals It’s Time to Rethink Your Data Centre Strategy

Mar 30, 2025


Executive Summary

NVIDIA’s announcements at GTC 2025 — including the launch of the Blackwell platform and new AI inference tools — mark a major inflection point for digital infrastructure. For hyperscalers, colocation providers, and large-scale public and private sector organisations, the message is clear: AI workloads are outpacing the capacity of traditional data centres.

With Blackwell racks drawing up to 140kW each, and forward-looking designs targeting 150kW per rack, liquid cooling and high-density power delivery are now essential. Power, thermal design, and deployment agility are no longer optional. They are mission-critical. The organisations that move fastest will be best placed to lead in the AI era.


Key Takeaways from NVIDIA's GTC 2025

More Power, More Cooling
NVIDIA’s new Blackwell-based racks represent a major leap in power demand. A standard DGX GB200 NVL72 rack — housing 72 GPUs (36 Grace-Blackwell Superchips) — draws 120–140kW per rack, depending on workload intensity and cooling efficiency. That’s 3–6x more power than previous AI racks (A100/H100 at ~25–40kW) and up to 10x more than traditional CPU/GPU racks (5–15kW).
Design infrastructure to support up to 150kW per rack to future-proof for peak-density workloads.
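The density jump above can be sanity-checked with simple arithmetic. A minimal sketch, using only the kW ranges quoted in this article (not official vendor specifications):

```python
# Back-of-envelope check of the rack-density multiples quoted above.
# All kW figures are the ranges from this article, not official specs.

def density_multiple(new_kw: float, old_kw: float) -> float:
    """How many times denser the new rack is than the old one."""
    return new_kw / old_kw

blackwell_kw = (120, 140)   # GB200 NVL72 range quoted above
hopper_kw = (25, 40)        # A100/H100-era AI racks
legacy_kw = (5, 15)         # traditional CPU/GPU racks

# Worst-case comparisons: peak Blackwell draw vs typical older racks
vs_hopper = density_multiple(blackwell_kw[1], hopper_kw[0])   # 140 / 25 = 5.6x
vs_legacy = density_multiple(blackwell_kw[1], legacy_kw[1])   # 140 / 15 ≈ 9.3x

# Facility-level impact: a single 10-rack row at the 150kW design target
row_kw = 10 * 150           # 1,500kW (1.5MW) of IT load for one row

print(f"vs A100/H100 racks: up to {vs_hopper:.1f}x")
print(f"vs legacy racks:    up to {vs_legacy:.1f}x")
print(f"10-rack row at 150kW design point: {row_kw} kW")
```

One row of AI racks at the 150kW design point draws 1.5MW of IT load before cooling overhead, which is why the audit steps later in this article start with power.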

Faster AI, Lower Cost per Model
With the introduction of NVIDIA Inference Microservices (NIMs), deploying AI models is now significantly faster and more cost-effective. NIMs are pre-packaged, containerised microservices that deliver high-performance inference on leading foundation models via standard APIs. However, unlocking their full potential requires infrastructure optimised for high-density compute, low latency, and scalable GPU performance.
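Because NIMs expose standard, OpenAI-compatible HTTP APIs, application code stays simple. A minimal sketch of calling one, assuming a NIM container is already running locally on port 8000 and serving the model named below (the URL and model identifier are illustrative assumptions, not details from this article):

```python
# Sketch of calling a locally deployed NIM over its OpenAI-compatible API.
# The endpoint URL and model name below are illustrative assumptions.
import json
import urllib.request

NIM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local deployment

payload = {
    "model": "meta/llama-3.1-8b-instruct",  # hypothetical model identifier
    "messages": [
        {"role": "user", "content": "Summarise liquid cooling in one sentence."}
    ],
    "max_tokens": 128,
}

request = urllib.request.Request(
    NIM_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment to send the request against a running NIM container:
# with urllib.request.urlopen(request) as response:
#     print(json.load(response)["choices"][0]["message"]["content"])
```

The application layer is deliberately thin; the hard part, as the rest of this article argues, is the power and cooling underneath the GPUs serving that request.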

Liquid Cooling Becomes Essential
The heat generated by Blackwell-class GPUs makes liquid cooling a necessity. Cooling solutions such as direct-to-chip and rear-door heat exchangers are now the gold standard for sustainable, high-performance AI infrastructure.
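The case for liquid cooling is simple physics: water carries heat far more effectively than air. A rough sketch of the coolant flow a single rack needs, assuming single-phase direct-to-chip water cooling and a 10°C loop temperature rise (both illustrative assumptions):

```python
# Rough coolant flow needed to remove one rack's heat with direct-to-chip
# water cooling, using Q = m_dot * c_p * delta_T (single-phase, steady state).
# The 10 degC loop temperature rise is an illustrative assumption.

RACK_HEAT_W = 140_000        # 140kW Blackwell-class rack (from this article)
CP_WATER = 4186              # specific heat of water, J/(kg*K)
DELTA_T = 10                 # coolant temperature rise across the rack, K
WATER_DENSITY = 1.0          # kg per litre (approximate)

mass_flow = RACK_HEAT_W / (CP_WATER * DELTA_T)        # kg/s
litres_per_min = mass_flow / WATER_DENSITY * 60       # L/min

print(f"Mass flow: {mass_flow:.2f} kg/s")
print(f"Flow rate: {litres_per_min:.0f} L/min per rack")
```

Roughly 200 litres of water per minute, per rack. Moving the same heat with air, whose volumetric heat capacity is orders of magnitude lower, would demand impractical airflow, which is why direct-to-chip loops and rear-door heat exchangers dominate at these densities.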


Implications for Data Centre Strategy

Hyperscale Cloud Providers
Next-generation AI services are arriving quickly — but to deploy them effectively, hyperscalers must prioritise infrastructure that supports high-density workloads. Efficiency, scalability, and sustainability will directly impact service availability and economics.

Colocation Providers
Colocation operators must evolve fast. Facilities that offer AI-ready environments — including higher rack densities, liquid cooling capabilities, and flexible power configurations — will become the preferred choice for AI deployments. Commercial flexibility and sustainability credentials will further differentiate leaders.

Organisations
Organisations looking to scale AI must assess whether their on-premises environments or current data centre providers can support the performance, density, and sustainability demands of modern AI workloads. The question is no longer if — but how fast your infrastructure can scale.


Strategic Actions for Digital Infrastructure Leaders

  1. Evaluate Existing Power Infrastructure
    Conduct a detailed audit of your power infrastructure to determine whether it can support 120–140kW per rack, with the flexibility to scale to 150kW per rack for future readiness. Designs should accommodate this higher threshold while ensuring redundancy and fault tolerance.

  2. Evaluate Existing Cooling Infrastructure
    Assess your current cooling systems’ ability to manage the extreme thermal loads generated by Blackwell GPUs and future AI architectures. Identify any limitations and plan for enhancements that support liquid cooling standards.

  3. Plan for Advanced Cooling Solutions
Integrate liquid cooling technologies — including direct-to-chip, immersion, and rear-door heat exchangers — into your infrastructure roadmap. Prioritise systems capable of scaling to 150kW-per-rack densities.

  4. Secure Scalable, High-Density Power
    Collaborate with utilities and energy partners to secure high-density power capacity, ensuring alignment with renewable energy sourcing and grid innovation to meet sustainability goals.

  5. Prioritise Sustainability & Efficiency
    With AI workloads consuming significantly more energy, focus on improving power usage effectiveness (PUE) and water usage effectiveness (WUE). Incorporate green energy procurement and next-generation cooling strategies to minimise environmental impact and operational cost.

  6. Analyse Total Cost of Ownership (TCO) Across Architectures
    Develop a robust TCO model that considers infrastructure upgrades, energy consumption, cooling, security, and long-term operations. Compare the economics of on-premises, colocation, and hybrid cloud deployments to determine the most cost-effective and scalable approach.

  7. Enhance Security Posture for High-Density Environments
    Strengthen physical and digital security protocols to address the evolving risks of high-density AI environments — including data protection, access control, and monitoring across all layers of the stack.

  8. Consider Strategic Partnerships Across the Spectrum
    Evaluate build vs. buy vs. partner strategies to accelerate time to market:
          1. Build (On-Premises): Assess the feasibility and cost of constructing new high-density data centres capable of supporting 150kW per rack.
          2. Buy (Colocation): Partner with colocation providers offering certified AI-ready infrastructure with high-density power and integrated cooling.
          3. Partner (Hybrid Cloud): Leverage public cloud scale for flexible workloads while maintaining control and compliance for sensitive operations on-premises or in colocation.
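Steps 1, 5, and 6 above ultimately rest on the same arithmetic: IT load × PUE × energy price. A minimal sketch of that model (the rack count, PUE values, and electricity tariff are illustrative assumptions, not NEXTDC or NVIDIA figures):

```python
# Back-of-envelope annual energy cost model for comparing facility options.
# Every input below is an illustrative assumption for the sketch.

HOURS_PER_YEAR = 8760

def annual_energy_cost(racks: int, kw_per_rack: float, pue: float,
                       price_per_kwh: float) -> float:
    """Annual electricity cost: IT load scaled by PUE at a flat tariff."""
    facility_kw = racks * kw_per_rack * pue
    return facility_kw * HOURS_PER_YEAR * price_per_kwh

# Same 10-rack AI deployment under two facility efficiencies
air_cooled = annual_energy_cost(racks=10, kw_per_rack=140, pue=1.6,
                                price_per_kwh=0.15)
liquid_cooled = annual_energy_cost(racks=10, kw_per_rack=140, pue=1.2,
                                   price_per_kwh=0.15)

print(f"PUE 1.6: ${air_cooled:,.0f}/year")
print(f"PUE 1.2: ${liquid_cooled:,.0f}/year")
print(f"Saving:  ${air_cooled - liquid_cooled:,.0f}/year")
```

Under these assumptions, a 0.4 improvement in PUE saves well over half a million dollars a year on just ten racks, which is why cooling efficiency belongs in the TCO model rather than being treated as a facilities detail.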

Looking Ahead

The demand for AI-ready data centres is surging. As workloads continue to grow in complexity and consumption, the gap between AI-capable and AI-constrained infrastructure will only widen.

Those who delay investment risk becoming capacity-constrained, cost-exposed, and competitively disadvantaged.


Conclusion

NVIDIA’s GTC 2025 wasn’t just a technology showcase — it was a wake-up call. The AI age is here, and it’s redefining the limits of infrastructure performance and sustainability.

If you're reassessing your power, cooling, or deployment strategy — or looking to align your infrastructure roadmap with the AI leaders — NEXTDC is here to help.


Want to learn more?

Contact the NEXTDC team to unpack the latest from NVIDIA GTC 2025 and explore how our high-density, liquid-cooled data centre solutions can power your AI ambitions.

As an NVIDIA DGX-Certified Provider, we deliver high-density, liquid-cooled, AI-ready environments that help you move faster, scale smarter, and stay ahead.

Why NEXTDC? Because your AI future deserves a foundation built for what’s next.
