Q&A with MD of Data Centres at Northern Data Group
The landscape of High-Performance Computing (HPC) is undergoing a profound transformation, driven by the explosive growth of AI.
Data centres are experiencing an unprecedented surge in power demands, with AI technologies pushing the boundaries of traditional infrastructure. The computational revolution is no longer just about processing speed, but about managing the immense electrical requirements that come with cutting-edge AI workloads.
At the heart of this challenge lies a stark reality: AI is fundamentally reshaping data centre energy consumption. Goldman Sachs Research estimates that AI could increase data centre power consumption by approximately 200 terawatt-hours per year between 2023 and 2030. The energy intensity is striking: a single ChatGPT query consumes around 2.9 watt-hours of electricity, roughly ten times the 0.3 watt-hours of a standard Google search. This sharp increase is not merely a technical nuance, but a critical inflection point for the entire technological ecosystem.
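A quick back-of-the-envelope calculation puts those figures in perspective. The per-query values (2.9 Wh and 0.3 Wh) are the ones cited above; the query volume used for the aggregate is a purely illustrative assumption, not a sourced figure.

```python
# Back-of-the-envelope check on the per-query energy figures cited above.
CHATGPT_WH_PER_QUERY = 2.9   # watt-hours per ChatGPT query (cited above)
SEARCH_WH_PER_QUERY = 0.3    # watt-hours per standard Google search (cited above)

ratio = CHATGPT_WH_PER_QUERY / SEARCH_WH_PER_QUERY
print(f"An AI query uses roughly {ratio:.1f}x the energy of a web search")

# Illustrative aggregate: one billion AI queries per day for a year.
queries_per_day = 1_000_000_000          # assumption for illustration only
wh_per_year = CHATGPT_WH_PER_QUERY * queries_per_day * 365
twh_per_year = wh_per_year / 1e12        # 1 TWh = 1e12 watt-hours
print(f"That volume alone would draw about {twh_per_year:.1f} TWh per year")
```

Even at that illustrative volume, a single popular AI service accounts for around one terawatt-hour a year, which makes the 200 TWh industry-wide estimate easier to picture.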
Data centres are therefore facing a complex transformation, in which power management has become the central strategic consideration. Gary Tinkler, MD of Data Centres at Northern Data Group, explains how the company is pioneering strategies to meet the pressures AI is placing on data centres.
What is the current challenge facing data centres in the context of HPC?
When we talk about HPC, the fusion of AI and computational power is driving incredible innovations. In the past, we focused mainly on cooling solutions to keep systems running smoothly.
But now, with AI-driven HPC systems requiring so much more power, the real challenge isn't just about keeping hardware cool; it's about managing an enormous demand for electricity. This pivotal shift in the industry is telling us something important: it’s no longer a cooling problem—it’s a power problem.
What are the current power requirements for data centres, particularly regarding AI?
As an example, let’s take a closer look at NVIDIA, a giant in the HPC world. They’ve created popular air-cooled systems that have served us well. However, as AI models get more complex, the power requirements are skyrocketing.
Reports show that AI training tasks use 10-15 times more power than traditional data centres were designed to handle. Facilities that once operated at 5-8kW per rack are quickly becoming outdated. Recently, NVIDIA announced a major rollout of new GPUs, highlighting the urgent need for advanced technology to meet these growing power demands.
How are data centre operators responding to increasing power demands?
Data centre operators are now reevaluating their power strategies because their existing setups can’t keep up. For example, a facility that used to work well with 8kW per rack now finds that this just isn’t enough anymore.
As AI continues to advance, we’re looking at power needs soaring to between 50-80kW per rack. This isn’t just a small tweak; it’s a major change in how data centres need to be designed.
What steps can data centres take to manage the transition to higher power demands?
One of the biggest challenges in this transition is updating power supply systems. Traditional Power Distribution Units (PDUs) aren’t built to handle the demands of these new AI-driven systems. To meet the required power levels, data centres can invest in more advanced PDUs that can manage heavier loads while boosting overall efficiency.
For many setups today, that means installing six units that can each supply 63 amps of power. This shift not only changes how data centres are built but also adds complexity to how everything is arranged inside the racks.
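The rack power budget implied by that configuration can be sketched with simple three-phase arithmetic. The 63-amp rating and the six-unit count come from the discussion above; the 400V three-phase supply and the A/B (2N) redundancy scheme are illustrative assumptions, since the article doesn't specify them.

```python
import math

# Rough rack power budget from six 63 A PDUs.
# 63 A and six units are cited above; the 400 V three-phase supply and
# the A/B (2N) redundancy scheme are illustrative assumptions.
AMPS_PER_PDU = 63          # amps per PDU (cited above)
VOLTS_LINE_TO_LINE = 400   # assumed European three-phase supply
NUM_PDUS = 6               # units per rack (cited above)

# Apparent power of one three-phase PDU: S = sqrt(3) * V_LL * I
kva_per_pdu = math.sqrt(3) * VOLTS_LINE_TO_LINE * AMPS_PER_PDU / 1000
print(f"Per PDU: {kva_per_pdu:.1f} kVA")

# With 2N (A/B feed) redundancy, only half the PDUs carry load normally.
usable_kva = kva_per_pdu * NUM_PDUS / 2
print(f"Usable per rack under 2N: {usable_kva:.1f} kVA")
```

Under these assumptions each PDU delivers roughly 44 kVA, and even with full A/B redundancy the rack retains around 130 kVA of usable capacity, comfortably above the 50-80kW densities discussed above, with headroom for derating.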
What innovative solutions are being implemented in data centres?
Of course, as facilities rush to meet these new power needs, we’re seeing innovative solutions come to light. Ultrascale Digital Infrastructure, for example, has partnered with Cargill so that its data centres can run on 99% plant-based cooling fluids. This eliminates the billions of gallons of water used annually in cooling and opens new opportunities for water conservation, particularly for data centres designed to rely on water in their operations.
How are data centre designs evolving to meet increasing power demands?
As power demands rise, the standard 1200mm deep racks are becoming outdated. To meet this increase, we’re likely to see a shift to 1400mm deep racks. This isn’t just about making things bigger; it’s about maximising flexibility and capacity. Recent reports indicate that wider rack options, ranging from 800mm to 1000mm, are becoming more popular, providing standardised 52 Rack Units (RU) that help facilities scale more effectively.
This change in rack design is crucial because it directly affects how data centres can support the evolving demands of AI and HPC. By optimising the size of racks, facilities can improve airflow, streamline power distribution, and ultimately boost operational efficiency.
What challenges do data centres face regarding “stranded space”?
As facilities designed for traditional workloads try to adapt to new HPC infrastructure, they often find themselves with wasted space. Older data centres weren’t built to handle the density and power needs of modern AI workloads. Even those with upgraded setups, such as indirect cooling solutions that can support 30kW per rack, are now proving inadequate as requests frequently exceed 60kW. Facility operators are rethinking not just their cooling methods but also how to make the best use of their available space while preparing for increasing power demands.
Traditional data centres were built with certain assumptions about power needs—typically around 5-8kW per rack. This led to innovations like aisle containment, designed to improve cooling in response to growing demands. However, as AI keeps pushing the limits, these outdated assumptions are no longer enough. HPC deployments now require facilities that can handle power outputs of up to 80kW per rack or even more.
What does the future hold for data centres in the context of AI and HPC?
We’re beginning to see a new wave of advanced data centres emerge that look very different: facilities designed from the ground up to meet these heightened demands, able to handle diverse power requirements while ensuring flexibility for future growth.
What is the overarching challenge for the HPC industry?
As AI continues to reshape what’s possible in HPC, the industry is faced with a significant challenge at its core: the power problem. The traditional focus on cooling just isn’t enough anymore. With exciting new technologies being developed at a faster pace than ever, attention is shifting to building a robust power infrastructure that can support this new frontier.
Data centres that evolve in their design, layout, and operational strategies to turn this power challenge from a roadblock into an opportunity, can unlock the full potential of AI in high-performance computing. The future of HPC looks bright, but it all depends on our ability to adapt to these new demands.
AI Magazine is a BizClik brand