It is no secret that rack power densities are increasing inside the data center, and the rollout of artificial intelligence (AI) workloads is only going to require more power.
This was a major topic of discussion at this year’s Data Center Thought Leadership Summit (TLS), as representatives from the data center and AI industries came together to reflect on what it’s going to take to successfully roll out AI applications in today’s data centers.
As one TLS panelist – Jim Julson of CoreWeave – explained, “If you had asked [me] two years ago about 40 kW racks, I would have said that’s insane. Now…there’s a 110 kW rack. It’s moving faster than the people deploying it can keep up with or react to.”
If the current discourse around the extraordinary power requirements that may accompany AI workloads proves true, data centers will not just have a power availability problem; they will face a power distribution challenge as well.
In my opinion, we have not yet arrived at the point where fully baked AI applications are being deployed en masse. From my vantage point, much of the industry’s understanding of AI is still in a developmental stage. Even so, I’d absolutely concede that AI’s evolution is moving at breakneck speed, and a major segment of workloads will leverage it for increased productivity. The long-term success of those workloads will depend on a race to solve evolving power generation and distribution challenges.
More power is needed
For an AI rollout to be successful, power companies and utilities must figure out a way to produce more electricity. Right now, power plants are forecasting major issues in being able to support and meet the combined power requirements of both data centers and the local communities they are meant to serve.
This challenge can best be solved by leveraging nuclear power, whether that power is generated by the power companies or through small modular reactors deployed by data centers or smaller power co-operatives. Efficiency is gained both financially, providing users with the most “bang for their buck,” and in capacity returned to local energy grids.
Furthermore, nuclear is by far the safest method of energy production, especially when compared to the long-term impacts of most current power generation methods. Coal-fired and natural gas turbines have much more negative long-term repercussions on the environment and the human population than nuclear power.
Nuclear power does have a couple of drawbacks, namely an outdated negative perception among the general public and regulatory roadblocks that stand in the way of its expansion. When the general public hears about nuclear energy being generated in their backyards, it can become extremely difficult for them to divorce the notion of nuclear energy from weapons technology – which are two very different things.
But to be clear, if the migration to AI continues and power becomes a constraining factor, generation is just one energy roadblock that data centers will face.
Even if the local energy grid and power utilities are generating enough energy to drive incredibly power-hungry racks, there’s still the issue of getting power into the envelope. This raises the second issue requiring a solution. Again, innovation and investment will come to the forefront.
The power distribution problem
The industry has made strides in exploring different methods of cooling to meet the requirements of AI data centers. Traditional air, glycol, and more advanced methods of dielectric liquid immersion cooling should come to mind.
However, I have come across only scant discussion of power distribution, a critical component of the enduring success of AI data centers. That is: even if we solve the power generation problem, current cabling infrastructure within today’s data centers may not be able to accommodate a proportional increase in power flowing throughout the facility.
In a nutshell, more power necessitates bigger wires. Unfortunately, supporting these larger workloads with bigger wires increases the amount of cabling that we’re bringing into the data center. Eventually, data centers will hit a point where so much cable is coming in that space becomes a limiting factor. Some are even predicting that cabling racks could become unable to support the weight.
I believe the data center industry could solve these distribution issues by investigating different materials or electrical parameters that could get more power through the same size cable. Today’s data centers are still working with aluminum and copper wires. How do we get more power through those wires? Do we transition to platinum conductors, perhaps? Higher voltage? Higher frequency? A resurgence of fused disconnects, and the old becomes new again?
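The voltage question lends itself to a quick back-of-the-envelope check. The sketch below is an illustration, not a design tool: the 3 A/mm² copper current density is an assumed rule of thumb, the voltages are arbitrary example distribution levels, and a unity power factor is assumed. It shows how raising the distribution voltage shrinks the current, and therefore the conductor cross-section, needed to feed a rack like the 110 kW example cited above:

```python
import math

def line_current_3ph(power_w, volts_ll, pf=1.0):
    """Line current for a balanced three-phase load: I = P / (sqrt(3) * V_LL * PF)."""
    return power_w / (math.sqrt(3) * volts_ll * pf)

def conductor_area_mm2(current_a, current_density=3.0):
    """Rough copper cross-section, assuming a rule-of-thumb current density in A/mm^2."""
    return current_a / current_density

rack_w = 110_000  # the 110 kW rack mentioned earlier
for volts in (208, 415, 600):  # illustrative distribution voltages
    amps = line_current_3ph(rack_w, volts)
    print(f"{volts} V: {amps:.0f} A per line, ~{conductor_area_mm2(amps):.0f} mm^2 copper")
```

The same power at triple the voltage needs roughly a third of the current, which is exactly the lever that higher-voltage distribution schemes pull to keep conductor size and weight in check.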
These ideas are already being used to a certain extent, as we have witnessed different materials being deployed at the integrated circuit and PCB level. On larger scales, however, there’s a huge economic question to be asked: is it economically feasible to run platinum wiring? This is likely untenable for most data center operations, but that route could potentially be viable for larger FAANG-flavored companies.
Much like the discussions around cooling and power generation, the data center industry should be asking if there is another medium by which to transmit power. We have seen transmission via microwave. We’ve seen transmission over highly conductive metals such as silver and gold. In the realm of breakers, going from traditional copper-mated contacts to a silicon carbide solid-state breaker would be an example of power distribution innovation.
Simply put, there are other distribution methods the industry should be investigating.
Running two races at once
Tomorrow’s data centers have more than just a power generation problem. They also face a challenge around power distribution. Innovation is the key to both of these problems – putting the data center industry in a race against increasing rack power densities to innovate new power solutions.
Technological, financial, and regulatory models must be challenged, and revised ideas explored on all of these fronts. An entire industry mustn’t be shackled by outdated models and understanding.
The power issues present two critical races that the data center industry must run in tandem if we want to successfully roll out next-generation AI workloads. As the industry figures out solutions to these roadblocks, we will be paving the path for data centers to successfully operate with higher power densities when AI applications are ready to be fully realized.
Terry Davis is the founder of Davis Infrastructure, a partner of Compu Dynamics.