Vertiv and Nvidia are developing an 800-volt direct current (VDC) power platform for next-generation data centers, a move designed to address the escalating energy demands of artificial intelligence. Following a strategic alignment announced in May 2025, Vertiv has advanced the project from conceptual design to a mature engineering phase, aiming for a product release in the second half of 2026. This timeline strategically aligns with the planned 2027 rollout of Nvidia’s powerful Rubin Ultra AI platform.
The collaboration confronts a critical bottleneck in the growth of AI: the inadequacy of existing power infrastructure. For decades, data centers have relied on 48V or 54V DC systems, which are sufficient for racks consuming tens of kilowatts. However, the massive computational density of modern AI models is pushing rack power requirements toward 1 megawatt (MW) and beyond, a threshold that legacy systems cannot physically or efficiently support. The new 800 VDC architecture represents a fundamental redesign of power delivery, intended to create a scalable and efficient foundation for the megawatt-scale “AI factories” of the near future.
Escalating Power Demands of AI Infrastructure
The rapid evolution of AI processors is driving an exponential increase in data center energy consumption. Just a few years ago, an average server rack consumed around 8 kW. With the advent of powerful GPUs for AI, that figure has soared. Racks using Nvidia’s Hopper and Blackwell chips can require 40 kW to 150 kW. The next generation, known as Rubin, is projected to push rack densities from 200 kW to 1 MW and beyond. This order-of-magnitude increase in power density has exposed the physical limits of traditional 54 VDC power distribution.
The core limitation is rooted in the relationship between power, voltage, and current: for a fixed power, current is inversely proportional to voltage (I = P / V). At low voltages like 48V, delivering immense power therefore requires extremely high electrical current; a 400 kW cabinet, for example, would draw over 8,300 amperes. This massive current necessitates thick, heavy, and expensive copper busbars, which create significant space and weight challenges within racks. Furthermore, energy loss to heat is proportional to the square of the current, meaning these high-amperage systems waste substantial energy and demand more complex cooling solutions, further eroding efficiency.
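The arithmetic behind these figures follows directly from I = P / V and the I²R heat-loss law. A minimal sketch using the 400 kW cabinet figure from the text (the busbar resistance below is an arbitrary illustrative value, not a published specification):

```python
# Current needed to deliver a given power at a given bus voltage (I = P / V),
# and the resistive heat dissipated in a conductor (P_loss = I^2 * R).

def required_current(power_w: float, voltage_v: float) -> float:
    """Current in amperes needed to deliver power_w watts at voltage_v volts."""
    return power_w / voltage_v

def resistive_loss(current_a: float, resistance_ohm: float) -> float:
    """Heat dissipated in a conductor carrying current_a amperes (I^2 * R)."""
    return current_a ** 2 * resistance_ohm

cabinet_w = 400_000                        # 400 kW cabinet, as in the article
i_48 = required_current(cabinet_w, 48)     # over 8,300 A at a 48 V bus
print(f"48 V bus: {i_48:,.0f} A")

# Loss scales with the SQUARE of the current: doubling current quadruples heat.
r_bus = 0.0001                             # illustrative busbar resistance (ohms), assumed
print(f"busbar loss at 48 V: {resistive_loss(i_48, r_bus) / 1000:.1f} kW")
```

The squared term is why the current, not the power itself, is the practical constraint at low distribution voltages.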
A New Architecture for High-Density Computing
By increasing the voltage to 800 VDC, the new architecture drastically reduces the current needed to deliver the same amount of power. For the same 400 kW cabinet, the current drops to just 500 amperes, a 94% reduction. This fundamental change offers multiple cascading benefits. The lower current allows for thinner and lighter conductors, reducing copper requirements by as much as 45% and freeing up valuable space within the data center.
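The comparison can be made concrete with the cabinet size and voltages from the text. A short sketch (conductor resistance is held constant purely to illustrate the loss ratio; in practice the higher-voltage system would also use thinner conductors):

```python
# Compare delivery current and relative resistive loss for the same 400 kW
# cabinet on a legacy 48 V bus versus the new 800 VDC bus.

cabinet_w = 400_000                  # 400 kW, as in the article

i_48 = cabinet_w / 48                # over 8,300 A
i_800 = cabinet_w / 800              # 500 A

reduction = 1 - i_800 / i_48         # fraction of current eliminated
loss_ratio = (i_48 / i_800) ** 2     # I^2*R ratio, same conductor (illustrative)

print(f"48 V: {i_48:,.0f} A   800 V: {i_800:,.0f} A")
print(f"current reduction: {reduction:.0%}")
print(f"resistive loss at 48 V vs 800 V (same conductor): {loss_ratio:.0f}x")
```

The 94% current reduction cited in the article falls straight out of the voltage ratio: 48 / 800 = 6% of the original current.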
Efficiency and Simplified Design
The high-voltage approach significantly boosts energy efficiency. By minimizing the current, the platform drastically cuts resistive heat losses. This leads to an estimated 5% improvement in end-to-end power efficiency compared to legacy systems. The architecture also simplifies the power chain by reducing the number of voltage conversions between the electrical grid and the server chips. Fewer conversion steps mean fewer points of failure and less wasted energy. This streamlined design can reduce maintenance costs associated with power supply unit failures by up to 70%.
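Because end-to-end efficiency is the product of each stage's efficiency, removing conversion steps compounds quickly. A sketch of that multiplication, assuming hypothetical stage counts and per-stage efficiencies chosen only to illustrate the shape of the argument (they are not published Vertiv or Nvidia figures):

```python
from math import prod

def chain_efficiency(stage_efficiencies):
    """End-to-end efficiency is the product of every conversion stage's efficiency."""
    return prod(stage_efficiencies)

# Hypothetical legacy chain: grid AC -> UPS -> PDU transformer -> rack PSU (assumed values)
legacy = [0.97, 0.97, 0.96, 0.975]

# Hypothetical 800 VDC chain: centralized rectifier -> DC busway -> rack DC-DC (assumed values)
hvdc = [0.975, 0.995, 0.96]

print(f"legacy chain:  {chain_efficiency(legacy):.1%}")
print(f"800 VDC chain: {chain_efficiency(hvdc):.1%}")
# With these assumed numbers the gap is roughly 5 points, the same ballpark
# as the article's estimated end-to-end improvement.
```

The key design point is that every stage, however efficient, multiplies in its losses, so the shorter chain wins even when its individual stages are no better than the legacy ones.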
Technical and Strategic Collaboration
The joint effort leverages the strengths of both companies. Vertiv brings extensive experience in power systems, including a long history of developing DC power infrastructure for the telecom and industrial sectors. Nvidia provides the roadmap for future AI computing, defining the power requirements for next-generation platforms. Dion Harris, Nvidia’s Senior Director of HPC, Cloud and AI Infrastructure, described the initiative as a necessary shift in how data center power is conceptualized to unlock the full potential of AI.
Scott Armul, Executive Vice President at Vertiv, stated that the scale of AI is reshaping every aspect of data center design. He explained that Vertiv’s expertise in both AC and DC architectures positions the company to address the unique power demands of AI workloads. The goal is to engineer a holistic, scalable system where all infrastructure components interoperate seamlessly, moving from a conceptual vision to a state of readiness for deployment.
From Concept to Gigawatt-Scale Application
Vertiv is moving swiftly to implement the new standard. The company is developing a comprehensive 800 VDC platform that includes centralized rectifiers, high-efficiency DC busways for power distribution, and rack-level DC-DC converters to supply power to the IT equipment. The entire ecosystem is being engineered for the megawatt-scale capacity required by advanced computing environments.
While the full product portfolio is slated for a 2026 launch, the technology is already being applied. Vertiv is using its 800 VDC reference architecture in the early design stages of several large-scale AI factory projects. These real-world applications are testing the platform against gigawatt-scale requirements, demonstrating its scalability and resilience well ahead of its formal market debut.
Ensuring Reliability and Service Readiness
The transition to a higher-voltage environment inside data centers necessitates a focus on safety and reliability. Vertiv is extending its platform-level engineering to its service model to ensure that future AI data centers can be deployed and maintained safely at scale. The company plans to utilize its global network of over 4,000 field engineers to support the new generation of high-voltage DC data centers.
This initiative is part of a broader industry trend among hyperscale providers and their partners to develop high-voltage DC architectures to meet the power crunch driven by AI. As the global race for AI leadership intensifies, the ability to rapidly deploy massive computing power is constrained not by processor availability, but by access to electricity. Innovations in power delivery, such as the 800 VDC platform, are becoming critical enablers for the continued expansion of artificial intelligence.