OpenAI is collaborating with semiconductor manufacturer Broadcom to design and deploy a massive new network of custom artificial intelligence accelerators. The partnership aims to build out 10 gigawatts of computing infrastructure across a global footprint of data centers, a significant move by the AI research and deployment company to create its own specialized hardware.
This strategic initiative represents a pivotal step for OpenAI, which has historically relied on third-party hardware to power its widely used models like ChatGPT. By bringing chip design in-house, the company intends to embed its deep understanding of large language models directly into silicon. Under the co-development agreement, Broadcom will manage the production and integration of these custom chips, which are expected to enhance performance and meet the growing computational demands of OpenAI’s user base, now serving over 800 million people weekly.
A New Era of Custom Silicon
The decision to create bespoke hardware marks a turning point for OpenAI. The company believes that by designing its own accelerators, it can achieve new levels of capability and intelligence that are not possible with off-the-shelf components. The new chips are being designed specifically to handle the complex mathematical operations that form the foundation of modern AI models. OpenAI will lead the design process, leveraging its extensive experience in building and training frontier models.
Greg Brockman, President of OpenAI, stated that building their own chip allows the company to integrate lessons learned from creating products like GPT-4 directly into the hardware. This vertical integration of software and hardware is a strategy employed by other major technology firms seeking to optimize performance and control their technology stack. The partnership gives OpenAI greater influence over its hardware roadmap and reduces its dependency on traditional chip suppliers, a critical factor as the demand for AI processing power continues to surge globally.
The Scale and Timeline of Deployment
The 10 GW figure underscores the immense scale of this undertaking. To put this in perspective, one gigawatt of electricity can power approximately 700,000 homes in the United States. This substantial power budget will support a vast network of accelerators distributed across OpenAI’s own facilities and the data centers of its partners worldwide. The infrastructure is being built to handle the next generation of AI models and applications, which are expected to be significantly more complex and resource-intensive.
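For a rough sense of scale, the article's homes-per-gigawatt figure can be extended to the full planned deployment. The following is a back-of-envelope illustration based only on the approximate 700,000-homes-per-gigawatt number cited above, not on any figure from the announcement itself:

```python
# Illustrative back-of-envelope calculation (assumes the article's
# rough figure of ~700,000 US homes powered per gigawatt).
GW_PLANNED = 10
HOMES_PER_GW = 700_000  # approximate figure cited in the article

equivalent_homes = GW_PLANNED * HOMES_PER_GW
print(f"{GW_PLANNED} GW is roughly the power draw of {equivalent_homes:,} US homes")
# prints: 10 GW is roughly the power draw of 7,000,000 US homes
```

By this estimate, the full build-out would draw power comparable to several million US households, which helps explain the multi-year deployment timeline described below.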
The project has a clear timeline for implementation. The first of these custom-designed systems are scheduled to come online in the second half of 2026, and full deployment of the 10 GW of computing power is expected to be completed by the end of 2029. This multi-year rollout reflects the complexity of not only manufacturing the chips but also integrating them into a globally distributed and highly interconnected network of data centers. The two companies have signed a term sheet covering the co-development of the systems and their deployment into production environments.
Technological and Strategic Implications
This partnership also carries significant weight in the ongoing debate over the best way to build the underlying infrastructure for large-scale AI. OpenAI and Broadcom have committed to using Ethernet networking to connect the racks of accelerators. This is a notable choice, as many high-performance computing clusters have historically favored alternatives like InfiniBand. The selection of Ethernet, a ubiquitous and standards-based technology, suggests a focus on scalability, cost-effectiveness, and interoperability.
Hock Tan, President and CEO of Broadcom, framed the deal as a validation of the Ethernet-based approach for next-generation AI infrastructure. Broadcom will provide an end-to-end portfolio of connectivity solutions, including Ethernet switches, PCIe components, and high-speed optical links to facilitate rapid data transfer between racks. This comprehensive suite of technologies is designed to create an open, scalable, and power-efficient AI cluster architecture. The collaboration is intended to set new industry benchmarks for the design and deployment of such systems.
Executive Perspectives on the Partnership
Leaders from both OpenAI and Broadcom have emphasized the strategic importance of this collaboration. Sam Altman, OpenAI’s Co-founder and CEO, described the partnership as a critical step toward building the infrastructure necessary to unlock the full potential of artificial intelligence. He noted that developing its own accelerators complements a broader ecosystem of partners, all working to expand the capacity required to advance the frontiers of AI for the benefit of humanity.
Broadcom’s Role and Vision
Charlie Kawwas, President of Broadcom’s Semiconductor Solutions Group, elaborated on the technical synergy between the two companies’ contributions. He explained that custom accelerators pair remarkably well with standards-based Ethernet networking to create AI infrastructure that is optimized for both cost and performance. Kawwas expressed confidence in the partnership, stating that it reaffirms Broadcom’s leadership in the AI infrastructure market. The company sees this co-development effort as a way to pave the way for the future of AI by creating powerful and efficient systems at an unprecedented scale.