Fujitsu and Nvidia are expanding a strategic collaboration to co-develop full-stack artificial intelligence systems designed to accelerate AI-driven business and governmental transformation. The partnership aims to resolve persistent data-processing bottlenecks by deeply integrating Fujitsu’s next-generation Arm-based processors with Nvidia’s powerful graphics processing units, creating a more seamless and powerful hardware foundation for the demands of generative AI.
At the heart of the initiative is the plan to combine Fujitsu’s forthcoming “Monaka” series CPUs with Nvidia GPUs using a high-speed interconnect technology called NVLink Fusion. This approach moves beyond traditional system designs, which often struggle with bandwidth constraints between the CPU and GPU, creating performance issues during intensive AI workloads. By fostering a tighter, “silicon-level” integration, the companies intend to deliver platforms optimized for both training large-scale AI models and deploying them efficiently for inference tasks, starting with key sectors such as manufacturing, healthcare, and robotics.
A New Architecture for AI Acceleration
The core technical challenge in large-scale AI computing is the immense volume of data that must constantly move between processors. Traditional connections, like PCIe, can become a significant bottleneck, slowing down complex training and inference. The collaboration directly addresses this by using Nvidia’s NVLink Fusion, a chip-to-chip interconnect that provides a high-bandwidth, low-latency bridge between a CPU and a GPU. This enables the creation of a new class of integrated products where a custom CPU, like Fujitsu’s Monaka, can have a high-speed coherent connection with Nvidia’s GPUs, effectively replacing Nvidia’s own Grace CPUs in certain designs.
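To make the bandwidth gap concrete, here is a back-of-envelope sketch (not from the announcement) comparing idealized CPU–GPU transfer times. The bandwidth figures are representative published per-direction numbers for PCIe 5.0 x16 and Nvidia's NVLink-C2C link as used in Grace Hopper; actual figures for Monaka systems with NVLink Fusion have not been disclosed.

```python
# Back-of-envelope: time to move a large model's weights between CPU and GPU
# over two interconnects. Bandwidth figures are representative published
# numbers, NOT confirmed specs for Monaka or NVLink Fusion systems.

PCIE_GEN5_X16_GBPS = 64   # ~64 GB/s per direction for PCIe 5.0 x16
NVLINK_C2C_GBPS = 450     # ~450 GB/s per direction for NVLink-C2C (900 GB/s total)

def transfer_seconds(payload_gb: float, bandwidth_gbps: float) -> float:
    """Idealized transfer time, ignoring protocol overhead and latency."""
    return payload_gb / bandwidth_gbps

weights_gb = 140  # e.g. a 70B-parameter model in 16-bit precision

print(f"PCIe 5.0 x16: {transfer_seconds(weights_gb, PCIE_GEN5_X16_GBPS):.2f} s")
print(f"NVLink-C2C:   {transfer_seconds(weights_gb, NVLINK_C2C_GBPS):.2f} s")
```

Even this idealized arithmetic shows roughly a 7x gap, which compounds when weights, activations, or KV caches must shuttle between CPU and GPU memory repeatedly during training or inference.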
This “silicon-level optimization” is a key part of the partnership’s strategy. It involves ensuring that Fujitsu’s Arm-based processor architecture works in close concert with Nvidia’s CUDA programming environment, the ubiquitous platform for developing software for Nvidia GPUs. By opening up its NVLink ecosystem, Nvidia is enabling partners like Fujitsu to build semi-custom AI infrastructure tailored to specific performance and efficiency needs, fostering a modular approach to data center design. This allows hyperscalers and enterprises to adopt Nvidia’s rack-scale systems without being locked into a single vendor’s architecture.
The Fujitsu Monaka Processor
The CPU side of this partnership is Fujitsu’s next-generation Monaka processor, a high-core-count chip built on the Armv9 architecture. The Monaka is a formidable processor in its own right, featuring a 144-core design that leverages a chiplet-based system. Developed with Broadcom, it uses four 36-core compute chiplets manufactured on TSMC’s advanced 2nm process technology. These are stacked on top of large SRAM cache tiles built on a 5nm process, creating a dense and powerful package.
Notably, the Monaka processor is designed for a wide range of data center workloads and supports cutting-edge interfaces like PCIe 6.0 and CXL 3.0, which are crucial for connecting accelerators and other components. It also incorporates Arm’s Scalable Vector Extension 2 (SVE2) for high-performance computing tasks and the Armv9 Confidential Computing Architecture for enhanced security and workload isolation. Fujitsu aims for the Monaka to be twice as energy-efficient as competing x86 processors upon its scheduled release in fiscal year 2027, making it a powerful and efficient partner for Nvidia’s GPUs.
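The chiplet layout described above can be sanity-checked with simple arithmetic; the figures below are the publicly reported design targets, not measured silicon.

```python
# Sanity check of the reported Monaka layout: four 36-core compute chiplets
# (TSMC 2 nm) stacked on SRAM cache tiles (5 nm). Values are design targets
# from public announcements, not measurements.

COMPUTE_CHIPLETS = 4
CORES_PER_CHIPLET = 36

total_cores = COMPUTE_CHIPLETS * CORES_PER_CHIPLET
print(total_cores)  # 144, matching the announced core count
```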
A Three-Pillar Strategic Framework
The collaboration is structured around three distinct pillars aimed at creating a comprehensive AI ecosystem from hardware to software and deployment.
Software and Platform Integration
The first pillar focuses on co-developing a platform for AI agents. This involves integrating Fujitsu’s “Kozuchi” AI platform with Nvidia Dynamo, Nvidia’s open-source framework for orchestrating and serving AI inference workloads. These agents will be built using Nvidia’s NeMo framework for custom language models and will incorporate Fujitsu’s own “Takane” AI model. The resulting applications will be packaged as Nvidia NIM microservices, a containerized format designed to simplify and standardize the deployment of AI applications for enterprise customers.
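NIM microservices generally expose an OpenAI-compatible HTTP API once deployed, so a client could interact with a packaged agent through a standard chat-completion request. The sketch below builds such a payload; the endpoint URL and model name are hypothetical placeholders, not identifiers from the Fujitsu–Nvidia announcement.

```python
import json

# Sketch of querying a deployed NIM microservice. NIM containers expose an
# OpenAI-compatible API; the endpoint and model name here are HYPOTHETICAL
# placeholders, not names from the Fujitsu/Nvidia announcement.

NIM_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumed local deployment

def build_request(prompt: str, model: str = "example/agent-model") -> dict:
    """Build an OpenAI-style chat-completion payload for a NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

payload = build_request("Summarize today's production-line anomalies.")
print(json.dumps(payload, indent=2))
# An HTTP POST of this payload to NIM_ENDPOINT would return a completion;
# the network call is omitted since it requires a running NIM container.
```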
Hardware Synergy and Optimization
The second pillar is the hardware integration itself, as described above. This involves the deep, silicon-level work to ensure Fujitsu’s Monaka CPUs and Nvidia’s GPUs communicate seamlessly via NVLink Fusion. This tight coupling is intended to maximize throughput and allow for the training and operation of larger and more complex AI models than is possible with more conventional system architectures.
Fostering an Industrial Ecosystem
The third pillar involves building a robust partner ecosystem to drive adoption across various industries. The initial focus will be on developing use cases for industrial automation and robotics within Japanese industries, leveraging Fujitsu’s strong domestic presence. From there, the partners plan to expand to other global markets. Fujitsu will utilize its extensive network of data centers and cloud infrastructure to support the rollout of these new integrated systems.
Bolstering Japan’s Sovereign AI Goals
This partnership aligns closely with Japan’s broader strategic goal of establishing “sovereign AI,” which refers to a nation’s ability to develop and sustain its own AI capabilities, including foundational models and the underlying infrastructure. By building advanced AI systems domestically, Japan aims to ensure it can control its AI development and deployment while minimizing reliance on foreign providers. Jensen Huang, CEO of Nvidia, noted that the “AI industrial revolution has begun,” emphasizing the need to build the infrastructure to power it in Japan and globally.
The collaboration is a significant step in this direction, combining Fujitsu’s deep expertise in computing with Nvidia’s market-leading AI technology. Japan’s AI Strategy 2022 lays out initiatives to foster a domestic AI ecosystem, and partnerships like this are critical to achieving that ambition. The development of homegrown models like Fujitsu’s Takane, designed for high-stakes sectors, is a key part of this national strategy.
Building on a Supercomputing Legacy
This initiative builds upon Fujitsu’s long and successful history in high-performance computing (HPC). The company was central to the development of the Fugaku supercomputer, which was once the world’s fastest and is powered by the Fujitsu-designed, Arm-based A64FX processor. That experience with Arm architecture in a supercomputing context provides a strong foundation for the development of the Monaka CPU and its integration into next-generation systems.
Looking ahead, the partnership is not limited to current AI challenges. The companies have stated their intention to expand the collaboration into HPC and quantum computing, signaling a long-term vision for advanced computing infrastructure. This is further evidenced by plans for “FugakuNEXT,” Japan’s next flagship supercomputer, which will also feature a hybrid design. FugakuNEXT will pair a derivative of the Monaka processor, tentatively called “MONAKA-X,” with Nvidia GPUs and NVLink Fusion, solidifying this integrated architecture as a cornerstone of Japan’s future research and industrial ambitions.