Nvidia Introduces NVLink Fusion and DGX Cloud Lepton to Expand AI Ecosystem

At Computex 2025, Nvidia introduced NVLink Fusion, which enables third-party CPUs and AI accelerators to be integrated with its GPUs in rack-scale systems, and launched DGX Cloud Lepton, a compute marketplace linking developers to global GPU resources. Together, the two announcements reinforce Nvidia's central role in the future of AI infrastructure.

Nvidia Introduces NVLink Fusion - Credit: ChatGPT, The AI Track

Nvidia Introduces NVLink Fusion – Key Points

  • NVLink Fusion Enables Hybrid AI Systems:

    NVLink Fusion allows non-Nvidia CPUs and custom accelerators to be integrated with Nvidia GPUs, opening Nvidia's proprietary high-speed interconnect to external partners. This includes support for CPUs from Qualcomm and Fujitsu, and AI accelerators from Marvell, MediaTek, and Alchip. EDA vendors Cadence and Synopsys are contributing design tools and IP to the ecosystem. NVLink offers up to 14x the bandwidth of PCIe Gen 5 (see the back-of-envelope sketch after this list) and now extends to full rack-scale architectures through NVLink Switch silicon.

  • Fujitsu’s and Qualcomm’s Custom AI Processors Join the Ecosystem:

    Fujitsu is integrating NVLink into its 144-core Monaka CPU, a 2nm Arm-based processor that stacks compute cores over memory in 3D for power efficiency. Qualcomm, re-entering the server CPU market, joins the Fusion ecosystem so that its new chips can operate in Nvidia-based AI systems, beginning with deployments such as Saudi Arabia's AI cloud project.

  • Broader Silicon Support and Rack-Scale Capabilities:

    NVLink Fusion’s architecture includes a chiplet-based interface positioned adjacent to the compute package, allowing integration at scale. Interconnect providers such as Astera Labs supply supporting connectivity silicon, enabling a broader range of AI accelerators (e.g., custom ASICs) to operate in tandem with Nvidia’s GPUs. The move addresses AI data centers’ growing demand for semi-custom infrastructure design.

  • DGX Cloud Lepton Connects Developers to GPU Resources:

    Nvidia’s new DGX Cloud Lepton platform acts as a compute marketplace, linking developers to tens of thousands of GPUs from partners such as CoreWeave, Crusoe, Foxconn, and SoftBank. It addresses the critical bottleneck of reliable GPU access by unifying AI service orchestration across Nvidia’s partner network.

  • Grace Blackwell GB300 System Enhancements:

    The upcoming GB300, expected in Q3 2025, will offer significant system-level performance improvements for advanced AI workloads. This follows previous announcements of Nvidia’s Grace Hopper chips and enhanced NVLink-powered compute nodes.

  • Foxconn Partnership for AI Supercomputer in Taiwan:

    Nvidia and Foxconn will build a large-scale AI supercomputer in Taiwan using 10,000 Blackwell chips. This project, part of Taiwan’s national AI infrastructure initiative, underscores Nvidia’s growing footprint in sovereign compute and public-private tech development.

  • Nvidia Mission Control Software Launch:

    Nvidia is rolling out Mission Control, a new software stack for managing orchestration, validation, and deployment across GPU-powered systems—aiming to speed up time-to-market and improve workload optimization in enterprise environments.

  • Competitive Landscape:

    Major rivals Intel, AMD, and Broadcom are absent from the NVLink Fusion ecosystem. These companies back the competing UALink Consortium, whose open-standard interconnect targets up to 1,024 accelerators per pod at 200 GT/s per lane. NVLink remains proprietary, but Fusion opens it to third-party silicon for the first time.

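A quick sanity check on the bandwidth claim in the first bullet above: the sketch below is a minimal back-of-envelope calculation in Python, assuming Nvidia's published figures of 1.8 TB/s aggregate bidirectional bandwidth per GPU for fifth-generation NVLink (Blackwell) and roughly 128 GB/s aggregate bidirectional for a PCIe Gen 5 x16 slot. These numbers are assumptions drawn from public specs, not from the article itself.

    # Back-of-envelope check of the "up to 14x the bandwidth of PCIe" claim.
    # Assumed figures (public specs, not from the article):
    #   NVLink 5 (Blackwell): 1.8 TB/s aggregate bidirectional per GPU
    #   PCIe Gen 5 x16:       ~128 GB/s aggregate bidirectional per slot
    nvlink5_gb_per_s = 1800
    pcie5_x16_gb_per_s = 128

    ratio = nvlink5_gb_per_s / pcie5_x16_gb_per_s
    print(f"NVLink 5 vs PCIe Gen 5 x16: {ratio:.1f}x")  # -> 14.1x

Under these assumptions the ratio comes out to roughly 14x, consistent with the figure Nvidia cites.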

Why This Matters:

Nvidia is redefining AI infrastructure by blending its historically closed ecosystem with broader hardware collaboration. NVLink Fusion unlocks the high-bandwidth advantages of NVLink for third-party CPUs and ASICs, boosting interoperability while retaining Nvidia’s performance edge. The launches of DGX Cloud Lepton and Mission Control deepen Nvidia’s vertical integration, from chips to cloud infrastructure and software. With major players now building on Nvidia’s ecosystem, and with strategic moves such as the Taiwan supercomputer, the company is reinforcing its dominance while navigating growing industry pressure for open standards.

