Key Takeaway
OpenAI and NVIDIA have signed a landmark agreement to build at least 10 gigawatts of AI data centers using NVIDIA hardware, backed by up to $100 billion of NVIDIA investment. The move cements both companies at the center of the global AI race while raising unprecedented challenges in cost, energy, and regulation.
The $100B OpenAI – NVIDIA deal blends equity, hardware purchases, and potential GPU leasing into a circular financing structure, signaling a new model for funding AI at planetary scale.
OpenAI – NVIDIA Deal – Key Points
Scale of Deployment
The OpenAI – NVIDIA deal will deploy at least 10 GW of NVIDIA systems, equivalent to roughly 4–5 million GPUs, for OpenAI’s next-generation AI infrastructure. The first 1 GW deployment is scheduled for the second half of 2026 on NVIDIA’s Vera Rubin platform. For comparison, 10 GW is roughly the output of 10 nuclear reactors and enough to power more than 8 million U.S. households; it dwarfs today’s largest data centers (typically 50–100 MW) and implies electricity demand on the scale of multiple major cities.
NVIDIA’s $100 Billion Investment
NVIDIA plans to invest up to $100 billion in OpenAI, structured via GPU/system sales and non-controlling, non-voting equity stakes. The first $10 billion begins once a definitive GPU purchase agreement is signed. Commentators describe the OpenAI – NVIDIA deal as a “circular” structure—NVIDIA invests in OpenAI, which spends heavily on NVIDIA systems—potentially advantageous for NVIDIA’s economics.
Financing Model Innovation
The $100B OpenAI – NVIDIA deal introduces a financing architecture unusual even in technology megaprojects. Structured as two intertwined flows (cash from OpenAI to buy GPUs and non-voting equity from NVIDIA into OpenAI), it keeps capital circulating within the partnership. Reports of GPU leasing add another layer, turning hardware into a recurring-revenue service and easing OpenAI’s upfront burden. This combination blends vendor financing, equity exposure, and hardware-as-a-service into a circular model that could become a template for future AI factory deployments worldwide.
True Cost Scale
Jensen Huang previously estimated $50–60B to build 1 GW of data center capacity, with about $35B for NVIDIA chips/systems. Extrapolated, the 10 GW vision could imply $500B+ total capex when including facilities and power, far beyond the headline investment.
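The extrapolation above is simple arithmetic and can be reproduced directly. This sketch uses only the figures quoted in this section (Huang’s reported $50–60B-per-GW estimate and the ~$35B NVIDIA hardware share); these are the article’s reported estimates, not official numbers.

```python
# Back-of-the-envelope extrapolation of the quoted per-GW cost estimates
# to the full 10 GW deployment. Inputs are reported estimates only.

GW_TARGET = 10                    # planned deployment, gigawatts
CAPEX_PER_GW_LOW = 50e9           # Huang's low-end estimate: $50B per GW
CAPEX_PER_GW_HIGH = 60e9          # high-end estimate: $60B per GW
NVIDIA_SHARE_PER_GW = 35e9        # ~$35B of each GW for NVIDIA chips/systems

total_low = GW_TARGET * CAPEX_PER_GW_LOW       # $500B
total_high = GW_TARGET * CAPEX_PER_GW_HIGH     # $600B
nvidia_total = GW_TARGET * NVIDIA_SHARE_PER_GW # $350B

print(f"Total capex: ${total_low / 1e9:.0f}B-${total_high / 1e9:.0f}B")
print(f"Implied NVIDIA hardware spend: ${nvidia_total / 1e9:.0f}B")
```

The $500B+ figure in the text corresponds to the low end of this range; facilities, power, and land push the true total toward or beyond the high end.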
Financial and Market Impact
Post-announcement (22–23 Sep 2025), NVIDIA rose 3.9% (~+$170B market cap). The news sparked a global chip rally: TSMC +3.5%, SK Hynix +2.5%, Samsung +1.4%, Tokyo Electron higher; in Europe, STMicro, Infineon, and BE Semiconductor opened higher, while ASM International warned on Q4, weighing on ASML. NVIDIA shares rose as much as 4.4% intraday; Oracle gained ~6% on related AI-infrastructure optimism (Stargate). Analysts (Quilter Cheviot, ODDO BHF) framed AI as a broad, non-zero-sum trade benefiting equipment suppliers via TSMC demand pull.
Statements from Leadership
- Jensen Huang (NVIDIA CEO): “This is a giant project… deploying 10 gigawatts to power the next era of intelligence.”
- Sam Altman (OpenAI CEO): “Compute infrastructure will be the basis for the economy of the future.”
- Greg Brockman (OpenAI President): “We’re excited to deploy 10 gigawatts of compute with NVIDIA to push back the frontier of intelligence.”
Strategic Context and Collaborators
OpenAI designates NVIDIA as its preferred compute and networking partner, aligning model/infrastructure software with NVIDIA hardware/software. The alliance complements Microsoft, Oracle, SoftBank, and Stargate partners building “AI factories.” Industry peers are pursuing nuclear-adjacent power (e.g., Microsoft 835 MW Three Mile Island, AWS up to 960 MW Susquehanna) to meet escalating AI loads.
Energy and Environmental Challenges
Data centers already consumed ~1.5% of global electricity in 2024, and the IEA projects data-center consumption of up to 945 TWh by 2030. Grid interconnect queues and power-constrained markets are bottlenecks. Comparables include Wyoming’s planned 10 GW AI campus, which could consume more electricity than all of the state’s households. Altman’s earlier mega-campus vision (five to seven 5-GW sites) would rival the electricity use of New York State, underscoring the environmental and policy stakes.
New Business Model: Chip Leasing
According to The Information, NVIDIA has discussed GPU leasing for OpenAI, which would create recurring revenue, reduce OpenAI’s upfront outlays, and deepen dependency on NVIDIA’s stack, alongside the equity-plus-systems structure.
Regulatory and Antitrust Concerns
The combined heft of Microsoft–OpenAI–NVIDIA had already prompted DOJ/FTC pathways for scrutiny (June 2024). The non-voting equity, circular cash flows, potential leasing, and hyperscale consolidation will likely intensify antitrust and competition reviews in the U.S. and abroad.
Competitive Landscape
Days earlier, NVIDIA invested $5B for ~4% of Intel, with plans for custom x86 CPUs for NVIDIA AI platforms and possible x86 APUs (Intel CPUs + NVIDIA GPU chiplets). NVLink integration is viewed as central to extending NVIDIA systems across NVIDIA CPUs and Intel Xeons. Market reaction reportedly added ~$150B to NVIDIA’s value on the Intel news.
Governance and Capital Structure
OpenAI was most recently valued at $500B and is progressing through a conversion to a for-profit entity. Microsoft retains economic exposure via a 49% profit share linked to its $13B (2023) support, while NVIDIA previously invested $6.6B (2024). Under the new LOI, NVIDIA’s non-voting shares preserve OpenAI’s control structure while aligning incentives for long-term compute buildout.
User Adoption
OpenAI reports ~700 million weekly active users across enterprises, SMBs, and developers. The 10-GW compute expansion is intended to support this demand and advance the mission to develop AGI “that benefits all of humanity.”
Why This Matters
This is among the largest technology infrastructure commitments ever attempted. By linking investment directly to hardware procurement, the OpenAI – NVIDIA deal illustrates how future AI projects may be financed, spreading capex, locking in vendor control, and accelerating deployment at unprecedented speed. It reframes AI infrastructure as not only a technical race but also a financial innovation, shaping how capital flows into the next generation of “AI factories.”
This article was drafted with the assistance of generative AI. All facts and details were reviewed and confirmed by an editor prior to publication.