Key Takeaway:
More than 850 global figures, including Nobel laureates, royals, military leaders, tech pioneers, and public personalities, have signed a statement urging a ban on superintelligence development until its safety can be guaranteed.
Superintelligence Ban Proposal – Key Points
Broad Coalition of Signatories
Over 850 signatories backed the statement on superintelligence-statement.org, coordinated by the Future of Life Institute (FLI). Signatories include:
- Geoffrey Hinton (Nobel Prize–winning computer scientist), Yoshua Bengio, Apple co-founder Steve Wozniak, and Virgin Group founder Richard Branson.
- Prince Harry and Meghan Markle, alongside cultural and media figures such as Stephen Fry, Will.i.am, and Glenn Beck.
- Former political and military leaders, including Steve Bannon, retired Chairman of the U.S. Joint Chiefs of Staff Mike Mullen, and former U.S. National Security Adviser Susan Rice, as well as Nobel Prize–winning physicist John Mather, Rev. Paolo Benanti (Vatican AI adviser), and several China-based AI researchers.
Multiple outlets reported “800+” signers at the initial release, a figure that rose to “850+” by Oct 22, 2025.
Core Demand
The statement calls for a superintelligence ban: a prohibition on developing systems that surpass human cognition until:
- There is broad scientific consensus that such systems can be pursued safely and controllably.
- There is strong public buy-in and democratic oversight.
The letter underscores that AI’s trajectory is being set largely by private companies without broad public consent; organizers aim to broaden participation in decision-making.
Risks Highlighted
The signatories warn that unchecked pursuit of superintelligence raises risks of:
- Human economic obsolescence and disempowerment.
- Loss of freedom, civil liberties, dignity, and control.
- National security threats.
- Potential human extinction.
FLI’s executive director Anthony Aguirre (UC Santa Cruz) emphasized that AI developments are moving faster than the public can comprehend, arguing the superintelligence ban is necessary to open a society-wide conversation.
Historical Warning: Alan Turing
The letter cites Alan Turing’s 1950s prediction that once machines surpass human intelligence, we should expect them to take control.
Industry Context
Despite warnings, AI investment continues to accelerate:
- Meta CEO Mark Zuckerberg said superintelligence is “now in sight.”
- Elon Musk claimed digital superintelligence “is happening in real time,” even as Tesla develops humanoid robots.
- OpenAI CEO Sam Altman expects superintelligence to emerge by 2030 at the latest and announced in January 2025 that OpenAI was shifting its focus toward it.
- Meta renamed its LLM division Meta Superintelligence Labs.
Yet the executives making these predictions did not sign the superintelligence ban statement.
Absent High-Profile Leaders
Although Altman (OpenAI) and Mustafa Suleyman (Microsoft AI) have both acknowledged existential dangers, neither they nor their companies signed the statement.
AGI vs. Superintelligence Debate
- **Artificial General Intelligence (AGI):** Loosely defined as AI that matches human-level reasoning and task performance. Altman frames AGI as a step that could elevate humanity, distinct from a machine takeover.
- **Superintelligence:** Goes further, surpassing expert-level human cognition, which critics argue could outstrip human control entirely.
Parallel Calls for AI Regulation
- In September 2025, 200+ researchers and officials (including 10 Nobel laureates) issued a “red lines” letter warning about already-visible harms (mass unemployment, climate impacts, rights abuses).
- Economists warn of an AI investment bubble with systemic implications.
Organizer Independence and Funding Transparency
The Future of Life Institute highlighted its independence, noting early support from Elon Musk (2015) but stressing it does not accept funds from major AI firms. Its largest recent donor is Vitalik Buterin, co-founder of Ethereum.
Political and Public-Opinion Context
Aguirre called for engagement with governments in the U.S., China, and elsewhere, suggesting a future international treaty for advanced AI. An NBC News/SurveyMonkey poll (2025) found Americans split: 44% believe AI will improve their lives, while 42% believe it will worsen them.
Escalating Tensions with OpenAI
FLI disclosed that OpenAI issued subpoenas against the institute and its president in October 2025, a move OpenAI said was related to concerns about nonprofit funding. The dispute underscores the conflict between advocacy groups backing a superintelligence ban and leading AI firms pushing development forward.
Why This Matters:
The superintelligence ban proposal is one of the strongest coordinated calls yet to slow AI’s most advanced trajectory. With CEOs predicting superintelligence within five years, governments are under mounting pressure to implement legal safeguards and enforce democratic oversight before AI surpasses human control. The statement elevates the debate beyond technical circles into global politics, finance, and culture.
This article was drafted with the assistance of generative AI. All facts and details were reviewed and confirmed by an editor prior to publication.