Quick Summary
- Over 850 public figures signed a statement demanding an AI superintelligence ban.
- Tech leaders like Steve Wozniak and Richard Branson, along with top AI researchers, voiced concern.
- The petition cites risks including civil liberty loss, economic disruption, and existential threats.
- It calls for a pause until there is proven safety and public support.
- A growing number of people support regulating superintelligent AI before it advances further.
What Is AI Superintelligence?
AI superintelligence refers to a potential future form of artificial intelligence that outperforms humans in every cognitive task, from problem-solving and reasoning to decision-making and innovation.
Unlike today’s AI tools (which are narrow or specialized), superintelligent systems could possess:
- The ability to self-improve rapidly
- Strategic decision-making skills
- Broad general intelligence beyond any single domain
It’s not science fiction anymore. As companies like OpenAI, Meta, and xAI build larger and more capable models, superintelligence is moving from theory to plausible possibility.
Why Are Experts Calling for an AI Superintelligence Ban?
In October 2025, over 850 respected figures across tech, academia, and politics signed a joint petition titled the “Statement on Superintelligence.” The message was simple:
Stop developing AI superintelligence until we can ensure it’s safe and controllable, and until the public consents to its creation.
The petition argues that racing to build superhuman AI without proper oversight or safety mechanisms could lead to consequences far beyond what we’re prepared to handle.
Who Signed the Petition?
The list of signatories includes a who’s who of technology pioneers and global influencers:
- Steve Wozniak, co-founder of Apple
- Richard Branson, founder of Virgin Group
- Geoffrey Hinton and Yoshua Bengio, pioneers of modern AI research
- Stuart Russell, AI safety expert and UC Berkeley professor
- Susan Rice, former U.S. National Security Advisor
- Mike Mullen, retired Chairman of the Joint Chiefs of Staff
- Prince Harry and Meghan Markle, Duke and Duchess of Sussex
- Steve Bannon and Glenn Beck, high-profile political figures
What’s powerful about this group is not just their reputation; it’s their diversity. From left to right, from academia to royalty, this petition crossed typical ideological lines.
The Public’s View on AI Superintelligence
According to a survey of 2,000 U.S. adults:
- Only 5% support the current pace of unregulated AI development
- Most respondents want government regulation of advanced AI
- A majority believe superintelligent AI should not be built until it is proven safe and controllable
These findings highlight a growing concern among everyday citizens, not just experts. People want progress, yes, but they also want accountability, transparency, and safety.
The Risks of Superintelligent AI
The petition outlines a range of possible threats associated with unregulated AI superintelligence:
1. Loss of Human Control
Superintelligent systems might begin making decisions beyond our comprehension or influence.
2. Civil Liberties and Autonomy
AI could be used for mass surveillance, manipulation, or undermining democratic processes.
3. Economic Obsolescence
Advanced AI could replace humans in decision-making and high-skill jobs, driving large-scale job losses and widening social inequality.
4. Security Threats
Autonomous weapons or AI-driven cyberattacks could pose national and global security risks.
5. Existential Danger
Some experts, including OpenAI’s own CEO Sam Altman, have warned that AI might pose a threat to humanity itself if misaligned with human values.
As Altman wrote in 2015:
“Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.”
Even Elon Musk said earlier this year that there’s roughly a 20% chance the AI transition goes badly for humanity. Whether that’s pessimism or realism, it adds urgency to the conversation.
AI Boomers vs AI Doomers
In the tech world, there’s a visible divide emerging:
AI Boomers
Believe AI will solve everything, from healthcare to climate change. They argue that delays or regulations will slow innovation and harm global competitiveness.
AI Doomers
Believe that building superintelligence too quickly, without rules or public oversight, is a dangerous game. They see the need for firm limits and strong safety protocols.
Both sides include brilliant minds. Even those building today’s most advanced models, such as Musk, Altman, and Demis Hassabis, have publicly acknowledged the risks. The tension is not about whether AI is powerful, but whether we’re mature enough to wield it responsibly.
What This Means for the Future
The AI superintelligence ban petition doesn’t call for an end to all AI development. It doesn’t reject innovation or progress. Instead, it calls for:
- A temporary halt on superintelligence efforts
- Public involvement in shaping AI’s future
- Scientific consensus before taking further steps
This moment is not just about technology; it’s about governance, ethics, and who gets to shape the future of intelligence itself.
Whether you’re a founder, policymaker, or simply someone trying to keep up, the petition serves as a wake-up call: the time to ask hard questions is now, not after the genie is out of the bottle.
Final Thoughts
The call for an AI superintelligence ban in 2025 is not fear-mongering. It’s a reasoned plea from those who understand both the potential and the peril of AI.
It’s not about stopping AI altogether. It’s about pressing pause, just long enough to make sure we’re steering the ship before it sails into uncharted waters. The future of superintelligent AI is still being written. The real question is: Who gets to hold the pen?
Discover how AI is reshaping technology, business, and healthcare—without the hype.
Visit InfluenceOfAI.com for easy-to-understand insights, expert analysis, and real-world applications of artificial intelligence. From the latest tools to emerging trends, we help you navigate the AI landscape with clarity and confidence.