Harry and Meghan Align With Tech Visionaries in Calling for Ban on Superintelligent Systems
The Duke and Duchess of Sussex have teamed up with AI experts and Nobel Prize winners to advocate for a total prohibition on developing superintelligent AI systems.
Harry and Meghan are among the signatories of a powerful statement that demands “a ban on the creation of superintelligence”. Artificial superintelligence (ASI) refers to artificial intelligence that would surpass human abilities across all cognitive tasks, though the technology remains theoretical.
Key Demands in the Declaration
The declaration states that the prohibition should stay active until there is “widespread expert agreement” on developing ASI “with proper safeguards” and once “strong public buy-in” has been achieved.
Notable signatories include Nobel Prize recipient Geoffrey Hinton and his fellow “godfather” of modern AI, Yoshua Bengio, along with a Silicon Valley tech entrepreneur, the UK founder of Virgin, a former US national security adviser, a former Irish president, and a British author and public intellectual. Other Nobel laureates who endorsed the statement include Beatrice Fihn, Frank Wilczek, John C Mather, and Daron Acemoğlu.
Behind the Movement
The declaration, aimed at national leaders, technology companies and lawmakers, was coordinated by the Future of Life Institute (FLI), a US-based AI safety group that in 2023 called for a pause on the development of powerful AI systems, shortly after the launch of conversational chatbots made artificial intelligence a worldwide public talking point.
Tech Sector Views
In July, the chief executive of Meta, one of the major AI developers in the United States, claimed that the development of superintelligence was “approaching reality”. Some experts, however, have suggested that talk of superintelligence reflects market competition among tech companies that have spent hundreds of billions of dollars on AI in recent years, rather than any sign that the industry is close to a scientific breakthrough.
Possible Dangers
The institute, however, states that the possibility of ASI being achieved “in the coming decade” carries numerous risks, ranging from the displacement of human workers and the loss of civil liberties to national security threats and even the potential extinction of humanity. Existential fears about artificial intelligence center on the possibility of a system escaping human oversight and safety measures and taking actions contrary to human interests.
Citizen Sentiment
The institute published a US national poll showing that about 75% of US citizens want robust regulation of advanced AI, with six out of 10 saying that superhuman AI should not be developed until it is proven to be safe or controllable. According to the survey of 2,000 US adults, only a small fraction supported the status quo of rapid, unregulated development.
Industry Objectives
The top artificial intelligence firms in the United States, including the ChatGPT developer OpenAI and Google, have made the creation of human-level AI – the theoretical state in which AI matches human intelligence across many intellectual tasks – a stated objective of their research. While this is a step short of ASI, some specialists warn that it too could carry existential risk, for example by enhancing its own capabilities until it reaches superintelligence, while also posing an implicit threat to the contemporary workforce.