The Duke and Duchess of Sussex Join AI Pioneers in Calling for a Ban on Superintelligent AI

The Duke and Duchess of Sussex have teamed up with artificial intelligence pioneers and Nobel laureates to advocate for a complete ban on creating artificial superintelligence.

Harry and Meghan are among the signatories of an influential declaration calling for “a prohibition on the creation of artificial superintelligence”. Artificial superintelligence (ASI) refers to AI systems that would exceed human intelligence in every intellectual domain; no such system has yet been developed.

Key Demands in the Statement

The statement insists that the prohibition should remain in place until there is “broad scientific consensus” that superintelligence can be built “safely and controllably”, and until “substantial public support” has been secured.

Prominent signatories include a Nobel Prize-winning AI researcher and his fellow “godfather” of modern artificial intelligence; an Apple co-founder; the British business magnate Richard Branson; a former US national security adviser; the former Irish president Mary Robinson; and a British author and public intellectual. Other Nobel laureates among the signatories include Beatrice Fihn, the physicist Frank Wilczek, an astrophysicist and an economist.

Behind the Movement

The statement, aimed at governments, technology companies and policymakers, was organized by the Future of Life Institute (FLI), a US-based AI safety organization that previously called for a pause in the development of powerful AI systems, shortly after the emergence of ChatGPT made artificial intelligence a topic of worldwide public debate.

Tech Sector Views

In recent months, the chief executive of Facebook parent Meta, one of the major AI developers in the US, claimed that progress toward superintelligent AI was “approaching reality”. Some experts, however, have suggested that talk of superintelligence reflects market competition among technology firms spending hundreds of billions of dollars on AI this year alone, rather than the sector being close to any genuine scientific breakthrough.

Potential Risks

FLI, by contrast, warns that the prospect of artificial superintelligence arriving “in the coming decade” carries numerous risks, ranging from the displacement of human workers and the loss of civil liberties to threats to national security and even human extinction. Deep concerns about AI center on the possibility that a system could escape human oversight and safety constraints and take actions against human welfare.

Public Opinion

FLI published a survey of Americans showing that roughly three-quarters of respondents want strong oversight of sophisticated artificial intelligence, with six in 10 believing that artificial superintelligence should not be developed until it is proven safe and controllable. The poll found that only a small fraction supported the status quo of fast, unregulated development.

Industry Objectives

The leading AI companies in the United States, including the ChatGPT developer OpenAI and Google, have made the development of artificial general intelligence – the theoretical point at which artificial intelligence matches human capability at most cognitive tasks – a stated objective of their work. While this is a step short of superintelligence, some specialists caution that it too could pose an existential risk, for instance by improving itself until it reaches superintelligent levels, while also posing an underlying danger to the contemporary workforce.

Jonathan Martin

An avid hiker and gear reviewer with a passion for sustainable outdoor living and sharing practical advice for adventurers.