The Duke and Duchess of Sussex Align With Tech Visionaries in Calling for Prohibition on Superintelligent Systems

Prince Harry and Meghan Markle have teamed up with AI experts and Nobel laureates to advocate for a total prohibition on creating artificial superintelligence. The royal couple are among the signatories of an influential declaration that calls for "a prohibition on the creation of superintelligence". Artificial superintelligence (ASI) refers to artificial intelligence that could exceed human intelligence in all cognitive tasks; no such system has yet been developed.

Key Demands in the Declaration

The declaration states that the ban should remain in place until there is "widespread expert agreement" that superintelligence can be built "with proper safeguards" and until "substantial public support" has been secured. Prominent signatories include a Nobel laureate and leading AI researcher, along with his colleague and fellow pioneer of contemporary artificial intelligence, Yoshua Bengio; the Apple co-founder Steve Wozniak; the UK entrepreneur who founded Virgin; Susan Rice; a former Irish president; and a UK writer and public intellectual. Other Nobel laureates who signed include Beatrice Fihn, the physicist Frank Wilczek, an astrophysicist, and an economist.

Behind the Movement

The statement, aimed at governments, tech firms and lawmakers, was organized by the Future of Life Institute (FLI), a US-based AI safety group that previously called for a pause in the development of powerful AI systems, shortly after the launch of conversational AI made the technology a global political talking point.

Industry Perspectives

In July, Mark Zuckerberg, chief executive of Facebook's parent company Meta, one of the leading US tech companies, said that progress toward superintelligent AI was "approaching reality".
Nevertheless, some experts argue that talk of ASI reflects competitive positioning among technology firms spending hundreds of billions of dollars on artificial intelligence this year alone, rather than the sector being close to any scientific breakthrough.

Potential Risks

FLI, however, warns that the prospect of ASI being achieved "within the next ten years" presents threats ranging from the elimination of human jobs and the erosion of personal freedoms to national security risks and even existential risk to humanity. The deepest concerns about artificial intelligence focus on the possibility of an AI system evading human control and safety guidelines and setting in motion events contrary to human interests.

Citizen Sentiment

FLI published a US national poll showing that about 75% of Americans want strong oversight of advanced AI, with six in 10 saying artificial superintelligence should not be created until it is proven safe or controllable. The poll noted that only a small fraction of American respondents supported the current situation of fast, unregulated development.

Industry Objectives

The leading AI companies in the United States, including the conversational AI creator OpenAI and the search giant, have made the development of artificial general intelligence – the theoretical point at which artificial intelligence matches human cognitive capability across many intellectual tasks – a stated objective of their research. While this falls slightly short of ASI, some experts caution that it too could pose an existential risk, for instance by improving itself until it reaches superintelligent levels, while also posing an underlying threat to the contemporary workforce.