Harry and Meghan Align With AI Pioneers in Demanding Prohibition on Advanced AI

The Duke and Duchess of Sussex have joined forces with artificial intelligence pioneers and Nobel Prize winners to push for a complete ban on developing superintelligent AI systems.

Harry and Meghan are among the signatories of a powerful statement that demands “a ban on the creation of artificial superintelligence”. Superintelligent AI refers to artificial intelligence that would surpass human cognitive abilities in every intellectual domain; no such system has yet been developed.

Primary Requirements in the Statement

The statement insists that the prohibition should remain in place until there is “widespread expert agreement” that ASI can be developed “with proper safeguards” and until “substantial public support” has been secured.

Prominent signatories include a Nobel laureate and leading AI researcher, along with his fellow “godfather” of modern AI, another AI expert; tech entrepreneur Steve Wozniak; a UK entrepreneur and Virgin founder; Susan Rice; former head of state Mary Robinson; and a UK writer and public intellectual. Other Nobel laureates who endorsed the statement include a peace advocate, the physicist Frank Wilczek, an astrophysicist and an economics expert.

Behind the Movement

The declaration, aimed at national leaders, tech firms and policymakers, was coordinated by the Future of Life Institute (FLI), an American AI ethics organization that previously called for a pause in the development of powerful AI systems, shortly after the launch of conversational AI made artificial intelligence a topic of worldwide public debate.

Industry Perspectives

In recent months, Mark Zuckerberg, the leader of Facebook parent Meta, one of the leading tech companies in the United States, stated that advancement toward superintelligent AI was “approaching reality”. However, some analysts have suggested that talk of superintelligence reflects competitive positioning among tech companies that have recently poured hundreds of billions into AI, rather than the sector being close to any technical breakthrough.

Potential Risks

Nonetheless, FLI warns that the possibility of artificial superintelligence being achieved “within the next ten years” carries numerous threats, ranging from the elimination of human jobs and the erosion of personal freedoms to national security risks and even human extinction. Existential fears about AI center on the potential for a system to evade human control and safety guardrails and to act against human welfare.

Public Opinion

FLI released a US national poll showing that approximately three-quarters of Americans want robust regulation of sophisticated artificial intelligence, with 60% believing that artificial superintelligence should not be created until it is proven safe or controllable. The survey of 2,000 US adults found that only 5% backed the status quo of rapid, uncontrolled development.

Industry Objectives

The top artificial intelligence firms in the US, including the ChatGPT developer OpenAI and the search giant, have made the development of artificial general intelligence – the theoretical state in which AI matches human intelligence at most cognitive tasks – an explicit goal of their research. While AGI is a step below ASI, some specialists warn that it too could carry an existential risk by, for instance, improving itself to superintelligent levels, while also posing an implicit threat to today's workforce.

Victoria Brooks

A passionate traveler and writer sharing UK explorations and practical advice for memorable journeys.