UK to Host 1st Global Summit on AI

Ultra-realistic AI robot Ai-Da poses in front of a painting she made during the press preview of the London Design Biennale 2023 at Somerset House, central London, on June 1, 2023. (Ben Stansall/AFP via Getty Images)
Evgenia Filimianova
6/8/2023
Updated:
6/8/2023

The UK will host the first major global summit on AI safety this autumn, Prime Minister Rishi Sunak has announced, vowing to take a “coordinated approach” on emerging tech with U.S. President Joe Biden.

The announcement came ahead of the meeting between the two leaders in Washington on Wednesday.

The UK will be at the forefront of harnessing the benefits of AI safely and securely, said the prime minister.

“Time and time again throughout history we have invented paradigm-shifting new technologies and we have harnessed them for the good of humanity. That is what we must do again. No one country can do this alone. This is going to take a global effort. But with our vast expertise and commitment to an open, democratic international system, the UK will stand together with our allies to lead the way,” Sunak said.

The summit will be attended by key countries, leading tech companies and researchers, said the government. The attendees will consider risks posed by AI and discuss how they can be mitigated through a shared approach globally.

Whitehall said that the development of an international regulatory framework for AI is one of the key discussion points Sunak will press in Washington.

It comes after one of the prime minister’s tech advisers warned that AI could pose risks to humanity unless it is properly controlled and regulated. Matt Clifford, chair of the Foundation Model Taskforce, said that the focus should be on controlling AI models that can pose “very dangerous threats to humans.”

Senior AI experts, including those at Google DeepMind and Anthropic, also issued a warning earlier this month, cautioning about “the risk of extinction from AI.”

Demis Hassabis, co-founder of Google's artificial intelligence startup DeepMind, speaks during a press conference after the final match of the Google DeepMind Challenge Match against Google's artificial intelligence program, AlphaGo, in Seoul, South Korea, on March 15, 2016. (Jeon Heon-Kyun-Pool/Getty Images)

Google DeepMind and Anthropic bosses welcomed the government’s decision to hold an international AI summit.

“The Global Summit on AI Safety will play a critical role in bringing together government, industry, academia and civil society, and we’re looking forward to working closely with the UK Government to help make these efforts a success,” CEO and co-founder of Google DeepMind Demis Hassabis said in a statement.

Anthropic CEO Dario Amodei said there was much work to be done in order to make AI safe and commended Sunak on “bringing the world together to find answers.”

The government deems the UK, which ranks third in AI research and development behind the United States and China, to be “well-placed” to lead discussions on the future of the technology.

“The UK and US are two of the only three countries in the world to have a tech industry valued at more than $1 trillion. This is thanks, in part, to the strength of our universities and research institutions – between us, our countries are home to 7 of the world’s top 10 research universities,” said the announcement.

‘Too Ambitious’

The UK’s ambition to lead the AI conversation may be overambitious, suggested Yasmin Afina, research fellow at Chatham House’s Digital Society Initiative. Speaking about the differences in AI regulation between the UK, the United States, the EU, and the rest of the world, Afina said that Britain could find itself in a difficult position.
“Instead of trying to play a role that would be too ambitious for the UK and risks alienating it, the UK should perhaps focus on promoting responsible behaviour in the research, development, and deployment of these technologies,” she told the BBC.

In contrast to the EU, whose approach to tech regulation is more protective, the UK and the United States take a more sector-based, self-regulatory approach to AI.

Proposed in April 2021, the EU AI Act classifies AI applications into three risk categories. Systems that create unacceptable risk, such as government-run social scoring, are banned, while high-risk applications are legally regulated and the rest are left unregulated.

The EU’s governance plan rests on collaboration established between the European Commission and member states, while the UK’s approach relies on collaboration between government, regulators, and business.
“Initially, we do not intend to introduce new legislation. By rushing to legislate too early, we would risk placing undue burdens on businesses,” the government’s policy white paper on AI regulation said.
The United States released the Blueprint for an AI Bill of Rights in October last year, providing a framework for how the government, tech companies, and the public can collaborate to make AI applications safe. Its principles are non-regulatory and non-binding, as the blueprint has not been enacted into law.
Evgenia Filimianova is a UK-based journalist covering a wide range of national stories, with a particular interest in UK politics, parliamentary proceedings and socioeconomic issues.