UK Market Regulator Launches Review of AI Tech
An AI robot titled "Alter 3: Offloaded Agency" is pictured during a photocall to promote the exhibition "AI: More than Human" at the Barbican Centre in London on May 15, 2019. (Ben Stansall/AFP via Getty Images)
Evgenia Filimianova
5/4/2023

The Competition and Markets Authority (CMA) will investigate what the development of artificial intelligence (AI) means for UK consumers and market competition.

Requested by the government, the review of generative AI, technology that produces seemingly new and realistic text, images, or audio, was announced on May 4.

Westminster seeks to ensure that innovations in AI serve the UK economy, consumers, and businesses while maintaining appropriate transparency and security.

The initial review will examine how the competitive markets for foundation models and their use could evolve.

Foundation models are machine learning models trained on vast volumes of data to handle a variety of tasks, from translating texts to analysing medical images to creating original music scores and film scripts.

The CMA is to suggest how to support healthy competition and protect consumers across the UK as AI foundation models, such as the one underpinning the popular ChatGPT chatbot, continue to develop.

“AI has burst into the public consciousness over the past few months but has been on our radar for some time. It’s a technology developing at speed and has the potential to transform the way businesses compete as well as drive substantial economic growth,” CMA’s chief executive Sarah Cardell said.

She added that while it would be beneficial for UK businesses to utilise AI technology, concerns remain over consumer safety on issues such as false or misleading information.

“Our goal is to help this new, rapidly scaling technology develop in ways that ensure open, competitive markets and effective consumer protection,” Cardell said.

Difference of Approach

In March 2023, the UK government published its white paper on AI, setting out a pro-innovation framework for AI use by existing regulators in the sectors where AI is applied.
This approach differs from that of the European Union and its proposed EU AI Act, legislation that would require developers working on high-risk applications to document, test, and take other safety measures.
In April, European lawmakers, working on the new legislation, called on world leaders, including U.S. President Joe Biden and European Commission President Ursula von der Leyen, to hold a summit to discuss AI regulation and development control.
This came just a few weeks after an open letter, published by the Future of Life Institute (FLI), called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

The letter was signed by SpaceX and Twitter CEO Elon Musk, Apple co-founder Steve Wozniak, and thousands of other signatories.

“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders,” the letter said.

Meanwhile, discussion of AI risks and development is also rife within the walls of Westminster. On May 3, British MPs addressed the dangers of AI and the need for regulation during the House of Commons questions session with Science Secretary Chloe Smith.

Smith said that the government recognises the risks posed by many technologies when they fall into the wrong hands.

“The UK is a global leader in AI, with the strategic advantage that places us at the forefront of these developments. Now, through UK leadership, including at the OECD and the G-7, the Council of Europe, and more, we are promoting our vision for a global ecosystem that balances innovation and the use of AI underpinned by our shared values, of course, of freedom, fairness, and democracy. Our approach will be proportionate, pro-innovative, and adaptable,” she added.

Britain’s AI strategy continues to be shaped by existing regulatory bodies, including the Office for Artificial Intelligence and the members of the Digital Regulation Cooperation Forum (DRCF).

The CMA will work with these agencies and other stakeholders and plans to publish its findings in September 2023.

Evgenia Filimianova is a UK-based journalist covering a wide range of national stories, with a particular interest in UK politics, parliamentary proceedings and socioeconomic issues.