UK MP Asks Expert About Risks of AI and Is Told ‘It Could Kill Everyone’

Ai-Da Robot, the world's first ultra-realistic humanoid robot artist, appears at a photo call in a committee room in the House of Lords in London on Oct. 11, 2022. (Rob Pinney/Getty Images)
Chris Summers
1/25/2023
Updated: 1/25/2023

A committee of MPs has been hearing evidence of the risks of a “dystopian future” in which artificial intelligence (AI) takes over the world and humans are wiped out, akin to the plot of the film “The Terminator.”

At a hearing of Parliament’s Science and Technology Committee, Conservative MP Tracey Crouch asked Michael Cohen, a doctoral candidate in Engineering Science at Oxford University, to “expand on some of the risks you think are posed by AI systems to their end users.”

Cohen replied, “There is a particular risk ... which is that it could kill everyone.”

He explained using the analogy of training a dog with treats as a reward.

Cohen said: “It will learn to pick actions that lead to getting treats, and we can do similar things with AI. But if the dog finds the treat cupboard, it can get the treats itself without doing what we want it to do.”

He added, “If you imagine going into the woods to train a bear with a bag of treats, by selectively withholding and administering treats depending on whether it’s doing what you want it to do, what they will probably actually do is take the treats by force.”

Cohen warned of a paradigm shift where AI was capable of “taking over the process.”

He went on: “Then, if you have something much smarter than us, kind of monomaniacally trying to get this positive feedback however we have encoded it, and it’s taken over the world to secure that, it would direct as much energy as it could towards securing its hold on that and that would leave us without any energy for ourselves.”

Crouch then asked if the risk could be mitigated.

Actor Arnold Schwarzenegger (L) at a press conference on the film "Terminator Genisys" at the Ritz Carlton Hotel in Seoul on July 2, 2015. (Chung Sung-Jun/Getty Images for Paramount Pictures International)

Cohen replied: “It can. What I’ve described does not apply to all forms of AI. So for instance, I was talking earlier about the economic benefits of AI imitating humans. If you’re training an AI to imitate a human, it would not take over the world, any more than the human it’s imitating would. And so, that’s a different algorithm that gets encompassed under the very broad term AI.”

He said: “AI can cover prediction and it can cover planning mainly, and for things that are only doing prediction this is not an outcome that I think we should expect. It’s distinctly possible to develop regulation that prevents the sorts of dangerous AI that I’m talking about, while leaving open an enormous set of economically valuable forms of AI.”

‘Bleak Scenario Is Realistic’

Later, Cohen said: “I think the bleak scenario is realistic because AI is attempting to model what makes humans special, that has led to humans completely changing the face of the earth. So if we’re able to capture that in the technology, of course, it’s going to pose just as much risk to us as we have posed to other species, the dodo for example.”

Michael Osborne, a professor of machine learning at Oxford University, told the same hearing that AI offered many economic benefits, in particular that it could do routine jobs at a much lower cost than a human workforce.

Osborne said: “AI doesn’t suffer from some of the problems that afflict human labour. It can work 24/7, doesn’t get distracted by kids in the background ... it can also operate in extreme environments, you can have AI on satellites, as indeed we’re doing.”

But he said, “AI is better thought of as an augmentation of human labour, as a collaborator with humans, and in that respect, it is already having an enormous impact.”

Asked if AI was in danger of taking over from humans in any occupation, Osborne gave the example of fashion models.

He said: “In 2013 we predicted that fashion models were highly automatable. We predicted they had a 90 percent probability of automatability. And we were laughed at, but now of course, there are firms producing digital models with the aid of pure graphics software, that are able to pose in whichever clothes you want, to produce digital images that you can put up on your social media profile for actual fees from fashion brands.”

Cohen said the “economic output of horses” collapsed after the internal combustion engine became widespread in the early part of the 20th century, but he said AI was not yet ready to replace humans in the way cars replaced horse-drawn wagons and stagecoaches.

“Because AI isn’t at the level where it can do what we do,” Cohen said.

Later, Conservative MP Stephen Metcalfe said: “We have obviously heard about the dystopian future that AI will deliver where the machine takes over and we are all consigned to slavery. I’m not necessarily going to buy that that’s going to be the outcome. But obviously, to make sure that doesn’t happen, we need some form of regulation.”

Regulation to Develop AI for ‘General Good’

Metcalfe then asked two other expert witnesses, “So, when thinking about regulation of AI, what is it that the government should consider to make sure that we continue to develop AI for general good and not potentially leave the risks in place that could come and bite us 100 years from now?”

Katherine Holden, head of data analytics, AI, and digital identity at the trade association techUK, replied, “If we’re going to kind of rely on the existing structures we have in place, it’s absolutely integral that the regulators have the sufficient capacity to be able to govern AI effectively, and make sure that they have the ability to identify high risk applications.”

Manish Patel, CEO of the AI firm Jiva, said Britain should not produce legislation that “stifled innovation,” but warned that the government needed to be aware of products exhibiting “human-like intelligence,” adding that when such products got “out in the wild,” “that’s the point where you want to intervene.”

Chris Summers is a UK-based journalist covering a wide range of national stories, with a particular interest in crime, policing and the law.