AI: Security Minister Tom Tugendhat Says ‘Genie Won’t Go Back in the Bottle’

Aidan Meller looks at a painting by Ai-Da Robot, an ultra-realistic humanoid robot artist, during a press call at the British Library in London, on April 4, 2022. (Hollie Adams/Getty Images)
Chris Summers
4/20/2023 | Updated: 4/20/2023

Security Minister Tom Tugendhat has said it is too late to suspend or halt the development of artificial intelligence (AI) because of fears about how it will be used.

Italy last month said it would temporarily block the AI chatbot ChatGPT over concerns about unlawful data collection and problems with its age-verification system.

But Tugendhat, speaking at the CyberUK conference in Belfast, said: “Given the stakes, we can all understand the calls to stop AI development altogether. But the genie won’t go back in the bottle any more than we can write laws against maths.”

Britain's Minister of State for Security Tom Tugendhat arrives to attend the first Cabinet meeting under the new Prime Minister, Rishi Sunak in 10 Downing Street, London, on Oct. 26, 2022. (Niklas Halle'n/AFP via Getty Images)

The AI systems that power customer service chatbots, known as large language models, have ingested millions of digital books, letters, and messages, enabling them to mimic human writing styles.

Tugendhat said criminals and hackers are aware of how to exploit AI, adding: “Cyber attacks work when they find vulnerabilities. AI will cut the cost and complications of cyber attacks by automating the hunt for the chinks in our armour.”

He said: “Already AI can confuse and copy, spreading lies, and committing fraud. Natural language models can mimic credible news sources, pushing disingenuous narratives at huge scale, and AI image and video generation will get better.”

Tugendhat—who stood unsuccessfully for the leadership of the Conservative Party last year—said Russia and China are both exploring malevolent uses of AI.

He said: “Putin has a longstanding strategic interest in AI and has commented that whoever becomes leader in this sphere will rule the world.”

“China, with its vast datasets and fierce determination, is a strong rival. But AI also threatens authoritarian controls. Other than the United States, the UK is one of only a handful of liberal democratic countries that can credibly lead the world in AI development,” added Tugendhat.

He warned, “We can stay ahead, but it will demand investment and co-operation, and not just by government.”

Tugendhat also said it is essential that by the time AGI (artificial general intelligence) is invented, “we are confident that it can be safely controlled and aligned to our values and interests.”

Stopping AI Akin to ‘King Canute’

“Solving this issue of alignment is where our efforts must lie, not in some King Canute-like attempt to stop the inevitable but in a national mission to ensure that, as super-intelligent computers arrive, they make the world safer and more secure,” he added.

The CyberUK conference was dominated by debates about Chinese and Russian cyber threats.

Earlier this week, Lindy Cameron, head of the National Cyber Security Centre, said more needs to be done to protect Britain and British companies from the threat posed by cyber groups loyal to Moscow.

The Chancellor of the Duchy of Lancaster, Oliver Dowden, also said Britain’s critical infrastructure is vulnerable to attack by a “cyber equivalent of the Wagner Group.”

In January, a committee of MPs heard evidence of the risks of a “dystopian future” in which AI takes over the world and humans become an endangered species, akin to the plot of the film “The Terminator.”

‘It Could Kill Everyone’

At a hearing of Parliament’s Science and Technology Committee, Conservative MP Tracey Crouch asked Michael Cohen, a doctoral candidate in Engineering Science at Oxford University, to “expand on some of the risks you think are posed by AI systems to their end users.”

Cohen replied, “There is a particular risk ... which is that it could kill everyone.”

He explained by using an analogy of training a dog with treats as a reward.

Cohen said: “It will learn to pick actions that lead to getting treats, and we can do similar things with AI. But if the dog finds the treat cupboard, it can get the treats itself without doing what we want it to do.”

He added, “If you imagine going into the woods to train a bear with a bag of treats, by selectively withholding and administering treats depending on whether it’s doing what you want it to do, what they will probably actually do is take the treats by force.”

Cohen warned of a paradigm shift where AI is capable of “taking over the process.”

He went on: “Then, if you have something much smarter than us monomaniacally trying to get this positive feedback however we have encoded it, and it’s taken over the world to secure that, it would direct as much energy as it could towards securing its hold on that and that would leave us without any energy for ourselves.”

PA Media contributed to this report.