Senior Army Officer Says AI Weapons Need to be Safe so Armed Forces can ‘Sleep at Night’

Defence Procurement Minister James Cartlidge said AI could also be used to carry out tasks and keep humans “out of harm’s way.”
A Phalanx gun—which contains autonomous weapon capability—is fired by gunners on board HMS Albion during a NATO exercise in the Baltic Sea on June 7, 2019. (Ministry of Defence)
Chris Summers
9/7/2023
Updated:
9/15/2023

A senior army officer has told a parliamentary committee the armed forces have to have confidence in weapons which rely on artificial intelligence (AI) if they are to “sleep at night.”

Lieutenant General Tom Copinger-Symes, the deputy commander of the UK Strategic Command, was giving evidence to the House of Lords’ AI in weapons committee on Thursday and was asked a question by Lord Hamilton, a former Conservative MP, who referred to an incident earlier this year.

Lord Hamilton—who was a defence minister under Sir John Major in the 1990s—said: “There was a report some months ago in the papers, the Americans were trialling an AI system and it basically went completely AWOL, blew up the operator, killed the operator, and then blew itself up afterwards. The Americans later denied it ever happened ... Let’s hypothesise that this has happened in the M.O.D. What would you then do if that happened?”

Lieut. Gen. Copinger-Symes said: “I don’t know about the incident you’re referring to, but we have very tried and tested procedures for dealing with incidents like that and working out what lessons we learn, and how we take them forward to make sure they’re safe and responsible.”

“Again, not least because if you don’t, your soldiers, sailors and aviators won’t trust that bit of kit and they won’t use that bit of kit, which is one of the reasons we take it so seriously. But above all, they won’t sleep at night, if they don’t know that bit of kit is really achieving what we need safely and responsibly,” he added.

‘Public Anxiety About Artificial Intelligence’

Later, Defence Procurement Minister James Cartlidge told the committee: “We recognise as a government there is public anxiety about artificial intelligence. That is precisely why the prime minister will be holding an international summit in the autumn, about AI safety.”

Mr. Cartlidge said AI defence systems would not be discussed at that summit, but added: “Nevertheless it’s a very important statement of the government’s overall commitment to ensuring there is public confidence in the way we explore AI.”

A screen grab of (L to R) Lieutenant General Tom Copinger-Symes, Defence Procurement Minister James Cartlidge and second permanent secretary at the Ministry of Defence, Paul Lincoln, giving evidence to a House of Lords committee on AI weapons on Sep. 7, 2023. (Parliament TV)

He said that while the aim of using AI in weapons was partly to give Britain an edge over its competitors, it could also be used to carry out “mundane tasks,” freeing up service personnel for other roles, and to keep humans “out of harm’s way,” such as by defusing ordnance.

Mr. Cartlidge said: “The Royal Navy has a gun called Phalanx which contains in its potential use a capability which can arguably be described, for part of its use, as partly autonomous/automated. But the crucial thing is that it can only operate if there is appropriate human involvement ... it has to be switched on.”

Lord Houghton of Richmond, a former army officer and chief of the defence staff, asked Mr. Cartlidge about the “holistic regulatory framework” surrounding AI weapons systems.

Cartlidge Says UK Must ‘Stay Ahead of Our Adversaries’

Mr. Cartlidge replied: “To be absolutely clear to you, as far as I am concerned, we must not in any way act naively or put restraints on our country in terms of its ability to exploit AI within the bounds and parameters of international law, but in a way that ensures absolutely we stay ahead of our adversaries.”

He said: “We only have to look at what’s happening in Ukraine. There is some intelligence potentially about AI used by Russia ... but irrespective of that in a situation like this, where you know they are operating in a fundamentally nefarious way, they’ve invaded a sovereign country, there has to be a strong presumption that they will be pursuing investment in R&D, technology.”

“We must not restrict our ability to respond. But equally, yes, we must operate within international law. It is a balance to be struck,” added Mr. Cartlidge.

Earlier the witnesses were quizzed about the use of “synthetic data” in AI weapons development by Lord Mitchell, a Labour peer.

Lieut. Gen. Copinger-Symes explained why synthetic data was important.

He said: “If I were training a system to recognise what a cat looks like, I’d have the whole of the internet to trawl for data ... If we’re training a system to recognise a threat tank across the whole world, our existing data set to train on that might be slanted towards where we’ve operated previously, or where our intelligence-gathering has focused on.”

Lieut. Gen. Copinger-Symes went on: “So, for instance, you might be looking for a tank but all of your images of a tank are against a sort of European dark green background, rather than a desert background or a jungle background or an Arctic background. And to prevent that bias ... we might have to create synthetic data ... and that means the whole system is going to be far more effective at finding the enemy tank wherever it is in the world.”
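The kind of synthetic data generation the general describes can be illustrated with a minimal sketch. The example below is purely hypothetical and not drawn from any MOD system: it composites a toy “vehicle” image over several terrain-coloured backgrounds, the basic idea behind augmenting a biased training set with scenes it lacks.

```python
import numpy as np

def composite(foreground, mask, background):
    """Paste a foreground object onto a new background using its binary mask."""
    return np.where(mask[..., None], foreground, background)

# Toy 8x8 RGB image standing in for a vehicle photographed on one terrain,
# plus the binary mask marking which pixels belong to the vehicle.
vehicle = np.zeros((8, 8, 3), dtype=np.uint8)
vehicle[2:6, 2:6] = [60, 70, 40]          # olive-drab block as the "vehicle"
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True

# Flat-colour backgrounds representing terrains missing from the data set.
backgrounds = {
    "desert": np.full((8, 8, 3), [200, 180, 120], dtype=np.uint8),
    "jungle": np.full((8, 8, 3), [20, 90, 30], dtype=np.uint8),
    "arctic": np.full((8, 8, 3), [240, 240, 245], dtype=np.uint8),
}

# One synthetic training image per terrain, same vehicle in each.
synthetic = {name: composite(vehicle, mask, bg) for name, bg in backgrounds.items()}
```

Real pipelines use rendered 3D scenes or generative models rather than flat colours, but the principle is the same: vary everything around the target so the recogniser learns the target, not its usual backdrop.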