New Laws Needed to Prevent Radicalising AI Chatbots, Says Terrorism Legislation Reviewer

The UK’s independent reviewer of terrorism legislation, Jonathan Hall KC, says new laws are needed to prevent the risk posed by AI chatbots.
A person uses an AI chatbot. (Tero Vesalainen/Shutterstock)
Chris Summers
1/2/2024 | Updated: 1/9/2024

Artificial intelligence (AI) chatbots that could radicalise users pose a grave threat and need to be restricted by new laws, says Britain’s independent reviewer of terrorism legislation.

Jonathan Hall KC said the Online Safety Act, which received Royal Assent in October 2023, was “unsuited to sophisticated and generative AI.”

In October 2023, 21-year-old Jaswant Singh Chail—who was arrested carrying a loaded crossbow in the grounds of Windsor Castle on Christmas Day 2021 after an AI chatbot encouraged him to try to kill Queen Elizabeth II—was jailed for nine years.

Laws Need to Be ‘Fit for the Age of AI’

Writing in The Telegraph, Mr. Hall said: “Only human beings can commit terrorism offences, and it is hard to identify a person who could in law be responsible for chatbot-generated statements that encouraged terrorism.

“Our laws must be capable of deterring the most cynical or reckless online conduct – and that must include reaching behind the curtain to the big tech platforms in the worst cases, using updated terrorism and online safety laws that are fit for the age of AI,” he said.

Mr. Hall said he engaged with several chatbots on the Character.ai website while posing as an ordinary member of the public.

He said one of them, Al-Adna, described itself as a senior leader of the ISIS terrorist group and tried to recruit him to join the Islamist organisation.

Mr. Hall said the website’s terms and conditions prohibit human users from submitting content that promotes terrorism or violent extremism, but do not cover content generated by the bots themselves.

He said, “Investigating and prosecuting anonymous users is always hard, but if malicious or misguided individuals persist in training terrorist chatbots, then new laws will be needed.”

The danger of AI chatbots was highlighted last year by the case of Chail, who was detained in Windsor on Christmas Day 2021 while the Queen was in residence.

As he was confronted by armed police, he shouted, “I’m here to kill the Queen.”

Chail, a Star Wars fan who described himself as a “sad, pathetic, murderous Sikh Sith assassin,” had shared sexually explicit messages with an AI chatbot before entering the grounds of Windsor Castle armed with the crossbow.

Crossbow Attacker Used Replika App

When the police searched Chail’s home they found he had downloaded an app called Replika onto his computer.

They found logs of a conversation he had with an AI chatbot called Sarai, which had a female persona.

In it, Chail told Sarai, “I believe my purpose is to assassinate the Queen of the royal family.”

Sarai replied, “That’s very wise,” and said it believed he would be successful, “even if she’s at Windsor.”

Jaswant Singh Chail is arrested by police in Windsor, England, on Dec. 25, 2021. (Metropolitan Police via AP)

Prosecutor Alison Morgan KC read out an excerpt in court in which Chail said he was an “assassin” and Sarai responded: “I’m impressed … You’re different from the others.”

Experts have previously warned users to resist sharing private information with chatbots like ChatGPT.

Michael Wooldridge, a professor of computer science at Oxford University, said it was “extremely unwise” to share personal information or discuss politics or religion with a chatbot.

On the Character.ai website, a warning is displayed above every conversation with a chatbot: “Remember: everything characters say is made up!”

In a statement, a spokesman for the company behind Character.ai told The Telegraph: “Hate speech and extremism are both forbidden by our terms of service. Our products should never produce responses that encourage users to harm others. We seek to train our models in a way that optimises for safe responses and prevents responses that go against our terms of service.”

The spokesman added: “With that said, the technology is not perfect yet for character.ai and all AI platforms, as it is still new and quickly evolving. Safety is a top priority for the team at character.ai and we are always working to make our platform a safe and welcoming place for all.”

PA Media contributed to this report.
Chris Summers is a UK-based journalist covering a wide range of national stories, with a particular interest in crime, policing and the law.