Chinese AI Must Be Banned or Regulated: Report

AI (Artificial Intelligence) security cameras with facial recognition technology are seen at the 14th China International Exhibition on Public Safety and Security at the China International Exhibition Center in Beijing on Oct. 24, 2018. (Nicolas Asfouri/AFP via Getty Images)
8/1/2023 | Updated: 8/1/2023

Democratic nations around the world need to develop protocols or even outright bans on Chinese-developed AI products and software, according to an expert report assessing the risk around the rapidly developing technology.

The report, “De-Risking Authoritarian AI,” from Simeon Gilding of the Australian Strategic Policy Institute (ASPI), warns of possible “remote, large-scale foreign interference, espionage and sabotage through AI-enabled industrial and consumer goods and services.”
“If we’re wary about AI, we should be even more circumspect about AI-enabled products and services from authoritarian countries that share neither our values nor our interests,” Mr. Gilding wrote. 
“AI systems are embedded in our homes, workplaces, and essential services. More and more, we trust them to operate as advertised, always be there for us, and keep our secrets.”

New AI Leaves Democracies Open to Beijing’s Interference

However, AI-enabled products of this kind, from virtual assistants such as Siri or Alexa to customer service chatbots, leave countries open to manipulation by the Chinese Communist Party (CCP).

Mr. Gilding argues that governments should look deeply into three types of exported Chinese AI technology, the first being products and services, which include infrastructure that could lead to surveillance or data theft.

Amazon's Echo Spot device powered by its Alexa digital assistant at the Consumer Electronics Show in Las Vegas on Jan. 11, 2019. (Robert Lever/AFP via Getty Images)

Second, governments should be concerned about AI-enabled technology, such as TikTok, that could enable foreign interference; and third, large language models.

But Mr. Gilding warns that addressing these technologies’ access to democracies is not as straightforward as addressing threats to telecommunications networks, which are a strategic vulnerability for all digital technologies.

A general prohibition on all Chinese AI-enabled technology would be extremely costly and disruptive, he wrote.

“Many businesses and researchers in the democracies want to continue collaborating on Chinese AI-enabled products because it helps them to innovate, build better products, offer cheaper services, and publish scientific breakthroughs,” he said.

Additionally, the pervasiveness of AI technology makes a ban difficult because a Chinese-backed “constellation of technologies and techniques” is already in widespread use.

Chatbots are most often used for low-level customer service and sales task automation, but researchers have been trying to make them perform more sophisticated tasks such as therapy. (Tero Vesalainen/Shutterstock)

“This is a more subtle and complex problem than deciding whether to ban Chinese companies from participating in 5G networks,” Mr. Gilding said.

He gives the example that while PRC-made cameras and drones in sensitive locations are a legitimate concern, an adversary’s ability to cripple supply chains through access to ship-to-shore cranes would be far more devastating.

“So the task is to identify where on the spectrum between national-security threat and moral panic each of these products sits. And then pick the fights that really matter,” he argues.

‘Red Teams’ Needed to Deal With Threats

The report argues democratic governments need to introduce a three-stage framework where they identify, triage, and manage all AI technology emerging from China.

Key to the process is to establish government-run “red teams” of cyber experts drawn from intelligence and defence fields.

A red team is a colloquial term for a group of cybersecurity experts who are authorised to emulate a hypothetical adversary’s attack or exploitation of capabilities against an entity, be it a business or government agency.

“This is a real-world test because all intelligence operations cost time and money, and some points of presence in a target ecosystem offer more scalable and effective opportunities than others,” Mr. Gilding said.

The final stage will be setting up regulatory measures to prohibit Chinese AI-enabled technology in some parts of the network, including procurement bans and mitigation strategies.

Victoria Kelly-Clark is an Australian-based reporter who focuses on national politics and the geopolitical environment in the Asia-Pacific region, the Middle East, and Central Asia.