US, UK Unveil International Agreement for Developing Artificial Intelligence Systems

Australia, Canada, Germany, Italy, and Japan are among the other 16 countries featured in the new AI agreement.
Four Twitter accounts apparently generated by artificial intelligence software are displayed on a laptop in Helsinki on June 12, 2023. The fake profiles, posing as American environmentalists, posted tweets in support of the United Arab Emirates, its handling of the COP28 climate summit, and the role of its COP28 chief, oil executive Sultan Al Jaber, in promoting climate action. (Olivier Morin/AFP)
Stephen Katte
11/28/2023
Updated: 12/3/2023

An international agreement intended to keep artificial intelligence (AI) systems safe from rogue actors and to help developers make sound cybersecurity decisions has been jointly unveiled by agencies in the United States and the United Kingdom.

The 20-page document, jointly published by the Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) and the UK’s National Cyber Security Centre (NCSC), sets out guidelines for tech companies to follow while creating AI products and services.
In October, U.S. President Joe Biden issued an executive order directing DHS to promote the adoption of AI safety standards worldwide. As part of the president’s order, DHS was asked to protect U.S. networks and critical infrastructure and reduce the risks of AI in creating weapons of mass destruction.

Broken down into four sections, the guidelines offer recommendations for developers to follow at each stage of the process, from AI system design and development through deployment and maintenance. Each section highlights considerations and mitigations to help reduce the cybersecurity risk to an organization’s AI system development process.

The guidelines also cover ways to combat threats to AI systems, the protection of AI-related assets such as models and data, the responsible release of AI systems, and the importance of monitoring after release. The guidelines are not legally binding, however; at this stage, they are only recommendations for tech companies developing AI.

Following the guidelines is vital for tech companies to harness the benefits while addressing the potential harms of this “pioneering technology,” Homeland Security Secretary Alejandro Mayorkas said in an accompanying statement.

“We are at an inflection point in the development of artificial intelligence, which may well be the most consequential technology of our time,” Mr. Mayorkas said. “Cybersecurity is key to building AI systems that are safe, secure, and trustworthy.”

“The guidelines jointly issued today by CISA, NCSC, and our other international partners provide a commonsense path to designing, developing, deploying, and operating AI with cybersecurity at its core,” he added.

Among the other 16 countries featured in the agreement are Australia, Canada, Chile, the Czech Republic, Estonia, Germany, Israel, Italy, Japan, Nigeria, Poland, and Singapore.

Cautious About AI

Since the introduction of OpenAI’s ChatGPT on Nov. 30, 2022, the race to develop AI systems has accelerated. However, lawmakers and some tech leaders have expressed concerns over the risk of uncontrolled AI development.
Tesla CEO and OpenAI co-founder Elon Musk has issued multiple warnings about the potential dangers of AI and its potential for “civilizational destruction.”
U.S. Securities and Exchange Commission Chair Gary Gensler also sounded the alarm over AI in October. He said he believes that a financial crisis stemming from the widespread use of AI is “nearly unavoidable” without swift intervention by regulators.
CISA has also released its own “Roadmap for Artificial Intelligence” to promote the beneficial uses of AI in enhancing cybersecurity capabilities and to ensure AI systems are protected from cyber-based threats.
Senators from both major parties have also united to introduce an AI proposal directing federal agencies to create standards that provide transparency and accountability for AI tools.
According to a copy of the bipartisan measure released on Nov. 15, the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 aims to establish frameworks bringing greater transparency, accountability, and security to the development and operation of AI, among other goals.
AI-generated deepfakes have been flagged as particularly concerning. The technology creates computer-generated video that is often indistinguishable from actual footage. Deepfakes have already found legitimate uses in the film industry, such as de-aging actors.
A phone screen displaying a statement from the head of security policy at Meta with a fake video (R) of Ukrainian President Volodymyr Zelensky calling on his soldiers to lay down their weapons is shown in the background in Washington. (Olivier Douliery/AFP via Getty Images)
However, there is also a darker side to the technology: it can be used to try to manipulate events on a global scale. Last year, a fake, heavily manipulated video depicting Ukrainian President Volodymyr Zelenskyy telling his soldiers to lay down their weapons and surrender in the war with Russia circulated online.
The video was debunked, but had it not been, the consequences could have been devastating for the Ukrainian war effort. Even a few hours in which the video was considered legitimate could have handed Russian military forces an enormous advantage, one that might have changed the course of the war in Europe.

Tech Companies Already Reining in AI Use

Some major social media companies have already taken action to control the use of AI in the short term.
Meta, the company that owns Facebook, has taken some steps to rein in the use of AI ahead of the 2024 presidential election. Earlier this month, the tech giant barred political campaigns and advertisers in regulated industries from using its new generative AI advertising products.

YouTube has also announced plans to roll out updates in the coming months that will inform viewers when the content they’re watching was created using AI.

Most major tech companies, with Microsoft, Amazon, and Google among the most prominent, either plan to develop AI products and services or have already released them. Because the technology is relatively new, it remains to be seen what safeguards, if any, will be put in place to prevent misuse.