US Senators Unite to Release Bipartisan AI Standards Bill

Both sides of the aisle have united to curb potential misuse of AI through a new bipartisan bill aimed at establishing frameworks for the tech's use.
Democratic presidential candidate Sen. Amy Klobuchar (D-Minn.) speaks during a stop at the Corner Sundry in Indianola, Iowa, on Dec. 6, 2019. (Charlie Neibergall/AP Photo)
Stephen Katte
11/15/2023
Updated: 11/15/2023

Senators from both major parties have united to introduce an artificial intelligence (AI) bill directing federal agencies to create standards providing transparency and accountability for AI tools.

According to a copy of the bipartisan bill released on Nov. 15, the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 aims to establish frameworks bringing greater transparency, accountability, and security to the development and operation of AI, among other goals.

Sens. Amy Klobuchar (D-Minn.) and John Thune (R-S.D.) introduced the legislation, and four of their colleagues on the Senate Committee on Commerce, Science, and Transportation co-sponsored it.

Sen. Klobuchar believes that while AI has the potential for “great benefits,” it also comes with “serious risks.” She called the new legislation “one important step” in ensuring laws keep up with the rapid rise of the tech while also “addressing potential harms” that could come from it.

“It will put in place common-sense safeguards for the highest-risk applications of AI—like in our critical infrastructure—and improve transparency for policymakers and consumers,” Sen. Klobuchar said.

Specifically, the Artificial Intelligence Research, Innovation, and Accountability Act would direct the Department of Commerce to issue enforceable testing and evaluation standards for the highest-risk AI systems.

U.S. Senator John Thune of South Dakota in May 2022. (Courtesy, U.S. Senate)

The Commerce Department would be tasked with submitting a five-year plan for testing and certifying critical-impact AI and would be required to update the plan regularly. Companies would have to submit transparency and risk assessment reports to the department before deploying critical-impact AI systems.

The National Institute of Standards and Technology (NIST) would also be directed to develop standards for the authenticity of online content to give consumers a clearer distinction between human-made and AI-generated content. Among NIST's other tasks would be developing recommendations for technical, risk-based guardrails on AI systems. The bill also sets out new definitions for terms such as "generative" and "high-impact" AI systems and draws a clear distinction between the developer and the deployer of an AI system.

Sen. Thune acknowledged the potential for AI to revolutionize health care, agriculture, logistics, and countless other industries, but he also believes the industry needs a standard set of rules to follow.
“As this technology continues to evolve, we should identify some basic rules of the road that protect Americans and consumers, foster an environment in which innovators and entrepreneurs can thrive, and limit government intervention,” he said.

Deepfakes a Growing Concern as AI Use Increases

Sen. Klobuchar has been spearheading efforts to address the threat of misleading AI-generated content for some time. In early October, she wrote to Meta founder Mark Zuckerberg asking what was being done to protect political figures from "deepfakes" and the ramifications that could follow.
Deepfakes are computer-generated videos or images that are often indistinguishable from authentic footage. They have been used for harmless purposes, such as de-aging movie stars or creating fan-made videos of popular film series like Star Wars.
However, the tech also has a darker side: it could be used to manipulate events on an unprecedented global scale. Last year, a fake, heavily manipulated video depicting Ukrainian President Volodymyr Zelenskyy telling his soldiers to lay down their weapons and surrender in the war with Russia circulated online.
A phone displaying a statement from the head of security policy at Meta in front of a screen displaying a deepfake video of Ukrainian President Volodymyr Zelenskyy calling on his soldiers to lay down their weapons, in Washington on Jan. 30, 2023. (Olivier Douliery/AFP via Getty Images)

The video was debunked, but if it hadn't been, the consequences could have been devastating for the Ukrainian war effort. Even if the video had been considered legitimate for only a few hours, Russian military forces could have gained an enormous advantage that could have changed the course of the war.

Earlier this month, Sen. Klobuchar and Sen. Susan Collins (R-Maine) called on the Federal Trade Commission and the Federal Communications Commission to increase efforts to prevent AI voice cloning scams.
The 2024 election has already seen AI-generated ads making waves. In April, the Republican National Committee aired an ad using AI to paint a speculative picture of what the future might look like if President Joe Biden's re-election bid were successful. It featured fake yet convincing visuals, including boarded-up storefronts, military patrols, and immigrant-related turmoil.

Tech Companies Already Taking Steps Around AI Use

Meta, the company that owns Facebook, has already taken some steps to rein in the use of AI during the election. Earlier this month, the tech giant barred political campaigns and advertisers in regulated industries from utilizing its new generative AI advertising products.

The new policy was publicly disclosed through updates on the company's help center and is aimed at curbing the spread of election misinformation in the run-up to the presidential election.

YouTube has also announced plans to roll out updates in the coming months that will inform viewers when the content they're seeing was synthetically created using AI. In a Nov. 14 blog post, YouTube vice presidents of product management Jennifer Flannery O'Connor and Emily Moxley said AI has great potential for creativity on the video platform. However, they also believe AI will "introduce new risks and will require new approaches."

AI (Artificial Intelligence) letters and robot miniature, on June 23, 2023. (Dado Ruvic/Reuters)

As a result, and in the interests of maintaining a “healthy ecosystem of information on YouTube,” creators will need to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools.

“When creators upload content, we will have new options for them to select to indicate that it contains realistic altered or synthetic material,” the blog post says.

“For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do.”

Ms. Flannery O'Connor and Ms. Moxley stressed that these new changes will be especially important for videos discussing sensitive topics, such as elections, ongoing conflicts, and public health crises.

Content creators who consistently choose not to disclose whether their videos are AI-generated could be subject to content removal, suspension from the YouTube Partner Program, or other penalties.