US Regulators Say They Already Have Power to Crack Down on AI Misuse, Warn Firms to Comply With Laws

An AI robot titled "Alter 3: Offloaded Agency" is pictured during a photocall to promote the exhibition "AI: More than Human" at the Barbican Centre in London on May 15, 2019. (Ben Stansall/AFP via Getty Images)
Katabella Roberts
4/26/2023
Updated: 4/26/2023

A number of federal government agencies said on April 25 that they have the power to prevent unlawful “bias in algorithms and technologies” marketed as artificial intelligence (AI).

In a joint statement, four federal agencies—the Civil Rights Division of the U.S. Department of Justice, the Consumer Financial Protection Bureau (CFPB), the Federal Trade Commission (FTC), and the U.S. Equal Employment Opportunity Commission (EEOC)—noted that artificial intelligence is fast becoming mainstream in society.

The agencies pointed to AI use by both private and public entities to make “critical decisions that impact individuals’ rights and opportunities, including fair and equal access to a job, housing, credit opportunities, and other goods and services.”

However, they stressed that while such tools provide technological advancement, “their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.”

The four agencies then pointed to a number of steps they have already taken to protect American consumers from some of the negative aspects of increasingly advanced AI, such as abusive uses of the technology, its use by repeat offenders, algorithmic marketing and advertising, and "black box" algorithms, whose internal workings are unclear to most people, including, in some cases, the developers themselves.

They went on to warn that companies that are already utilizing AI technology must do so in compliance with laws and regulations currently in place.

A keyboard is seen reflected on a computer screen displaying the website of ChatGPT, an AI chatbot from OpenAI, in this illustration picture taken on Feb. 8, 2023. (Florence Lo/Reuters)

AI Tools Can ‘Turbocharge’ Fraud, Discrimination

“We already see how AI tools can turbocharge fraud and automate discrimination, and we won’t hesitate to use the full scope of our legal authorities to protect Americans from these threats,” said FTC Chair Lina M. Khan. “Technological advances can deliver critical innovation—but claims of innovation must not be cover for lawbreaking. There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition.”

The agencies said they are also looking into other ways to prioritize combating "digital redlining"—in which technology is used to further marginalize or discriminate against specific groups—including bias in algorithms and technologies marketed as AI.

As part of that effort, the CFPB is working with federal partners to protect homebuyers and homeowners from “algorithmic bias” in home valuations and appraisals, the agency said.

“As social media platforms, banks, landlords, employers, and other businesses choose to rely on artificial intelligence, algorithms, and other data tools to automate decision-making and to conduct business, we stand ready to hold accountable those entities that fail to address the discriminatory outcomes that too often result,” said Assistant Attorney General Kristen Clarke of the Justice Department’s Civil Rights Division.

“This is an all-hands-on-deck moment and the Justice Department will continue to work with our government partners to investigate, challenge, and combat discrimination based on automated systems,” Clarke added.

FTC Chair Khan also noted that the agency will take action against companies that illegally seek to block new entrants to AI markets, CNBC reported.

“A handful of powerful firms today control the necessary raw materials, not only the vast stores of data but also the cloud services and computing power, that startups and other businesses rely on to develop and deploy AI products,” Khan said. “And this control could create the opportunity for firms to engage in unfair methods of competition.”

Homeland Security Secretary Alejandro Mayorkas participates in an interview with Michael Isikoff from Yahoo News in Washington on June 14, 2021. (DHS/U.S. government)

DHS Forming AI Task Force

The joint statement comes as the Biden administration and other lawmakers are considering new AI regulations amid an explosion in the use of such technology, most notably OpenAI’s ChatGPT, which can generate human-like conversations and text.

Goldman Sachs economists warned earlier this month that AI could partially automate two-thirds of occupations across America and expose the equivalent of 300 million full-time jobs worldwide to automation.

Other industry experts, including Elon Musk, who co-founded OpenAI with Sam Altman in 2015 but has since distanced himself from the company, have warned that such technology could pose disastrous risks to society and humanity.

Despite those concerns, earlier this week, the U.S. Department of Homeland Security (DHS) announced it intends to form a task force that will explore the use of AI to advance “critical homeland security missions,” including enhancing the integrity of America’s supply chains, securing critical infrastructure, and countering the flow of fentanyl into the United States.

The task force will also explore implementing AI systems to help protect against China’s malign economic influence and advance safety, security, and economic prosperity in the Arctic and Indo-Pacific regions, according to DHS Secretary Alejandro Mayorkas.

“The profound evolution in the homeland security threat environment, changing at a pace faster than ever before, has required our Department of Homeland Security to evolve along with it,” Mayorkas said in a statement on April 21.

“We must never allow ourselves to be susceptible to ‘failures of imagination.’ ... We must instead look to the future and imagine the otherwise unimaginable, to ensure that whatever threats we face, our Department – our country – will be positioned to meet the moment,” he added.