Consumer Groups Call on Washington and Brussels to Regulate AI Tech

President Joe Biden (L) and California Gov. Gavin Newsom take part in an event discussing the opportunities and risks of Artificial Intelligence at the Fairmont Hotel in San Francisco on June 20, 2023. (Andrew Caballero-Reynolds/AFP via Getty Images)
Bryan Jung
6/21/2023
Updated: 6/21/2023

A coalition of consumer advocacy groups in the European Union and the United States called on their governments to develop regulations for generative artificial intelligence (AI) technology.

These groups are concerned that the AI technology powering tools like ChatGPT is developing so quickly that consumers’ rights may be grievously harmed if state regulators fail to get involved.

The Transatlantic Consumer Dialogue (TACD), a coalition of consumer groups in the EU and North America, sent letters on June 21 to government leaders, out of concern that the rapid development and adoption of generative AI is outpacing legislative and regulatory action and could “leave consumers unprotected in the meantime.”

“Generative artificial intelligence (AI) raises serious concerns for consumers’ rights and safety,” said the coalition. “The use of this technology creates consumer challenges related to privacy, manipulation, personal integrity, scams, disinformation, and more. These services are also very resource-demanding, which has serious repercussions for the climate and environment.”

AI Safety Advocates Call for Immediate Protection for Consumers

TACD called for strengthened consumer protections to make AI technology “safe, reliable, and fair” so that consumers are not used as laboratory animals by Big Tech for new technologies.

They also called for a broad-based AI strategy that takes into account recent developments in tech, is centered on basic consumer rights, and provides strict guidelines for the use of generative AI in the public sector.

The coalition also demands “suitable future-proof regulations in instances where existing laws fall short.”

Privacy is one of the biggest concerns with generative AI since user data is often stored for model training.

Italy temporarily banned ChatGPT after announcing that OpenAI was not legally authorized to gather user data.

In addition to compromised user confidentiality, there is a risk of stored information falling into the wrong hands in the case of a security breach.

AI technology can also create human-like works at mass scale, including fake and misleading articles, essays, papers, and videos.

This is leading to fears over the wider dissemination of misinformation to levels never before seen.

There are also similar worries over “deepfakes,” which use generative AI to create fake videos, photos, and voice recordings that use the image and likeness of a particular individual.

They have already been used to attack celebrities and politicians to spread defamatory information.

White House And Congress To Act On AI Tech Legislation

“Generative artificial intelligence systems are now widely used by consumers in the U.S. and beyond,” the coalition wrote in its letter to President Joe Biden.

“Although these systems are presented as helpful, saving time, costs, and labor, we are worried about serious downsides and harms they may bring about.”

The group wrote that generative AI systems are “incentivized to suck up as much data as possible to train the AI models, leading to the inclusion of personal data that may be irremovable once the sets have been established and the tools trained.”

TACD warned that biased, discriminatory, or false content could be used to train an AI system, entrenching such content and spreading it more widely.

They also raised concerns over large companies gaining monopolistic control of the AI space and noted that using tools like ChatGPT “requires enormous amounts of water and electricity, leading to heightened carbon emissions.”

TACD called on the White House to both enforce existing laws that are applicable to the generative AI space and implement new regulations that force companies developing AI tools to “adhere to transparent and reviewable obligations.”

So far, the Biden administration has taken only limited steps to address the issue, such as investing in AI research and proposing an AI bill of rights.

This week, both the White House and Congress said they were ready to start writing legislation to regulate artificial intelligence after officials warned about the dangers of AI technology.

“We need to manage the risks,” Biden said at a June 20 event to address AI concerns in San Francisco and promised there will be stronger actions to come.

A White House official told Yahoo News that the administration will develop “a process to rapidly develop decisive actions we can take over the coming weeks.”

Meanwhile, Senate Majority Leader Chuck Schumer (D-N.Y.) unveiled new details of his AI-related legislation, which is heavily focused on the national security implications of the technology.

Schumer said that the Senate is seeing “a mix of urgency and humility; urgency because AI is developing so damn fast, and humility because AI is stupendously complex.”

EU Negotiating Terms on AI Rules in Europe

Across the Atlantic, the European Consumer Organisation (BEUC), a coalition of consumer organizations from thirteen EU countries, wrote a separate letter on June 20, following the release of a report by the Norwegian Consumer Council, one of the member groups in the alliance.

The report raised similar concerns to those raised by the TACD, as well as warnings over the potential use of AI to manipulate or mislead consumers, misuse their data to violate privacy, automate human tasks, and exploit labor.

The EU is in the process of considering legislation to enact new AI regulations.

On June 14, the European Parliament agreed on a draft proposal for regulating artificial intelligence, called the “AI Act.”

Details of the AI regulations are now being negotiated among the European Parliament, the European Commission, and the Council of the EU to eventually produce final legislation.

However, that process may take two years before a final compromise is reached and the rules for AI take effect.