AI-Generated Content May Need to Be Labelled in Australia

The government has been concerned about the ‘high risk’ application of AI and its impact on privacy.
Cutting-edge applications of artificial intelligence on display at the Artificial Intelligence Pavilion of Zhangjiang Future Park during a state-organized media tour in Shanghai on June 18, 2021. (Andrea Verdelli/Getty Images)
1/16/2024

Growing public mistrust of artificial intelligence (AI) has prompted the Australian federal government to move to regulate the technology, including by asking tech companies to watermark or otherwise label content created by AI.

An inquiry into safe and responsible AI received more than 500 submissions, prompting Industry and Science Minister Ed Husic to say that while the government wanted “low-risk” uses of AI to continue developing, some applications needed new, stricter regulation.

“High-risk” AI systems include those used to “predict a person’s likelihood of recidivism, suitability for a job, or in enabling a self-driving vehicle,” while examples of “low-risk” uses include filtering emails or managing minor business operations.

Tech giants Google and Meta, major banks, supermarkets, legal bodies, and universities all made submissions to the inquiry.

Some commentators and tech industry leaders called for a pause to evaluate the future direction of the technology amid concerns over mounting socioeconomic inequality and negative outcomes caused by bad data.

Striking a Balance Between Innovation and Safety

The government’s initial response to the inquiry, a 25-page report, cites research by McKinsey suggesting that adopting AI and automation could boost the country’s GDP by up to $600 billion a year.

But Mr. Husic said the government was aiming to strike a balance between encouraging innovation and addressing the public’s concerns about the safety and responsibility of AI systems.

The report cited surveys that showed only a third of Australians believe there are adequate safeguards around the design and development of AI.

“Australians understand the value of artificial intelligence but they want to see the risks identified and tackled,” he said before the report’s release. “We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI.”

“We want to make sure the government has modern laws for modern technology ... [and] that whatever we do in this space, that regulation can keep pace for future development as well.”

Mandatory Safeguards Being Considered

The government’s immediate plans are to set up an expert advisory group on AI policy, including safety issues, and to develop a voluntary “AI safety standard” as a template for businesses wanting to integrate AI into their systems.

It has also pledged to start consulting with the tech industry on new transparency measures.

Federal Member for Chifley Ed Husic at a press conference at Parliament House in Canberra, Thursday, December 10, 2020. (AAP Image/Mick Tsikas)
Mr. Husic said companies active in the space, such as Google, Microsoft, and ChatGPT maker OpenAI, would need to work with the government to ensure their AI products comply with Australian laws. According to the latest data, ChatGPT has about 180 million users globally.

The government has flagged it is also considering other mandatory safeguards including “pre-deployment risk and harm prevention testing” of new AI products, along with training standards for software developers.

The rapid development and deployment of artificial intelligence, and its widespread availability to anyone online, have raised a raft of issues that have lawmakers scrambling to keep up.

These include whether the use of AI to generate deepfakes constitutes misleading or deceptive conduct under consumer law, and whether AI used in healthcare could potentially breach privacy laws.

With AI developers using existing content to train generative AI models—usually without seeking permission from the original creators—questions have also emerged over copyright infringement and whether there should be legal remedies for those disadvantaged by such activity.

Rex Widerstrom is a New Zealand-based reporter with over 40 years of experience in media, including radio and print. He is currently a presenter for Hutt Radio.