Media Lacks Policies on AI-Generated Images: Study

Media outlets need clear policies and processes around the use of AI-generated imagery, a study led by RMIT University says.
A photo shows a frame of a video generated by a new artificial intelligence tool, dubbed "Sora", unveiled by the company OpenAI, in Paris on February 16, 2024. (Stefano Rellandini/AFP via Getty Images)
4/15/2024

Media organisations need policies and processes in place around the generation, use, and labelling of images created with artificial intelligence (AI), yet only just over a third of those studied have them, a new study led by RMIT University has found.

The study, which also involved Washington State University and the QUT Digital Media Research Centre, interviewed 20 photo editors (or people in related roles) from 16 public and commercial media organisations across Europe, Australia, and the United States about their perceptions of generative AI technologies in visual journalism.

Of the 16, five barred staff from using AI to generate images, three barred only photorealistic images, and the remaining eight allowed AI-generated images if the story was about AI.

“Photo editors want to be transparent with their audiences when generative AI technologies are being used, but media organisations can’t control human behaviour or how other platforms display information,” said lead researcher and RMIT senior lecturer TJ Thomson.

Dangerous When AI-Generated Images Seem Real

Many of the photo editors interviewed were happy to use AI to generate illustrations that were not photorealistic, while others were comfortable using AI to create images when good stock images were unavailable.

“For example, existing stock images of bitcoin all look quite similar, so generative AI can help fill a gap in what is lacking in a stock image catalogue,” Mr. Thomson said.

The danger arises when audiences are unaware of an image's source.

“Audiences don’t always click through to learn more about the context and attribution of an image. We saw this happen when AI images of the Pope wearing Balenciaga went viral, with many believing it was real because it was a near-photorealistic image shared without context.”

“Only subtle visual cues revealed the pictures as unreal,” the study said. “The photorealism of the images, created in the text-to-image generator Midjourney, led many users to believe the images to be genuine.

“Other images, such as those envisioning the arrest of Donald Trump or purporting to depict the conflict between Israel and Palestine in Gaza, were reported on by or found themselves in news media, raising questions about how journalists perceive, use, and/or respond to this fast-evolving technology.”

The photo editors interviewed revealed that published images did not always indicate whether they had gone through any sort of editing.

“[That] can lead to news sites sharing AI images without knowing, impacting their credibility,” Mr. Thomson said.

Detailed Policies Needed

Having policies and processes in place that detail how generative AI can be used in newsrooms could prevent incidents of mis- and disinformation, such as the digitally altered image of Victorian MP Georgie Purcell.

“More media organisations need to be transparent with their policies so their audiences can also trust that the content was made or edited in the ways the organisation says it is,” he said.

“Many of the policies I’ve seen from media organisations about generative AI are general and abstract. If a media outlet creates an AI policy, it needs to consider all forms of communication, including images and videos, and provide more concrete guidance.”

But the study stopped short of calling for a complete ban on AI in newsrooms, saying that would be counterproductive and would deprive media workers of beneficial uses, such as recognising faces or objects in images and assisting with captioning.

Mr. Thomson said Australia was lagging behind other jurisdictions on AI regulation, with the U.S. and the EU leading.

“There is ... a wait-and-see attitude where we are watching what other countries are doing so we can improve or emulate their approaches,” he said.

“I think it’s good to be proactive, whether that’s from government or a media organisation. If we can show we are being proactive to make the internet a safer place, it shows leadership and can shape conversations around AI.”

Rex Widerstrom is a New Zealand-based reporter with over 40 years of experience in media, including radio and print. He is currently a presenter for Hutt Radio.