Media organisations need policies and processes for the generation, use, and labelling of images generated by artificial intelligence (AI), yet only just over a third have them, a new study led by RMIT University has found.
The study, which also involved Washington State University and the QUT Digital Media Research Centre, interviewed 20 photo editors (or people in related roles) from 16 public and commercial media organisations across Europe, Australia, and the United States about their perceptions of generative AI technologies in visual journalism.
Of the 16 organisations, five barred staff from using AI to generate images, three barred only photorealistic images, and the rest allowed AI-generated images if the story was about AI.
The Danger When AI-Generated Images Seem Real
Many were happy to use AI to generate illustrations that were not photorealistic, while others were comfortable using AI to create images when they lacked good existing stock images.

“For example, existing stock images of bitcoin all look quite similar, so generative AI can help fill a gap in what is lacking in a stock image catalogue,” Mr. Thomson said.
The danger is when people aren’t aware of the image source.
“Audiences don’t always click through to learn more about the context and attribution of an image. We saw this happen when AI images of the Pope wearing Balenciaga went viral, with many believing it was real because it was a near-photorealistic image shared without context.”
“Only subtle visual cues revealed the pictures as unreal,” the study said. “The photorealism of the images, created in the text-to-image generator Midjourney, led many users to believe the images to be genuine.
“Other images, such as those envisioning the arrest of Donald Trump or purporting to depict the conflict between Israel and Palestine in Gaza, were reported on by or found themselves in news media, raising questions about how journalists perceive, use, and/or respond to this fast-evolving technology.”
The photo editors interviewed revealed that published images did not always indicate whether they had gone through any sort of editing.
Detailed Policies Needed
Having policies and processes in place that detail how generative AI can be used in newsrooms could prevent incidents of mis- and disinformation, such as the altered image of Victorian MP Georgie Purcell.

“More media organisations need to be transparent with their policies so their audiences can also trust that the content was made or edited in the ways the organisation says it is,” he said.
“Many of the policies I’ve seen from media organisations about generative AI are general and abstract. If a media outlet creates an AI policy, it needs to consider all forms of communication, including images and videos, and provide more concrete guidance.”
But the study stopped short of calling for a complete ban on AI use in newsrooms, saying such a ban would be counter-productive and would deprive media workers of helpful uses such as recognising faces or objects and assisting with captioning.
Mr. Thomson said Australia was lagging behind other jurisdictions on AI regulation, with the U.S. and the EU leading.
“There is ... a wait-and-see attitude where we are watching what other countries are doing so we can improve or emulate their approaches,” he said.
“I think it’s good to be proactive, whether that’s from government or a media organisation. If we can show we are being proactive to make the internet a safer place, it shows leadership and can shape conversations around AI.”