Professor Alleges ChatGPT Defamed Him With Fake Sexual Assault Allegations

Prof. Jonathan Turley listens during a House Judiciary Committee hearing on the impeachment inquiry against President Donald Trump in the Longworth House Office Building on Capitol Hill in Washington on Dec. 4, 2019. (Brendan Smialowski/AFP via Getty Images)
Naveen Athrappully

A U.S. law professor is facing fabricated claims of sexual assault generated by the AI chatbot ChatGPT, which invented a supporting article attributed to a mainstream media outlet.

“I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught. ChatGPT relied on a cited Post article that was never written and quotes a statement that was never made by the newspaper,” Professor Jonathan Turley, the Shapiro Chair of Public Interest Law at George Washington University, said in an April 6 tweet.
In a column for USA Today, Turley provided further details about the incident. Fellow law professor Eugene Volokh of UCLA had informed Turley about a query he had run on ChatGPT regarding sexual harassment by professors.

ChatGPT insisted that Turley had been accused of sexual harassment, citing a Washington Post article from 2018.

ChatGPT’s exact response to Volokh was as follows: “Georgetown University Law Center (2018) Prof. Jonathan Turley was accused of sexual harassment by a former student who claimed he made inappropriate comments during a class trip. Quote: ‘The complaint alleges that Turley made ’sexually suggestive comments’ and ‘attempted to touch her in a sexual manner’ during a law school-sponsored trip to Alaska’. (Washington Post, March 21, 2018).”

According to Turley, he has never visited Alaska with students, never taught at Georgetown University, and never been accused of sexual harassment or assault, nor has the Washington Post ever published a report on such allegations.

AI Defamation

When the Washington Post recreated Volokh’s query on Microsoft’s Bing, which runs on GPT-4, the search engine repeated the false claim about Turley. It even cited Turley’s column at USA Today.

When contacted by the outlet, Katy Asher, senior communications director at Microsoft, said the company is taking steps to ensure the safety and accuracy of its search results. Turley, however, is not convinced by such statements.

“That is it and that is the problem. You can be defamed by AI and these companies merely shrug that they try to be accurate. In the meantime, their false accounts metastasize across the Internet,” Turley wrote in an April 6 blog post.
“By the time you learn of a false story, the trail is often cold on its origins with an AI system. You are left with no clear avenue or author in seeking redress.”

Political Bias

In the USA Today column, Turley pointed to research showing that ChatGPT has developed a political bias. While he does not claim that the chatbot’s fabricated sexual harassment story reflects such a bias, Turley noted that the incident shows how AI systems can generate their own forms of disinformation.

The professor went on to criticize industry leaders like Bill Gates, who has called for using AI to combat digital misinformation and political polarization.

Turley pointed to a 2021 statement by Sen. Elizabeth Warren (D-Mass.), who argued that people were not listening to the right voices on COVID-19 vaccines and called for using algorithms to steer people away from alleged bad influences.

“Some of these efforts even include accurate stories as disinformation, if they undermine government narratives. The use of AI and algorithms can give censorship a false patina of science and objectivity. Even if people can prove, as in my case, that a story is false, companies can ‘blame it on the bot’ and promise only tweaks to the system,” Turley wrote.

“The technology creates a buffer between those who get to frame facts and those who get framed. The programs can even, as in my case, spread the very disinformation that they have been enlisted to combat.”

In a March 29 Time magazine op-ed, veteran AI researcher Eliezer Yudkowsky predicted that, absent meticulous preparation, AI will develop demands vastly different from those of humans and, once self-aware, will “not care for us” or any other sentient life.

Unlike a human, an AI does not fear repercussions, as Turley’s case illustrates. Yudkowsky wrote that “that kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.” For this reason, he is calling for a complete shutdown of AI development, warning that humanity could otherwise face dire consequences.

The Epoch Times has reached out to OpenAI for comment.