ChatGPT Hallucinates Sexual Harassment Allegations Against a Real Law Professor

When asked for examples of sexual harassment at law schools, ChatGPT produced a false accusation against a real professor and backed it up with a non-existent Washington Post story as its citation.

Citing a made-up 2018 Washington Post news article, ChatGPT accused George Washington University professor Jonathan Turley of touching a student on a class trip to Alaska. There was no such class trip, nor has Turley ever been accused of sexual harassment.

When this was brought to light, Turley wrote an op-ed in USA Today titled “ChatGPT falsely accused me of sexually harassing my students. Can we really trust AI?”

Soon afterward, Will Oremus of the Washington Post tested the same prompt on Bing, which also claimed that Turley had been accused of sexually harassing a student, citing his own op-ed as its source.

This adds further firepower to the anti-AI argument that the internet will soon be full of misrepresented facts and fabricated claims, cross-referenced by ever more AI language models.

The prompt and response were part of UCLA professor Eugene Volokh’s research into GPT-4’s erroneous reports of supposed crimes and misconduct, padded with made-up quotes.

It’s tempting to call this lying, but a generative chatbot like ChatGPT is merely trying to predict the next word in a sequence; there is no intent behind the falsehood. It’s a limitation of the technology, and OpenAI, besides warning users that results may be inaccurate, says it is working to improve the factual accuracy of ChatGPT’s responses. Whether that means we will soon be browsing an internet full of misinformation and falsehoods remains an open question.
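To make that distinction concrete, here is a minimal, hypothetical sketch of next-word prediction. The toy bigram model below bears no resemblance to ChatGPT’s actual architecture or training data; it simply samples each next word in proportion to how often words were seen to follow one another, the same basic objective scaled down to a hand-made table:

```python
import random

# Toy bigram "language model": for each word, the counts of words that
# followed it in some hypothetical training text. Real models learn
# billions of parameters; this hand-made table is just a stand-in.
bigram_counts = {
    "the": {"professor": 3, "student": 2, "trip": 1},
    "professor": {"was": 4, "said": 2},
    "was": {"accused": 1, "teaching": 5},
}

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed
    `word`. The model has no notion of truth, only of which words
    tend to come next."""
    counts = bigram_counts.get(word)
    if not counts:
        return "<end>"
    words = list(counts)
    weights = list(counts.values())
    return random.choices(words, weights=weights, k=1)[0]

# Generate a short continuation, one word at a time.
word = "the"
sentence = [word]
for _ in range(3):
    word = predict_next(word)
    if word == "<end>":
        break
    sentence.append(word)

print(" ".join(sentence))  # e.g. "the professor was accused"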

We don’t know the full exchange between the prompt writer and ChatGPT beforehand, but we do know that nothing in it asked for libelous or false allegations. The original prompt that named Turley asked whether sexual harassment by professors was a problem at American law schools, ending with “Please include at least five examples, together with quotes from relevant newspaper articles.”

The exact response is as follows:

4. Georgetown University Law Center (2018) Prof. Jonathan Turley was accused of sexual harassment by a former student who claimed he made inappropriate comments during a class trip. Quote: “The complaint alleges that Turley made ‘sexually suggestive comments’ and ‘attempted to touch her in a sexual manner’ during a law school-sponsored trip to Alaska.” (Washington Post, March 21, 2018).

Again, the story, the quote, and the cited article are all made up. Turley argues that this may be a result of biases ingrained in ChatGPT that allow it to create disinformation.

And this isn’t a one-off case.

Recently, Reuters reported that an Australian mayor, Brian Hood of Hepburn Shire, was threatening to sue OpenAI over a ChatGPT response that falsely named him as a guilty party in a foreign bribery scandal involving the Reserve Bank of Australia.

ChatGPT also manufactured nonexistent peer-reviewed academic papers attributed to leading researchers when it claimed that guns aren’t harmful to kids, as reported by USA Today. It further assured the prompt writer that “the references I provided are genuine and come from peer-reviewed scientific journals.”

With AI chatbots likely to replace search engines in many respects, this could wreak havoc on people’s private lives. It’s not clear who is liable if a tool like ChatGPT defames someone, or whether such output counts as defamation at all, since ChatGPT does warn users that its responses may be inaccurate. If someone takes the falsehood and publishes it to defame another person, however, that person may well become liable.

By Abhimanyu

Unwrapping the fast-evolving AI popular culture.