What are AI hallucinations?

If American sci-fi novelist Philip K. Dick were alive today, he might have given his most famous work the title: “Do AIs Hallucinate Electric Sheep?”

Generative AI systems such as ChatGPT and Dall-E have gained a reputation for giving out information that appears plausible but is actually completely false, a phenomenon researchers call an AI hallucination.

This is “both a strength and a weakness”, said Nature. While it fuels these systems’ “celebrated” inventiveness, it also leads them to “sometimes blur truth and fiction”, adding something incorrect to an otherwise factual article, for example. All the while, they remain “totally confident” about what they have produced, said theoretical computer scientist Santosh Vempala. “They sound like politicians.”

What happens when AI is wrong?

The type of hallucination AIs generate depends on the system. Large language models (LLMs) like ChatGPT are “sophisticated pattern predictors”, said TechRadar, generating text by predicting which word is statistically most likely to follow the words that came before it.

Hallucinations occur when the system isn’t sure about a question or answer and “fills in gaps” based on similar examples it has been given. This leads to information that is “incorrect, made up or irrelevant”, said researchers Anna Choi and Katelyn Xiaoying Mei on The Conversation.
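As a rough illustration of that gap-filling, here is a toy sketch of next-word prediction in Python. The vocabulary and scores are invented for illustration; a real LLM scores tens of thousands of tokens with a deep neural network, but the decoding step is the same idea: emit whichever word scores highest, with no notion of truth.

```python
import numpy as np

# Toy next-word prediction. The prompt, vocabulary and scores below are
# invented for illustration only.
prompt = "The first person to walk on Mars was"
vocab = ["Neil", "Buzz", "Elon", "nobody"]
logits = np.array([2.4, 1.1, 0.9, 0.2])   # model's raw score per word

# Softmax turns raw scores into a probability distribution.
probs = np.exp(logits) / np.exp(logits).sum()

# Greedy decoding emits the single highest-probability word. "Neil" is
# statistically plausible after this prompt -- and factually wrong, since
# nobody has walked on Mars. The model signals no uncertainty either way.
print(prompt, vocab[int(np.argmax(probs))])
```

Nothing in that loop checks facts; the model only knows which continuations resemble its training data.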

This can have serious consequences. In 2023, American lawyer Steven Schwartz used ChatGPT to help him write a legal brief to submit in court. But instead of finding legal precedents that would help his argument, the AI made up some cases and misidentified others. Schwartz was later fined after the opposing lawyers pointed out the inaccuracies.

ChatGPT’s hallucinations may also spell trouble for its maker, OpenAI. This month, Norwegian Arve Hjalmar Holmen filed a complaint against the company after the chatbot falsely claimed he had killed two of his children.

Holmen, who has never been charged nor convicted of any crime, had asked ChatGPT to answer the question: “Who is Arve Hjalmar Holmen?”, to which it answered that he was a “Norwegian individual” who had “gained attention” when his sons were “tragically found dead in a pond near their home in Trondheim, Norway, in December 2020”. It added that he had received a 21-year prison sentence for their murder.

Digital rights group Noyb, acting on Holmen’s behalf, said OpenAI had violated data accuracy rules by “knowingly allowing ChatGPT to produce defamatory results”.

Can you stop AI hallucinations?

There may be no easy cure for AI’s flights of fancy. Hallucinations are “fundamental” to how LLMs work, said Nature, which could make it impossible to eliminate them completely. In addition, said Choi and Mei on The Conversation, “novel” responses when a system is asked to be creative, such as when writing a story or generating an image, are “expected and desired”.

However, that does not mean companies cannot reduce the number of hallucinations a system produces, or their impact, said TechTarget. Solutions could involve going back to the original material fed into the system to check for inaccuracies, or using retrieval-augmented generation (RAG), which lets LLMs consult external, up-to-date information to improve accuracy.
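A minimal sketch of the retrieval-augmented generation idea, assuming a generic `retriever` and `llm` callable (both placeholders, not any particular library): the model is handed relevant source text alongside the question and instructed to stay within it.

```python
def answer_with_rag(question: str, retriever, llm, k: int = 3) -> str:
    """Sketch of retrieval-augmented generation. `retriever` and `llm`
    are placeholders for any vector store and model API."""
    # 1. Pull the k passages most relevant to the question from an
    #    external, up-to-date document store.
    passages = retriever.search(question, top_k=k)
    context = "\n\n".join(p.text for p in passages)

    # 2. Ask the model to ground its answer in the retrieved text
    #    rather than in its own statistical memory.
    prompt = (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say you do not know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return llm(prompt)
```

Grounding does not make hallucination impossible, but it gives the model correct material to draw on and explicit permission to say “I don’t know”.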

Another possibility is automated reasoning to fact-check answers straight away, a system Amazon introduced to its generative AI offerings last December. Rather than “guessing or predicting” an answer, automated reasoning uses logic and problem-solving techniques to check its validity.
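To give a flavour of the difference (a hand-rolled toy, not Amazon’s actual system), a checker can translate a model’s claim into variables and test it against hard rules rather than accepting confident-sounding text at face value. Every rule and value below is invented for illustration.

```python
# Toy flavour of logic-based answer checking -- a hand-rolled sketch,
# not Amazon's actual system. All rules and values are invented.
RULES = {
    # Policy: a refund is only allowed within 30 days, item unused.
    "refund_allowed": lambda f: f["days_since_purchase"] <= 30
                                and not f["item_used"],
}

def verify(claim: str, facts: dict) -> str:
    """Check a model's claim against hard rules instead of trusting
    its confident-sounding prose."""
    rule = RULES.get(claim)
    if rule is None:
        return "UNVERIFIABLE"   # no rule covers this claim
    return "VALID" if rule(facts) else "INVALID"

# If a chatbot confidently promises a refund on a 45-day-old purchase,
# the check rejects the answer before it reaches the user:
print(verify("refund_allowed", {"days_since_purchase": 45, "item_used": False}))
# -> INVALID
```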

Until a solution is found, hallucinations will remain an “inherent challenge” for LLMs, said TechRadar. The answer? “Fact-check everything.”
