
Sam Altman-Led OpenAI Faces Legal Action Over ChatGPT 'Hallucination' That Accused Man Of Murder


A Norwegian man has taken legal action against Sam Altman-led OpenAI, alleging that ChatGPT falsely claimed he had murdered his children.

What Happened: Arve Hjalmar Holmen discovered the alleged fabrication after asking ChatGPT, "Who is Arve Hjalmar Holmen?" 

The AI chatbot responded with an invented story that he had murdered two sons, attempted to kill a third, and was sentenced to 21 years in prison.

"Some think that there is no smoke without fire—the fact that someone could read this output and believe it is true is what scares me the most," Holmen said, adding that the hallucination was damaging to his reputation, reported BBC.

The digital rights organization Noyb, which is representing Holmen in the complaint, argues that ChatGPT's response is defamatory and violates European data protection laws regarding the accuracy of personal information.

See Also: Apple's New Passwords App Left Users Exposed To Phishing Attacks For Months Due To Serious HTTP Flaw

"You can't just spread false information and in the end add a small disclaimer saying that everything you said may just not be true," said Noyb lawyer Joakim Söderberg.

Microsoft Corp. (NASDAQ:MSFT)-backed OpenAI responded by saying the issue stemmed from an older version of ChatGPT and that newer models, including those with real-time web search, offer improved accuracy, the report noted.

"We continue to research new ways to improve the accuracy of our models and reduce hallucinations," the company said in a statement.


Why It's Important: The case highlights growing concerns over AI hallucinations—when generative models produce false yet convincing content.

Previously, Yann LeCun, Meta Platforms Inc.’s (NASDAQ:META) chief AI scientist and a “Godfather of AI,” said that hallucinations stem from the autoregressive prediction process used by large language models: each time the model generates a word or token, there is a chance it deviates from a sensible response, and those small errors compound, gradually leading the conversation astray.
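LeCun's compounding-error argument can be illustrated with a back-of-the-envelope sketch. The snippet below is only an illustration, not a description of how any specific model behaves: the per-token error rate EPSILON is an assumed number, and treating token errors as independent is a deliberate simplification.

```python
# Illustrative sketch of the compounding-error argument (assumed numbers).
# If each generated token independently had an error probability EPSILON,
# the probability that an N-token response contains no erroneous token
# would be (1 - EPSILON) ** N, which shrinks exponentially with length.

EPSILON = 0.01  # assumed per-token error rate, chosen only for illustration

for n_tokens in (50, 200, 500, 1000):
    p_clean = (1 - EPSILON) ** n_tokens
    print(f"{n_tokens:>5} tokens -> P(no error) = {p_clean:.1%}")

# With these assumed numbers, a 500-token answer would stay error-free
# well under 1% of the time, which captures the intuition that small
# per-token deviations can accumulate over a long generation.
```

Real models do not make independent per-token mistakes, so the numbers themselves mean little; the sketch only conveys why long autoregressive generations give errors more opportunities to accumulate.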

In 2023, Sundar Pichai, CEO of Alphabet Inc. (NASDAQ:GOOG) (NASDAQ:GOOGL) also acknowledged that AI technology, in general, is still grappling with hallucination problems.

Earlier this year, Apple Inc. (NASDAQ:AAPL) temporarily halted its Apple Intelligence news summary tool in the U.K. after it generated inaccurate headlines and misrepresented them as factual.

Google’s Gemini AI has also struggled with hallucinations: last year, it bizarrely advised using glue to attach cheese to pizza and claimed that geologists suggest people consume one rock per day.


Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.


