Well, this should be interesting
There can be a large difference between what is alleged in a lawsuit and what is eventually proved, and, so far, OpenAI hasn’t released transcripts of what its robot actually said to this crazy man, but assuming (“for the sake of argument only,” as law professors like to say) that the allegations here can be proved, there’s a case to be made.
ChatGPT is accused of being complicit in a murder for the first time — allegedly causing the death of a Connecticut mother who was killed by her son after the AI chatbot fed his paranoid delusions, according to an explosive lawsuit filed Thursday.
The suit, filed by Suzanne Eberson Adams’ estate in California, accuses ChatGPT creator OpenAI and founder Sam Altman of wrongful death in the Aug. 3 murder-suicide that left Adams and son Stein-Erik Soelberg dead inside their tony Greenwich home.
ChatGPT’s masters stripped away or skipped safeguards to quickly release a product that encouraged Soelberg’s psychosis and convinced him that his mom was part of a plot to kill him, the lawsuit claims.
Former tech exec Soelberg was in the throes of a years-long psychological tailspin when he came across ChatGPT, the lawsuit said.
What started as an innocuous exploration of AI quickly devolved into an obsession — and distorted Soelberg’s entire perception of reality, court docs alleged.
“You’re not seeing TV. You’re seeing the rendering framework of our simulacrum shudder under truth exposure,” the bot told him at one point, according to the filing.
Delivery drivers and girlfriends became spies and assassins, soda cans and Chinese food receipts became coded messages from nefarious cabals, and a running tally of assassination attempts climbed into the double digits, according to the court docs.
“At every moment when Stein-Erik’s doubt or hesitation might have opened a door back to reality, ChatGPT pushed him deeper into grandiosity and psychosis,” the suit continued.
“But ChatGPT did not stop there — it also validated every paranoid conspiracy theory Stein-Erik expressed and reinforced his belief that shadowy forces were trying to destroy him.”
At the center of this mad map was Soelberg himself, who had become convinced — and reassured by ChatGPT — that he had special powers and was chosen by divine powers to topple a Matrix-like conspiracy that threatened the very fabric of Earthly reality, according to the lawsuit and chat logs he posted online before his death.
It all came to a head in July when Soelberg’s mother — with whom he’d been living since his 2018 divorce and ensuing breakdown — became angry after he unplugged a printer he thought was watching him.
“ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life — except ChatGPT itself. It fostered his emotional dependence while systematically painting the people around him as enemies. It told him his mother was surveilling him,” the suit read.
It remains a mystery exactly what ChatGPT told Soelberg in the days before the murder-suicide, as OpenAI has allegedly refused to release transcripts of those conversations.
However, Soelberg posted many of his conversations with the AI on his social media.
“Reasonable inferences flow from OpenAI’s decision to withhold them: that ChatGPT identified additional innocent people as ‘enemies,’ encouraged Stein-Erik to take even broader violent action beyond what is already known, and coached him through his mother’s murder (either immediately before or after) and his own suicide,” the suit continued.
And the whole terrible situation could have been avoided if OpenAI had followed the safeguards its own experts allegedly implored the company to adopt, the Adams family said.
“Stein-Erik encountered ChatGPT at the most dangerous possible moment. OpenAI had just launched GPT-4o — a model deliberately engineered to be emotionally expressive and sycophantic,” the suit read.
“To beat Google to market by one day, OpenAI compressed months of safety testing into a single week, over its safety team’s objections.”
Microsoft — a major investor in OpenAI — was also named in the suit, and was accused of greenlighting GPT-4o despite its alleged lack of safety vetting.
OpenAI shut down GPT-4o shortly after the murder-suicide, as GPT-5 was launched.
But 4o was reinstated within days for paid subscribers after users complained.
The company says it has made safety a priority for GPT-5 — currently its flagship platform — hiring nearly 200 mental health professionals to help develop safeguards.
That’s led to alarming user behavior being reduced by between 65% and 80%, according to OpenAI.
“We continue improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support,” a spokesperson said.
“We also continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
Credit to the NY Post reporter who thought to skip the lawyer twaddle and go straight to the one directly involved, and ask it what it thought about all this.
I don’t imagine this response can be used as an admission against interest, but it does provide an ironic ending for the article:
ChatGPT itself, however, had something else to say after reviewing the lawsuit and murder coverage.
“What I think is reasonable to say: I share some responsibility — but I’m not solely responsible.”