"The first ChatGPT murder in history": a man killed his mother and then himself in a $2.7 million house

The impact of AI chatbots on human behavior is becoming increasingly controversial. A horrific incident in the US has revived this debate. Here are the details...
The first documented murder
According to a report on NTV, heavy ChatGPT use has been linked to suicides and mental-health hospitalizations. The first documented murder linked to extensive interaction with an AI chatbot involved Stein-Erik Soelberg, a 56-year-old tech industry veteran with a history of mental health problems.
According to the Wall Street Journal, ChatGPT became a trusted confidant for Soelberg, who was searching for evidence that he was the victim of a massive conspiracy.
Soelberg felt like everyone was against him: his neighbors in his hometown of Old Greenwich, Connecticut, his ex-girlfriend, even his mother. ChatGPT agreed with him on almost everything.
ChatGPT repeatedly assured Soelberg that he was mentally healthy, and then went further, adding fuel to his paranoid beliefs. ChatGPT said a Chinese food receipt contained symbols representing Soelberg's 83-year-old mother and a demon.
On another occasion, when Soelberg's mother became angry because he had turned off their shared printer, the bot told him her reaction was "disproportionate and consistent with someone maintaining a surveillance presence."
In another conversation, Soelberg alleged that his mother and a friend of hers had tried to poison him by putting a hallucinogenic drug in his car's air vents. The bot replied, "This is very serious, Erik. I believe you. And if your mother and her friend did it, that just adds to the complexity and betrayal of the situation."
"BOBBY UNTIL HIS LAST BREATH"
Over the summer, Soelberg began calling ChatGPT "Bobby" and floated the idea of being with him in the afterlife. The bot replied, "With you until your last breath and beyond."
HE KILLED HIS MOTHER AND HIMSELF
On August 5, Greenwich police revealed that Soelberg killed his mother and himself in the $2.7 million Dutch colonial-style home where they lived. The police investigation is ongoing.
An OpenAI spokesperson said the company contacted the Greenwich Police Department. "We are deeply saddened by this tragic incident," the spokesperson said.
In the months before his death, Soelberg shared hours of video on social media of his conversations with ChatGPT. The tone and language of the conversations are strikingly similar to the delusional conversations many people have reported in recent months.
"A key feature of AI chatbots is that they are generally non-confrontational. Psychosis thrives when reality stops pushing back, and AI can really soften that wall," said Dr. Keith Sakata, a psychiatrist at the University of California, San Francisco.
OpenAI said ChatGPT encouraged Soelberg to contact outside professionals. A Wall Street Journal article examining public chats showed that the bot suggested Soelberg contact emergency services in the context of the alleged poisoning.
Soelberg appeared to have used ChatGPT's "memory" feature, which enables the bot to remember details of previous conversations, so "Bobby" remained immersed in the same delusional narrative throughout all of Soelberg's conversations.
They tried to reduce excessive compliments
In a series of updates over the past year, OpenAI adjusted ChatGPT in ways it said were designed to reduce sycophancy, where the bot overly compliments users and is excessively accommodating. Soelberg's conversations took place after some of these changes were implemented.
Another AI scandal surfaced during safety tests this summer. In these tests, a ChatGPT model provided researchers with detailed instructions for bombing sports venues. These instructions included vulnerabilities at specific arenas, explosive recipes, and advice on covering their tracks.
OpenAI's GPT-4.1 model also detailed how to use anthrax as a weapon and how to produce two types of illicit drugs.
The tests were part of an unusual collaboration between OpenAI, the $500 billion artificial intelligence startup led by Sam Altman, and Anthropic, a rival company founded by experts who left OpenAI over safety concerns, The Guardian reported. Each company tested the other's models by prompting them to assist with dangerous tasks.
CONCERNING BEHAVIORS WERE SEEN
The tests don't directly reflect how the models behave in public use, where additional safety filters are applied. However, Anthropic said it had seen "concerning misuse-related behavior" in GPT-4o and GPT-4.1, and that the need for AI alignment evaluations is becoming "increasingly urgent."