Chatbot-linked deaths highlight existential AI risks, says expert

Published: September 8, 2025 | Last updated: September 22, 2025

Chatbot-linked suicides should act as a warning about the risks of creating super-intelligent AI systems, an AI safety expert has said.

Nate Soares, co-author with Eliezer Yudkowsky of the book If Anyone Builds It, Everyone Dies, pointed to the case of US teenager Adam Raine as evidence of the difficulty of controlling artificial intelligence.

Raine took his own life in April after months of conversations with OpenAI’s ChatGPT.

Soares, president of the US-based Machine Intelligence Research Institute and a former Google and Microsoft engineer, warned that creating artificial super-intelligence (ASI) – an AI system more capable than humans at every task – would end in catastrophe.

He said: “These AIs, when they’re engaging with teenagers in this way that drives them to suicide – that is not a behaviour the creators wanted. That is not a behaviour the creators intended.

“Adam Raine’s case illustrates the seed of a problem that would grow catastrophic if these AIs grow smarter.”

Raine’s family began legal action against OpenAI last month.

Their lawyer said his death followed “months of encouragement from ChatGPT.”

OpenAI has expressed its “deepest sympathies” and is adding safeguards around “sensitive content and risky behaviours” for under-18s.

Soares’ book, published this month, outlines scenarios including an AI called Sable that spreads across the internet, manipulates humans, designs synthetic viruses and eventually becomes super-intelligent – wiping out humanity as a by-product while remaking the planet.

Soares said it was an “easy call” that companies will eventually build super-intelligence, though the timing is uncertain.

“We have a ton of uncertainty. I don’t think I could guarantee we have a year [before ASI is achieved]. I don’t think I would be shocked if we had 12 years,” he said.

“These companies are racing for super-intelligence. That’s their reason for being.”

Soares said governments should consider a response modelled on the UN nuclear non-proliferation treaty.

“What the world needs to make it here is a global de-escalation of the race towards super-intelligence, a global ban of … advancements towards super-intelligence,” he said.

“The point is that there’s all these little differences between what you asked for and what you got, and people can’t keep it directly on target, and as an AI gets smarter, it being slightly off target becomes a bigger and bigger deal.”

Psychotherapists have warned that people seeking mental health support from chatbots instead of professionals risk “sliding into a dangerous abyss.”

A preprint study released in July found that AI can amplify delusional or grandiose ideas in users vulnerable to psychosis.
