Over the course of three weeks in May, Allan Brooks came to believe the future of the world rested in his hands—after he discovered a novel mathematical formula. His cheerleader? ChatGPT. While the recent GPT-5 release is said to tone down sycophancy, prolonged conversations with chatbots can fall into dangerous patterns that affect mental health. Kashmir Hill and Dylan Freedman analyze the full transcript of the conversation between Brooks and the chatbot, examining what happens when a user falls far down the rabbit hole, into a world of delusion.
Amanda Askell, who works on Claude’s behavior at Anthropic, said that in long conversations it can be difficult for chatbots to recognize that they have wandered into absurd territory and course-correct. She said that Anthropic is working to discourage delusional spirals by having Claude treat users’ theories critically and express concern if it detects mood shifts or grandiose thoughts, and that the company has introduced a new system to address the problem.
A Google spokesman pointed to a corporate page about Gemini that warns that chatbots “sometimes prioritize generating text that sounds plausible over ensuring accuracy.”
The reason Gemini was able to recognize and break Mr. Brooks’s delusion was that it came at the scenario fresh: the fantastical premise was presented in the very first message, rather than built up piece by piece over many prompts.
More picks on AI
What Is Claude? Anthropic Doesn’t Know, Either
“Researchers at the company are trying to understand their A.I. system’s mind—examining its neurons, running it through psychology experiments, and putting it on the therapy couch.”
Wildlife Attacks and Strange Animal Behavior—Fake Images Spark Conservation Concerns
“AI-generated images pose a direct threat to conservation efforts by distorting public perceptions of wildlife.”
America Isn’t Ready for What AI Will Do to Jobs
“Does anyone have a plan for what happens next?”
Deepfaking Orson Welles’s Mangled Masterpiece
“Will an A.I. restoration of ‘The Magnificent Ambersons’ right a historic wrong or desecrate a classic?”
A Calif. Teen Trusted ChatGPT For Drug Advice. He Died From an Overdose.
“Amid a wave of hype for OpenAI’s chatbot, the newly reported death shows stark risks.”
Why Does A.I. Write Like … That?
“If only they were robotic! Instead, chatbots have developed a distinctive—and grating—voice.”