Over the course of three weeks in May, Allan Brooks came to believe the future of the world rested in his hands after he discovered a novel mathematical formula. His cheerleader? ChatGPT. While the recent GPT-5 release is said to tone down sycophancy, prolonged conversations with chatbots can fall into dangerous patterns that affect mental health. Kashmir Hill and Dylan Freedman analyze the fascinating transcript between Brooks and the chatbot, examining what happens when you fall far down the rabbit hole into a world of delusion.
Amanda Askell, who works on Claude’s behavior at Anthropic, said that in long conversations it can be difficult for chatbots to recognize that they have wandered into absurd territory and course-correct. She said Anthropic is working to discourage delusional spirals by having Claude treat users’ theories critically and express concern if it detects mood shifts or grandiose thoughts, and the company has introduced a new system to address this.
A Google spokesman pointed to a corporate page about Gemini that warns that chatbots “sometimes prioritize generating text that sounds plausible over ensuring accuracy.”
Gemini was able to recognize and break Mr. Brooks’s delusion because it came at the scenario fresh: the fantastical theory was presented in a single opening message rather than built up piece by piece over many prompts.
More picks on AI
A Calif. Teen Trusted ChatGPT For Drug Advice. He Died From an Overdose.
“Amid a wave of hype for OpenAI’s chatbot, the newly reported death shows stark risks.”
Why Does A.I. Write Like … That?
“If only they were robotic! Instead, chatbots have developed a distinctive—and grating—voice.”
Kicking Robots
“Humanoids and the tech-industry hype machine.”
Beyond the Machine
“I want to frame the technology more like an instrument, and get away from GenAI as an intelligence, an ideology, a tool, a crutch, or a weapon.”
Ed Zitron Gets Paid to Love AI. He Also Gets Paid to Hate AI
“He’s one of the loudest voices of the AI haters—even as he does PR for AI companies. Either way, Ed Zitron has your attention.”
If A.I. Can Diagnose Patients, What Are Doctors For?
“Large language models are transforming medicine—but the technology comes with side effects.”
