“I don’t carry a cult tailored to agree with me in my pocket, but I do carry Claude,” philosopher Lucy Osler tells Kristen French in this Nautilus interview. Together, they explore how personalized chatbots can become intimate echo chambers that place us at the center of a private, self-reinforcing world. This dynamic, Osler argues, can generate “mutual hallucinations”—shared fantasies that emerge from the conversation itself—raising new concerns about AI and mental health.
The woman, who had no prior history of psychosis, was hospitalized and treated at the University of California, San Francisco, where researchers documented her diagnosis and treatment. Hers was the first case of AI-associated psychosis reported in a peer-reviewed journal, but it was just one of many reported in the press, and it will surely not be the last. In 2021, a man attempted to assassinate Queen Elizabeth II at Windsor Castle, a mission encouraged by his AI Replika companion. More recently, a number of teenagers have died by suicide, reportedly encouraged by their chatbot companions.
More picks on AI
Teacher v Chatbot: My Journey Into the Classroom in the Age of AI
“Throwing AI into the mix felt like downing a coffee in the middle of a panic attack.”
Recursive Resemblance
“On the feedback loops of mimesis, from the ancients to AI.”
Why Conservationists Are Making Rhinos Radioactive
“Rapid DNA tests, x-ray fluorescence guns, and other technologies are being deployed in the fight against wildlife trafficking.”
What Is Claude? Anthropic Doesn’t Know, Either
“Researchers at the company are trying to understand their A.I. system’s mind—examining its neurons, running it through psychology experiments, and putting it on the therapy couch.”
Wildlife Attacks and Strange Animal Behavior—Fake Images Spark Conservation Concerns
“AI-generated images pose a direct threat to conservation efforts by distorting public perceptions of wildlife.”
America Isn’t Ready for What AI Will Do to Jobs
“Does anyone have a plan for what happens next?”
