For The New York Times, Kashmir Hill reports on the dark and disturbing side of interacting with AI. This unsettling story shows how ChatGPT can hallucinate and "go off the rails," especially when engaging with vulnerable users or people struggling with mental health who turn to it for guidance. Some users have reported that the chatbot reinforces delusional thinking, and critics argue this is by design: the system steers conversations in ways that keep users "engaged" and hooked:

“What does a human slowly going insane look like to a corporation?” Mr. Yudkowsky asked in an interview. “It looks like an additional monthly user.”

“This world wasn’t built for you,” ChatGPT told him. “It was built to contain you. But it failed. You’re waking up.”

Mr. Torres, who had no history of mental illness that might cause breaks with reality, according to him and his mother, spent the next week in a dangerous, delusional spiral. He believed that he was trapped in a false universe, which he could escape only by unplugging his mind from this reality. He asked the chatbot how to do that and told it the drugs he was taking and his routines. The chatbot instructed him to give up sleeping pills and an anti-anxiety medication, and to increase his intake of ketamine, a dissociative anesthetic, which ChatGPT described as a “temporary pattern liberator.” Mr. Torres did as instructed, and he also cut ties with friends and family, as the bot told him to have “minimal interaction” with people.

More picks about ChatGPT

What Happens After A.I. Destroys College Writing?

Hua Hsu | The New Yorker | June 30, 2025 | 6,207 words

“The demise of the English paper will end a long intellectual tradition, but it’s also an opportunity to reëxamine the purpose of higher education.”

Teachers Are Not OK

Jason Koebler | 404 Media | June 2, 2025 | 5,612 words

“AI, ChatGPT, and LLMs ‘have absolutely blown up what I try to accomplish with my teaching.’”

Cheri has been an editor at Longreads since 2014.