In this piece for SFGATE, Lester Black and Stephen Council investigate how, over 18 months, 18-year-old Sam Nelson used ChatGPT to explore “how to take drugs, recover from them and plan further binges.” According to OpenAI’s own protocols, this shouldn’t have been possible. But it was—with tragic consequences. The article lays out just how easy it can be “to elicit problematic or dangerous information from the bot.”

Models like ChatGPT, known as “foundational” models, take a different approach: they try to answer almost any question sent their way, drawing on training data that could be untrustworthy. OpenAI has never provided full transparency about what information trained its flagship product, but there’s evidence that the company fed ChatGPT massive chunks of the internet, including a million hours of YouTube videos and years of Reddit threads. That means a random Reddit user’s post could inform ChatGPT’s next response.

“There is zero chance, zero chance, that the foundational models can ever be safe on this stuff,” Eleveld said. “I’m not talking about a 0.1% chance. I’m telling you it’s zero percent. Because what they sucked in there is everything on the internet. And everything on the internet is all sorts of completely false crap.”

More picks on AI

Recursive Resemblance

Patrick R. Crowley | Artforum | March 1, 2026 | 2,882 words

“On the feedback loops of mimesis, from the ancients to AI.”

Why Conservationists Are Making Rhinos Radioactive

Matthew Ponsford | MIT Technology Review | February 24, 2026 | 2,663 words

“Rapid DNA tests, x-ray fluorescence guns, and other technologies are being deployed in the fight against wildlife trafficking.”

What Is Claude? Anthropic Doesn’t Know, Either

Gideon Lewis-Kraus | The New Yorker | February 9, 2026 | 10,268 words

“Researchers at the company are trying to understand their A.I. system’s mind—examining its neurons, running it through psychology experiments, and putting it on the therapy couch.”