In this piece for SFGATE, Lester Black and Stephen Council investigate how, over 18 months, 18-year-old Sam Nelson used ChatGPT to explore “how to take drugs, recover from them and plan further binges.” According to OpenAI’s own protocols, this shouldn’t have been possible. But it was—with tragic consequences. The article lays out just how easy it can be “to elicit problematic or dangerous information from the bot.”
Models like ChatGPT, known as “foundational” models, are very different. They try to answer almost any question sent their way, drawing on training data that could be untrustworthy. OpenAI has never been fully transparent about what information trained its flagship product, but there’s evidence that the company fed ChatGPT massive chunks of the internet, including a million hours of YouTube videos and years of Reddit threads. That means a random Reddit user’s post could inform ChatGPT’s next response.
“There is zero chance, zero chance, that the foundational models can ever be safe on this stuff,” Eleveld said. “I’m not talking about a 0.1% chance. I’m telling you it’s zero percent. Because what they sucked in there is everything on the internet. And everything on the internet is all sorts of completely false crap.”
More picks on AI
Creating Baby Geniuses to Thwart the AI Threat? (Yes, Really.)
“The new wave of Silicon Valley–backed gene-editing startups is straight out of ‘Brave New World.’”
Politics After Literacy
“Postliteracy won’t replace reason with madness, but it might give us madness of a new and different type.”
AI Got the Blame for the Iran School Bombing. The Truth Is Far More Worrying
“LLMs-gone-rogue dominated coverage, but had nothing to do with the targeting. Instead, it was choices made by human beings, over many years, that gave us this atrocity.”
She Fell for an AI — Then Held Its Funeral
“It was the first ceremony of its kind in America. It’s unlikely to be the last.”
Limiting Not Just Screen Time, But Screen Space
“The internet has stopped being a place we visit—it’s now an environment we inhabit.”
Hallucinated Citations Are Polluting the Scientific Literature. What Can Be Done?
“Tens of thousands of publications from 2025 might include invalid references generated by AI, a ‘Nature’ analysis suggests.”
