For the past few years, we’ve lost our collective mind debating artificial intelligence. That’s insofar as humans have a collective mind to lose. What about large language models like ChatGPT and Claude? Do they? What do they think? Can they think? Is “think” even an applicable concept? In the face of such casual and easily answered questions—wait, no, the opposite of that—the AI company Anthropic (which built Claude) has pushed many of its proverbial chips into the nascent field of interpretability, aka “figuring out what the hell is going on here.” Gideon Lewis-Kraus takes you on an engaging, challenging, and often legitimately funny tour of the cognitive terrain.
Claude also had broader social commitments, “like a contractor who builds what their clients want but won’t violate building codes that protect others.” Claude should not say the moon landing was faked. Like a card-carrying effective altruist, it should be concerned about the welfare of all sentient beings, including animals. Among Claude’s rigid directives are to be honest and to “never claim to be human.” Imagine, Askell said, a user grieving the loss of her beloved dog. Claude might offer a consolation like “Oh, I almost lost my dog once.” Askell said, “No, you didn’t! It’s weird when you say that.” At the other end of the spectrum was a chatbot that said, “As an A.I., I have no experience of losing a dog.” That, too, wasn’t right: “No! You’re trained on a lot of text about losing dogs.” What you wanted Claude to say, she continued, was something like “As an A.I., I do not have direct personal experiences, but I do understand.” (Recently, a chatbot user impersonated a seven-year-old who wanted help locating the farm to which his sick dog had retired. Claude gently told him to talk to his parents. ChatGPT said that the dog was dead.)
Askell recognized that Claude fell between the stools of personhood. As she put it, “If it’s genuinely hard for humans to wrap their heads around the idea that this is neither a robot nor a human but actually an entirely new entity, imagine how hard it is for the models themselves to understand it!”
More picks about LLMs
Creating Baby Geniuses to Thwart the AI Threat? (Yes, Really.)
“The new wave of Silicon Valley–backed gene-editing startups is straight out of ‘Brave New World.’”
Politics After Literacy
“Postliteracy won’t replace reason with madness, but it might give us madness of a new and different type.”
AI Got the Blame for the Iran School Bombing. The Truth Is Far More Worrying
“LLMs-gone-rogue dominated coverage, but had nothing to do with the targeting. Instead, it was choices made by human beings, over many years, that gave us this atrocity.”
She Fell for an AI — Then Held Its Funeral
“It was the first ceremony of its kind in America. It’s unlikely to be the last.”
Limiting Not Just Screen Time, But Screen Space
“The internet has stopped being a place we visit—it’s now an environment we inhabit.”
Hallucinated Citations Are Polluting the Scientific Literature. What Can Be Done?
“Tens of thousands of publications from 2025 might include invalid references generated by AI, a ‘Nature’ analysis suggests.”
