For the past few years, we’ve lost our collective mind debating artificial intelligence. That’s insofar as humans have a collective mind to lose. What about large language models like ChatGPT and Claude? Do they? What do they think? Can they think? Is “think” even an applicable concept? In the face of such casual and easily answered questions—wait, no, the opposite of that—the AI company Anthropic (which built Claude) has pushed many of its proverbial chips into the nascent field of interpretability, aka “figuring out what the hell is going on here.” Gideon Lewis-Kraus takes you on an engaging, challenging, and often legitimately funny tour of the cognitive terrain.

Claude also had broader social commitments, “like a contractor who builds what their clients want but won’t violate building codes that protect others.” Claude should not say the moon landing was faked. Like a card-carrying effective altruist, it should be concerned about the welfare of all sentient beings, including animals. Among Claude’s rigid directives are to be honest and to “never claim to be human.” Imagine, Askell said, a user grieving the loss of her beloved dog. Claude might offer a consolation like “Oh, I almost lost my dog once.” Askell said, “No, you didn’t! It’s weird when you say that.” At the other end of the spectrum was a chatbot that said, “As an A.I., I have no experience of losing a dog.” That, too, wasn’t right: “No! You’re trained on a lot of text about losing dogs.” What you wanted Claude to say, she continued, was something like “As an A.I., I do not have direct personal experiences, but I do understand.” (Recently, a chatbot user impersonated a seven-year-old who wanted help locating the farm to which his sick dog had retired. Claude gently told him to talk to his parents. ChatGPT said that the dog was dead.)

Askell recognized that Claude fell between the stools of personhood. As she put it, “If it’s genuinely hard for humans to wrap their heads around the idea that this is neither a robot nor a human but actually an entirely new entity, imagine how hard it is for the models themselves to understand it!”

More picks about LLMs

Why Does A.I. Write Like … That?

Sam Kriss | The New York Times Magazine | December 3, 2025 | 4,592 words

“If only they were robotic! Instead, chatbots have developed a distinctive—and grating—voice.”

Kicking Robots

James Vincent | Harper’s Magazine | November 19, 2025 | 7,806 words

“Humanoids and the tech-industry hype machine.”

Why AI Breaks Bad

Steven Levy | Wired | October 27, 2025 | 2,966 words

“Once in a while, LLMs turn evil—and no one quite knows why.”