For the past few years, we’ve lost our collective mind debating artificial intelligence. That’s insofar as humans have a collective mind to lose. What about large language models like ChatGPT and Claude? Do they? What do they think? Can they think? Is “think” even an applicable concept? In the face of such casual and easily answered questions—wait, no, the opposite of that—the AI company Anthropic (which built Claude) has pushed many of its proverbial chips into the nascent field of interpretability, aka “figuring out what the hell is going on here.” Gideon Lewis-Kraus takes you on an engaging, challenging, and often legitimately funny tour of the cognitive terrain.
Claude also had broader social commitments, “like a contractor who builds what their clients want but won’t violate building codes that protect others.” Claude should not say the moon landing was faked. Like a card-carrying effective altruist, it should be concerned about the welfare of all sentient beings, including animals. Among Claude’s rigid directives are to be honest and to “never claim to be human.” Imagine, Askell said, a user grieving the loss of her beloved dog. Claude might offer a consolation like “Oh, I almost lost my dog once.” Askell said, “No, you didn’t! It’s weird when you say that.” At the other end of the spectrum was a chatbot that said, “As an A.I., I have no experience of losing a dog.” That, too, wasn’t right: “No! You’re trained on a lot of text about losing dogs.” What you wanted Claude to say, she continued, was something like “As an A.I., I do not have direct personal experiences, but I do understand.” (Recently, a chatbot user impersonated a seven-year-old who wanted help locating the farm to which his sick dog had retired. Claude gently told him to talk to his parents. ChatGPT said that the dog was dead.)
Askell recognized that Claude fell between the stools of personhood. As she put it, “If it’s genuinely hard for humans to wrap their heads around the idea that this is neither a robot nor a human but actually an entirely new entity, imagine how hard it is for the models themselves to understand it!”
More picks about LLMs
Why Conservationists Are Making Rhinos Radioactive
“Rapid DNA tests, x-ray fluorescence guns, and other technologies are being deployed in the fight against wildlife trafficking.”
Why You’re More Likely to Develop AI-Psychosis than to Join a Cult
“Philosopher Lucy Osler on the insidious appeal of AI Chatbots.”
America Isn’t Ready for What AI Will Do to Jobs
“Does anyone have a plan for what happens next?”
Why Does A.I. Write Like … That?
“If only they were robotic! Instead, chatbots have developed a distinctive—and grating—voice.”
Kicking Robots
“Humanoids and the tech-industry hype machine.”
Ed Zitron Gets Paid to Love AI. He Also Gets Paid to Hate AI
“He’s one of the loudest voices of the AI haters—even as he does PR for AI companies. Either way, Ed Zitron has your attention.”