Did the chatbot Claude select a girls’ school in Iran as a bombing target? In this searing analysis, Kevin T. Baker argues that this question obscures the real concern about AI-fueled air wars. The US military’s quest to collapse the “kill chain,” or “the steps between detecting something and destroying it,” is the result of human decision-making intended to take humans out of decision-making, leaving ample room for error and tragedy:
The target package for the Shajareh Tayyebeh school presented it as a military facility. Lucy Suchman, whose 1987 book Plans and Situated Actions remains the sharpest account of how formal procedures obscure the work that actually produces their outcomes, would not have been surprised. Plans always look complete afterward. They achieve completeness by filtering out everything that wasn’t legible to their categories. This package looked like every other package in the queue. But outside the package, the school appeared in Iranian business listings. It was visible on Google Maps. A search engine could have found it. Nobody searched. At 1,000 decisions an hour, nobody was going to. A former senior government official asked the obvious question: “The building was on a target list for years. Yet this was missed, and the question is how.” How indeed.
More picks about technology
Limiting Not Just Screen Time, But Screen Space
“The internet has stopped being a place we visit—it’s now an environment we inhabit.”
Is There Life After Smartphones?
“This year, I set out to better understand what was driving this shift — what was causing so many young people to feel fed up with their phones.”
Opposing ICE Might Save the Country. It Could Also Ruin Your Life.
“For months, lone vibe coder Rafael Concepcion has obsessively built tools to counter the federal immigration crackdown—pivoting as he’s been outmatched. He’s also lost his job and become a target.”
