Critiques of ChatGPT’s writing appeared almost as soon as the service became publicly available. But most of those criticisms have been fairly vague: variations of “The writing just doesn’t feel human.” I certainly agree, but I have also been searching for a clearer argument about why exactly ChatGPT’s writing fails on a technical level.

Laura Hartenberger at Noema delivers exactly that in this cogent, masterful analysis and critique of the content these programs produce. In doing so, she also wrestles with the central question that has inevitably cropped up amid the generative AI hand-wringing: What is good writing, anyway?

When we talk about good writing, what exactly do we mean? As we explore new applications for large language models and consider how well they can optimize our communication, AI challenges us to reflect on the qualities we truly value in our prose. How do we measure the caliber of writing, and how well does AI perform?

In school, we learn that good writing is clear, concise and grammatically correct — but surely, it has other qualities, too. Perhaps the best writing also innovates in form and content; or perhaps it evokes an emotional response in its readers; or maybe it employs virtuosic syntax and sophisticated diction. Perhaps good writing just has an ineffable spark, an aliveness, a know-it-when-you-see-it quality. Or maybe good writing projects a strong sense of voice.

But then, what makes a strong voice, and why does ChatGPT’s voice so often fall flat?