Atlantic article about AI writing crap

“What I learned is that modern LLMs are built in a way that is antagonistic to great writing; they are engineered to be rule-following teacher’s pets that always have the right answer in hand.”
 
Take a famous piece of great writing and run it through an LLM. See what it does with it.
 
Without having read the article, I'd say that a factor that's often overlooked is that writing can serve two distinct purposes: one is to convey information, the other is to entertain.

While bland writing will clearly never provide the same level of entertainment as carefully crafted, inventive writing, I think people often underestimate how much value that same careful crafting adds in conveying information. Where you place words, how you break up sentences, how you structure paragraphs - all of that affects which elements jump out at the reader, and how long the reader stays focused.

And those are things that aren't determined by rules. They're determined by the specific content of the sentence and the paragraph, and by the effect the writer is trying to achieve. Every word sounds different depending on the words around it, and each of those words depends in turn on its context. No model can predict every context, every word that will be used, and every purpose every writer will try to achieve.

But then again, the whole thing about LLMs seems to be "good enough is good enough, as long as we have some kind of output."
 
Having written a substantial part of the text of two scientific papers (never mention to your PI that you enjoy writing 🙄), I wholeheartedly agree - not so much for the results and methods sections, but certainly for the introduction and discussion.

And in business I’m often complimented on the clarity and understandability of my technical reports and briefing notes.
 