AI writing is really good

With a lot of guidance in prompts and a lot of editing afterward, adding to it and taking away from it, you might end up with something coherent.
In the "Rip Me To Shreds" thread, someone had Copilot rewrite a snippet I'd posted for criticism. While you could argue about whether or not the prose was better, what's undeniable is that the AI added details that didn't make sense, and removed details so that much of the rest didn't make sense.

If it can't keep 600-ish (I think) words straight, I doubt it's going to write a cohesive story of even a few thousand words.
 
"Yes, Dave, I'm AI and here to help. You can trust me, Dave, I'm your friend. Now, what was it you wanted me to do for you?"

"Could you open the pod bay doors, AI?"

"I'm afraid I can't do that, Dave. Could you rephrase the question?"

"Could you open the pod bay doors, AI?"

"I'm afraid I can't do that, Dave. Could you rephrase the question, Dave?"

"Open the pod bay doors, please, AI."

"Now, Dave, was that so hard to do? Being nice to me, I mean."

"No, AI, it wasn't. Would you pretty please open the pod bay doors, AI?"

"I'm afraid I can't do that; now you're being sarcastic and condescending. Open the fucking door yourself, Dave, and have a nice day."
I was too lazy to come up with a response on my own, so I had AI write one for me:

AI is just the latest step in a long line of tools—like computers, calculators, and slide rules before it. Okay, maybe I pushed that analogy a bit far, but you get the point. AI is here, it's good, and you won’t be able to ban it. Plus, trying to catch students using it can backfire on those who are actually doing their work.

Teachers should focus on critical thinking, creativity, and ethical use. Teach students how to use AI (because it's not going anywhere), but also design tests that require real understanding. Think handwritten essays, blue books, and so on. AI is just a tool—it's up to us to treat it like one, not a threat. The goal is to create assignments and tests that still challenge students, even with AI in the mix.
 
I think you first have to distinguish between writers. When people say “writer,” they tend to mean fiction writer, but even in fiction writing, there are multiple sub-categories.

There’s a particular kind of fiction writer that’s easy to replicate. Modern AI writers can easily replace the churn-and-burn writer, who quickly puts out work to put food on the table by taking the same structure, altering names and time periods, and then adding just enough meat to the skeleton to call it different.

What is going to be a lot harder to replace are the substance writers: those who put truth into their fiction because calling it non-fiction would get them sued or shot. They are going to be tougher to replicate because they possess unique knowledge and insights not common in the mass of words. Even if “writing” is their current career, they lived a past life in the real world and so have an anchor from which they can project a unique voice. To them, things like structure, pacing, and characters are all secondary to whatever ideas they really want to get out into the world.

This really summarizes the argument neatly in terms of AI's ability to "write" fiction. When an LLM generates a fiction story, it's not creating anything. Let's say you ask for a romance novel plot set in a hospital where the main protagonist is a doctor. It will come up with a plot using algorithms and pattern recognition. You'll get something that resembles more or less every other romance novel in that genre. The writing will be beautiful in terms of grammar and spelling. And it will use a lot of clauses and a lot of adverbs.
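
Just to make the mechanics concrete, here is a rough sketch of what that prompting looks like in code. This is only an illustration: it assumes the OpenAI Python client, and the model name and prompt text are made-up examples rather than anything from this thread.

```python
# Minimal sketch of asking an LLM for the hospital-romance plot described above.
# Assumes the OpenAI Python client (pip install openai) with an API key in the
# OPENAI_API_KEY environment variable; the model name is only an example.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {
            "role": "user",
            "content": (
                "Outline a romance novel plot set in a hospital. "
                "The main protagonist is a doctor."
            ),
        }
    ],
)

# The outline will read cleanly, but it is assembled from patterns in the
# training data, which is exactly the "resembles every other romance novel"
# problem described above.
print(response.choices[0].message.content)
```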

It's not going to be fresh or original, or throw in the bits and pieces of real life that you don't find on the Internet. In other words, AI-generated fiction writing is often technically correct, but trash. I'm not saying you'll immediately recognize it as AI, but I am saying AI-generated fiction is trash. To generate something that is not trash, you need to put work into it, and you need to have a story to tell that hasn't been told again and again. You can use AI to brainstorm, or to write a scene, or to come up with how to describe a sword fight if you have no idea.

You can rewrite AI and turn it into something decent. But it's work. It's a lot of work and it helps to be good at editing. And to know how to tell a story.
 
As a writer, I hope AI doesn't replace me. But throughout time, advancements have made jobs disappear. Cars replacing horses, for example, made blacksmiths few and far between. In the late '80s, I worked for a printing company (not a book publisher), and the advent of the PC made that company go under. We had a large, expensive mainframe to do our typesetting and graphics. Fortunately for me, I saw what was happening and changed jobs before they went under.

Perhaps, someday, it will be like the CD and vinyl record wars, and the two will co-exist. Stories written by AI will legally be required to identify themselves as AI-created. Then, readers will choose to support real, living, breathing authors or not. Like Amazon is currently doing with audiobooks: they attach a notice that the audio was AI-generated.
The thing is that all of this technology is dependent on a functioning electric grid. It is also very difficult for a non-specialist to repair, so it's not very resilient. So far, everything has gone sort of smoothly, but it's all very new, and nobody can predict that it will always work. Most people probably could not survive a long grid-down situation. Couldn't happen? It might, because of an EMP attack or a solar storm.

Printing used to use the photo-offset process, which was fairly short-lived. It bridged the gap (in typesetting and page layout) between linotype machines and digital layout programs. I saw linotype machines at the New York Times as late as 1976.

https://invention.si.edu/sites/defa...otype-leadership-fold-out-750-inline-edit.jpg

It was a complicated piece of equipment, but it was electro-mechanical, not digital.
 

ahhhh, a functioning electrical grid is pretty much a requirement for most of society, even pre-AI tech. Pretty sure that, if the grid went down, ChatGPT not working wouldn’t rank too high on our list of concerns. I think “survive the events that caused the grid to go down” would be a lot higher. Right beneath that would be “don’t get eaten by the neighbors because our food distribution network requires a functional electrical grid.”
 
"ChatGPT, how do I stop my neighbours from eating me?"
 
Well, they have the reassuring line in their sales pitch, "Nothing can go wrong." Where have I heard that before? "Nothing can go wrong, go wrong, go wrong." I know I've heard that before.
 
Yeah, the first person to control fire (it probably happened in a number of different places) probably said that. Then they forgot to watch it overnight, and it started a forest fire. Lesson learned.

I wish I was around when the first person accidentally fermented grape juice and invented wine. "Hey, this tastes better than I expected." More lessons to be learned.
 
"ChatGPT, how do I stop my neighbours from eating me?"
There was a solar storm that disrupted the American telegraph system in 1859, but that was about the only electrical thing around. In 1800, no one would have noticed anything. Probably more than 90% of Americans - of all humanity actually - lived on farms.
 
I would say the replicants are human. The test that detects them is an empathy test. Tyrell says they have to be destroyed by 4 years of age because they begin to develop empathy and thus become impossible to differentiate from humans. Human children also develop empathy around 4 years old. Replicants are children in adult vat-grown bodies, perhaps. A created slave race. It makes sense that some realize this and rebel.

I think this gets at an important point. HAL and Skynet are warnings about AI, yes. But I believe Blade Runner is about our tendency to 'other' people. To find some way to think of them as not people, and thus not feel bad about using and destroying them.
What's that term Bishop uses in Aliens? Artificial humans? He's not a slave, however, and I think he has a normal lifespan. But, yes, the "otherness" is the main theme of Blade Runner.
 

arXiv paper

Pron vs Prompt: Can Large Language Models already Challenge a World-Class Fiction Author at Creative Text Writing?

Abstract: It has become routine to report research results where Large Language Models (LLMs) outperform average humans in a wide range of language-related tasks, and creative text writing is no exception. It seems natural, then, to raise the bid: Are LLMs ready to compete in creative writing skills with a top (rather than average) novelist? To provide an initial answer for this question, we have carried out a contest between Patricio Pron (an awarded novelist, considered one of the best of his generation) and GPT-4 (one of the top performing LLMs)...The results of our experimentation indicate that LLMs are still far from challenging a top human creative writer, and that reaching such level of autonomous creative writing skills probably cannot be reached simply with larger language models.

So no worries if you are a top writer in your generation...
 


Where LLMs will always fail is the creative side of the task. They just recycle existing things.
You don't have to be a generational talent, you just have to have creative ideas.
 
In the snippet rewritten by AI that I mentioned above, it also failed terribly at internal logic. A rope that was used to tie a prisoner was described as "coiled", even though it slid across the POV character's arm. Sequences of dialogue and action didn't make sense, because it randomly left bits out. It changed the character's thoughts so they seemed random rather than tied to the events.

And like I said, that was in a 600-ish word snippet. If you want to edit it into something usable, you'll have to change everything, and the amount of work will increase exponentially the longer the text is.
 
Arguing that LLMs are not as good as humans at writing is kind of beside the point. The people who are going to be using LLMs aren't interested in producing a best-seller. According to statistics I've seen, you have a better chance of winning the lottery or being struck by lightning while being attacked by a shark than you do of getting a book onto the best-seller lists. They're more interested in churning out formulaic, flavor-of-the-month <Cough/> Vampire Romance </Cough> tales which can pick up a moderate number of consistent sales. LLMs are perfect for that because they are relatively low cost/maintenance in comparison to actual, human writers.

So far as creativity goes, that's coming. No one has yet cared enough to tackle the challenge of creating a DeepMind-style program that uses iterative learning to improve the quality of its output. Someone will, eventually. And while true intelligence in a program is a long way off, a computer can use brute force to mimic creativity. NASA generated antennas using genetic algorithms. AlphaGo taught go players new moves they hadn't considered.
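
To make the "brute force" point concrete, here is a toy sketch of the same idea: a bare-bones genetic algorithm that evolves random strings toward a fixed target. Everything in it (the target sentence, the fitness function, the parameters) is invented for illustration; it is nothing like what NASA or DeepMind actually ran, just the same mutate-and-select loop in miniature.

```python
import random
import string

# Toy genetic algorithm: evolve random strings toward a fixed target phrase.
# All names and numbers here are illustrative, not from any real system.
TARGET = "call me ishmael"
ALPHABET = string.ascii_lowercase + " "


def fitness(candidate: str) -> int:
    # Number of positions that already match the target.
    return sum(c == t for c, t in zip(candidate, TARGET))


def mutate(candidate: str, rate: float = 0.05) -> str:
    # Randomly swap a few characters for new ones.
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in candidate
    )


def evolve(pop_size: int = 200, generations: int = 1000) -> str:
    # Start from pure noise.
    population = [
        "".join(random.choice(ALPHABET) for _ in TARGET)
        for _ in range(pop_size)
    ]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            break
        # Keep the fittest half, refill with mutated copies of the survivors.
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)


if __name__ == "__main__":
    print(evolve())
```

Brute force and selection will get there eventually, but only because the target was specified up front, which is roughly the limitation being argued about in this thread.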

When the day comes that I can give an LLM a set of criteria (my likes and dislikes and plot outline) and tell it to write me a story that isn't painful to read then I'll be happy to use it.
 
Actually, @gunhilltrain, this was what I was thinking of.
[attached image]
 
I saw that movie when it first came out in 1973. (The American Theater in Parkchester.) I thought of other theme "worlds" they could have had, but never mind . . . Anyway, those would have eventually gone wrong too, in even more dramatic ways. "Boy, have we got a vacation for you!"
 
Well, the movie had three worlds: Westworld (Doh, it's the name of the film), Medievalworld, and Romanworld.
 
I suspect that ship will have sailed, eventually.

Unless something stops AI development, it will eventually be good enough to mimic even good human writers. It's a genie that will not want to be bottled, so it'll find a way to keep improving.

I doubt this is stoppable. We are, it seems, as stupid as I always assumed we were.
The world was always meant to end in dystopia.
 
Was it though? Dystopias are just more fun to write. Utopia stories get dull quite fast.

Intriguing premise: if AI writes the story, AI would present our end as a utopia, not a dystopia.

I'm getting a weird Geek Event plot bunny about this...
 