AI or LLM

Too much of Wikipedia today is also GenAI
I’m not saying that’s untrue, but

{citation needed}?

What objective data or analysis on this subject have you seen? Wikipedia has enough ESL contributors and translation between different languages that I won’t take “it just seems that way to me” as persuasive.
 
The internet as a whole is filling up with AI slop. Wikipedia is at least human curated, so AI slop gets cut when people notice. Searching Wikipedia gets a LOT less AI slop than just asking major search engines to search the web as a whole.
 
I do not use AI in my writing (but I sure use it for work letters!). Honestly, I never thought to do that, because it's dirty stuff I write here, and I thought there were restrictions on using AI for 18+ content.

I have however used AI to help me understand something related to the story. I almost always just do my own research, but sometimes I just can't find what I need. For example, I wanted to better understand how zero gravity might impact food production for a fledgling space colony. I did not find enough free and easily accessible information for me to learn what I needed, so I asked some questions to AI in plain language and got some good answers.
 
Right. You didn't think about writing it in. The choices an author makes, or the things they omit, even subconsciously, add texture to their story. Not every East Bay story needs to mention Mount Diablo, just as not every story set in Japan needs a Mount Fuji reference.

"J.R.R. Tolkien has become a sort of mountain, appearing in all subsequent fantasy in the way that Mt. Fuji appears so often in Japanese prints. Sometimes it’s big and up close. Sometimes it’s a shape on the horizon. Sometimes it’s not there at all, which means that the artist either has made a deliberate decision against the mountain, which is interesting in itself, or is in fact standing on Mt. Fuji."
-Terry Pratchett
 
The internet as a whole is filling up with AI slop. Wikipedia is at least human curated, so AI slop gets cut when people notice. Searching Wikipedia gets a LOT less AI slop than just asking major search engines to search the web as a whole.
It's a conspirAIcy I tells ya'!!!
 
As an example of a reason not to use AI: there's an author I used to ghostwrite for. He started using AI, sending me the manuscript to be De-AI-ed and reworked. Instead of paying me thousands of dollars a year to ghost his stories, he's paying almost as much for me to De-AI them, plus hundreds of dollars a year more for the privilege of publishing one extra book a year. In truth, I believe he is paying me more now than in previous years. I'll know for sure when I get my 1099 in January. But we did get four novels published this year, not three. So, in the long run, maybe he is making more.
 
They can be useful for research purposes, particularly when you need something with multiple criteria.
On a WiP I needed a medical condition matching a certain set of criteria, so I asked Grok and got an answer. Then I went and researched the condition myself. Trust but verify, and all...
 
I don’t know whether anyone but myself will think this is funny, but this made me think

“Not every Chekhov story needs a Mt. Shotgun” 🤣

Perhaps not Mt. Shotgun, but maybe Mt. Pistol, Mt. Airsoft, Mt. Rifle, Mt. Winchester...

My mentor didn't even call it a gun; he called it a needle, and said that a character should be stabbed with that needle.

All I can say is Chekhov is the bloodiest and most violent sunuvabitch to have ever existed.
 
I recently read or saw an article about AI which explained that AI is not a big index of logically derived information but, rather, a probabilities engine. Oversimplified: it scours the web for data that appears related to the prompt, selects the dominant relevant copy, sends it through a language processor, and spits out the result. The AI expert being interviewed said outright, "There will always be errors."

He didn't explain - or I missed it - hallucinations: AI's prime directive to always present itself as authoritative while it's pumping out bullshit.
 
I wrote "...authoritative while it's pumping out bullshit," and then thought of "Cliff", the barfly postal worker who hung out at Cheers. That's the ticket. AI is Cliff.
 
I recently read or saw an article about AI which explained that AI is not a big index of logically derived information but, rather, a probabilities engine. Oversimplified: it scours the web for data that appears related to the prompt, selects the dominant relevant copy, sends it through a language processor, and spits out the result. The AI expert being interviewed said outright, "There will always be errors."

He didn't explain - or I missed it - hallucinations: AI's prime directive to always present itself as authoritative while it's pumping out bullshit.
It's the world's largest autocomplete. It chooses every word based on the statistical probability of one word following another.

Hallucinations are simply the AI selecting things that match its statistical model. If you tell it you want a legal briefing, it will have seen documents with case law references, so it does the same. Since it's not referencing any factual data, it just makes it look like what it has seen in training.
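The "world's largest autocomplete" idea can be sketched in a few lines. This is a toy bigram model, not how a real LLM works (LLMs use neural networks over token probabilities, not raw word counts), and the mini legal-sounding corpus here is made up purely for illustration. It shows the core point: the model picks whichever word most often followed the previous one in its training text, with no reference to facts.

```python
from collections import Counter, defaultdict

# A made-up scrap of "training data" (purely illustrative).
corpus = (
    "the court held that the motion was denied "
    "the court held that the claim was barred "
    "the court found that the motion was granted"
).split()

# Count how often each word follows each other word: a bigram model.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(start, length=8):
    """Repeatedly pick the most frequent next word - autocomplete at its simplest."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(complete("the"))
# -> "the court held that the court held that the"
```

The output is fluent-sounding and shaped like the training text, but it asserts nothing true; it is just the statistically dominant word chain. Scale that up by billions of parameters and you get prose that looks like a legal briefing, complete with plausible-looking case citations, for exactly the reason described above.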
 