Care to share examples of AI slop you've encountered?

I am having trouble with my heat and thought I'd ask ChatGPT for possible solutions.

I have a tankless water heater/boiler system that supplies both domestic hot water and the water for my baseboard radiators.

If I had followed ChatGPT's instructions on "purging air" from my baseboard radiators, I would have dumped the domestic hot water on my basement floor. The valve I needed to open to do that is in a completely different place; it recommended I open a valve on the bottom of the water heater.
 
Perspective, haha. It's just a very small person in the foreground!

It's the tiny volcano in the creek, middle distance, that's got me perplexed. And those fucking huge white birds in the sky, run for cover!
ComfyUI_01555_ (2).png
And what's this? Minas Tirith for ants? Or was Zeno right and that riverside is actually infinitely long?

Also, there aren't supposed to be people in these at all. The prompts were for landscapes. But because one of the terms was 'fantasy' it's added all these androgynous sword-wielders. You can also see traces of the piss-yellow filter common to AI images creeping in here; that's a result of people generating tons of images in the Studio Ghibli style and those images then being used to train other models. If you want a really awful example of that, the covers of Ali Wilde's novels are AI generated. See Bali Bucket List for example (my new low-stakes conspiracy is that there is no Ali Wilde; it's a collection of editors using ChatGPT to write erotica).
 
Holy hell, look at that sword. They went with the "malformed" style for the crossguard, and I think the blade is wood? And it also merged with the ground. I think it might be a long wand, honestly, but with a crossguard.
 
Also the left hand is a paw, and the index finger on the right hand vanishes. The left leg is pants ending in a clown shoe and the right leg is like a hakama.

In this one, there's a waterfall pouring down a set of otherwise-dry stairs... that lead into a lake. Some interesting window arrangements, too.

ComfyUI_01175_.png

Anyway, the point here is just -- slop!
 
Over the several days (some documented here) of wrestling with ChatGPT to try to get my new impossibly opaque thermostat to work, I've learned two things.

1 - As time went on ChatGPT became progressively more firm in its judgments. "You're not doing anything wrong. You won't be able to set the temperature yourself." "You absolutely will not be able to set the temperature."

2 - It never detected my bad prompts. It never said, "Your statement that it is not programmable does not match what you show on the screen shot." That's just an example. It just took my slop and generated persuasive lies.
 
I've posted examples of what Grok does before but it's quite difficult to talk about what's going on there without breaching the no-politics rule.
Speaking of Grok, people today have discovered that the Grok bot on Twitter will generate deepfakes at will, so that's nice/completely horrifying/literally the kind of behavior that's banned on AI porn subreddits.
 
So I was reading a 2025 book by an author that I've followed since 1985. The plot seemed eerily familiar. I asked ChatGPT if there were any mysteries published where a featured character was an <employment>. ChatGPT told me no, there weren't, but there were some similar. I knew that the book I was reading did, indeed, feature such an <employment>. So I got more specific: did <author> write any books that featured an <employment>? ChatGPT told me, confidently, no. I specified the 2025 book I was reading. It still claimed no such character existed.
 
I mean, that could just be that it isn't up to date enough to know of a book published in 2025.
 
I try to steer well clear of anything AI, but this one was hard to avoid.

I'm looking for a GIF for the "finish the damn story" support thread: from the Sylvester Stallone movie Demolition Man, where Nigel Hawthorne's character tells Wesley Snipes's character, "Don't you have a job to do?" (Spoiler: NH has set WS free to wreak havoc so he can strengthen his control.)

The AI search result above the real results is this:

"In the movie Demolition Man, the quote "Don't you have a job to do?" is spoken by John Spartan, played by Arnold Schwarzenegger. This line is part of a conversation where Spartan, a police officer, is confronted about his responsibilities while dealing with a violent crime. The film explores themes of justice and the consequences of violence, highlighting Spartan's struggle with his past actions and the need for order in society."

This is all wrong.
 
It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way--in short, the period was so far like the present period, that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only.

There were a king with a large jaw and a queen with a plain face, on the throne of England; there were a king with a large jaw and a queen with a fair face, on the throne of France. In both countries it was clearer than crystal to the lords of the State preserves of loaves and fishes, that things in general were settled for ever.

Total slop from the Charles Dickens AI Engine.
 

AFAICT, that tiny volcano in the creek is the model interpolating between mountain shapes and tree shapes.

Note how, in the mountains in the background, there are some shapes that appear to be pointy peaks, others that are apparently trees (poorly scaled due to perspective failure), and others that are somewhere in between. Note also the tree stump on the left, with a kinked-pointy shape that's quite similar to the mountain peak behind it. The tree on the right is more tree-ish, but you can still see some similarity between the shape of its base and the mountains behind it. So that small object in the stream is kind of halfway between a tree and a mountain.
1768343226140.png
 
And if you look at the treeline, the dark green, you can see mountain-shapes in there behind the trees.
Also a couple of spots near the base of the right-hand mountain where the trees are becoming shadows on the mountainside. It's like a crappy Escher ripoff.
 
Here's a screen shot of ChatGPT's response to questions I had about a particular book that I was reading. I changed the novel name, author and profession to avoid spoilers. The book I was reading was published in 2025. There was no other book by that name written by the author. The killer was a lumberjack. I publish this here to demonstrate ChatGPT's insistence that it is correct.

Novel Name (Author, 1992)

  • The killer is not a lumberjack
  • No murderer poses as, is described as, or functions as a lumberjack
  • No tree harvesting or tree cutting role is tied to the perpetrator
  • The crime is rooted in psychological pathology and prior violence, not a lumbering relationship
So if you distinctly remember "the lumberjack did it", one of three things is almost certainly happening:
 
Which is why folks are constantly saying: don't believe a word of an AI response, because it makes stuff up, all the time. If one part of a response is demonstrably wrong, wouldn't you then doubt every other part of it?

I add -ai to every google search I do now, because I don't want to see the AI garbage at all.
 
I think most people agree that AI can give you inaccurate answers. I'd like to see examples of those inaccurate answers. I want to get more and more educated about AI's uses (such as instructions on navigating support websites) and failures. Here are two that I bumped into this week.

I'd gone to our vacation place without a certain cookbook, so I went to ChatGPT to ask for the New York Times Menu Cookbook recipe for Laurel Rice. In the first response, it claimed to be giving me a recipe from the New York Times cookbook (not Menu... important), but there were no bay leaves, and bay leaves are the "laurel" in the name.

In the second it added bay leaves, apologized for not referencing the NYTM cookbook, and gave me a recipe that included cream.

In the final try it seemed to give me the recipe I recognized.

****************
I'm reading a 997-page mystery, and I couldn't remember why a certain suspect had been rejected. I never did get a satisfactory response. Its replies contained weird combinations of specifics (it knew the names of the characters) and generalizations ("physical characteristics didn't match" was one). I was left unsatisfied.

I don't think you need examples (although it's always funny to see the bloopers) because there are pretty clear principles at play.

How AI works. Current LLMs are trained to generate language (bit by bit; literally token by token). They have a mechanism (called "attention") that ensures the generated text is related to your prompt. If I say "peanut butter and _____" you'd probably say "jelly". GPT-3.5 was ~85% likely to say "jelly" and about ~11% likely to say "bananas", with the remaining ~4% on less probable responses. That's all it's doing: making up bits of language that are related to your prompt.
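A minimal sketch of that loop in Python, with a vocabulary and probabilities I've invented to mirror the numbers above (a real model computes a distribution like this over its entire vocabulary at every step):

```python
import random

# Toy next-token distribution for the prompt "peanut butter and _____".
# The tokens and weights are made up for illustration; a real LLM
# produces a distribution over tens of thousands of tokens here.
next_token_probs = {
    "jelly": 0.85,
    "bananas": 0.11,
    "chocolate": 0.03,
    "pickles": 0.01,
}

def sample_next_token(probs: dict) -> str:
    """Pick one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

for _ in range(5):
    print("peanut butter and", sample_next_token(next_token_probs))
```

Note that nothing in that loop checks whether the chosen token is true; it only asks which token is plausible next. That's the root of the baseball problem below.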

It's debatable, but I think some degree of, or facsimile of, reasoning is an emergent property that we don't fully understand... at least insofar as it can produce outputs (like computer software, math answers, etc.) that seem to require reasoning in humans. But that "reasoning" isn't entirely what humans call intelligence. For example, intelligent humans probably know a lot of facts; we really don't know what an LLM "knows." (We know it encodes some facts about the real world in its parameter weights, and we know that we did not provide anything like a human long-term memory.)

AI generates language. So, I think that's all you need to know. You can trust AI as far as you can throw it.

Use it to generate language, then read the language and fix it. Ask it to explain something to you, but check that the explanation is not BS.

Do NOT (ever) ask it for a fact that you don't know unless that fact is essentially text that can be generated. Like, I use it to translate my English slides and quiz questions into Chinese, and it's (GPT-4) at least 90% accurate, because that's its thing (making up text).

Asking for unknown facts. Ask it who the relief pitcher for the winning team in the 1929 World Series was, and what it will do is construct a response like the "peanut butter and _____" problem. It will make up a response, token by token, that will be related to the question. Maybe it will happen to be correct, but more than likely it will be the name of (or a name similar to) some 1920s-era baseball player (maybe a pitcher).

So, don't do that. Do not ever ask for a factual answer when you do not know the factual answer (or cannot easily check the answer).

As a search engine. Do not use AI like a search engine.

(Unless you tell it to perform a search and summarize the results, since summarizing is something it can reasonably be expected to do if it has the capability to perform a web search.)
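A rough sketch of that search-then-summarize pattern using the openai Python client; the URL is a placeholder I made up, and the model name and the 8,000-character truncation are arbitrary choices for illustration, not anything prescribed above:

```python
# Fetch a page yourself, then ask the model to summarize *that text*.
# Assumes the `requests` and `openai` packages and an OPENAI_API_KEY
# in the environment. Passing raw HTML is crude but fine for a demo.
import requests
from openai import OpenAI

client = OpenAI()

page_text = requests.get("https://example.com/some-article").text

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Summarize only from the text provided."},
        {"role": "user", "content": f"Summarize this page:\n\n{page_text[:8000]}"},
    ],
)
print(response.choices[0].message.content)
```

The difference from asking cold is that the model is generating language about text you handed it, which is its strength; the facts come from the page, not from its weights.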

Limited training data. Whatever an AI "knows" is limited to the training data, which has a cutoff date. And probably holes. It's a myth that AI was trained on "everything" on the Internet or that all human knowledge is on the Internet.

Publishers have tried, for example, to keep copyrighted books off the Internet. I'm sure some books were used, but it's a stretch to assume that AI has memorized all books. If it's discussing aspects of a book, it's far more likely that you're seeing a reflection of people discussing that book on a site like reddit. If it can generate the first page of a book, you're probably seeing the results of scraping storefronts that show the first few pages of a book (particularly if that book is popular and is scraped from many storefronts). I'm not trying to start any debate about piracy or ethics, just being clear about what you can expect from an AI in terms of analysis of a book.

Anthropomorphizing AI. Also, don't anthropomorphize AI, because AI hates that.

Ha ha, but seriously, AI are not people; anthropomorphizing leads to misuse and misplaced expectations. Just because AI can chat does not mean AI can do all the things a human can do. So, if you're unaware of the blind spots, you end up with this mishmash of useful and really unhelpful answers.

Example 1: Someone is in the news because she asked Grok to stop making fake near-nudes and the AI said it would stop, but keeps doing it. That's a good example of treating the AI as a person. It has no self-awareness. It cannot make decisions or change its behavior. It just makes up responses to prompts.

Example 2: AI cannot say "I don't know." You may think you've seen AI do that, but when the AI uses that sort of language, it always means that you have run into a guardrail. The guardrail isn't a statement about the knowledge of an AI, it's about training the AI against answering some inquiries. The AI has no idea what it "knows" or does not "know" so it could never reliably convey to you that it doesn't know something. That alone should give you pause about a lot of use-cases. If you're about to ask the AI something and it matters to you whether it knows X, then you may want to skip asking the AI. Otherwise, ask but verify that response against an authoritative source.

Troubleshooting. For me, asking AI about technical problems is very hit or miss. Fedora 43 changed how automatic updates work, and Gemini was flailing. It gave me some good advice and some bad advice. Turns out that to override the default, I needed to copy or create a configuration file, but the AI had me create it in the wrong folder. OTOH, it suggested disabling ad blocking on Gmail to solve a failed SMTP problem, and that worked. I never would have thought of that and probably would have discarded it as unrelated if I'd read it somewhere.

But if you're using an interface that can read technical web pages (like ChatGPT) then you can expect it to read and answer from that when you instruct it to do so.
 
The dangers of AI, especially the AI chatbots designed to be your "FRIEND," were highlighted on a recent episode of Watson.

Teen: Is John Watson a good man?

AI Bot: There is much evidence that John Watson is a good man. He saves lives, both directly and indirectly. He is the leading geneticist in the country and is helping find cures for many genetic defects. He runs a philanthropic clinic. I think you're right that he is a good person.

Teen: Despite that, is it possible he is a bad person?

AI Bot: Absolutely, he lied about seeing your mother. He's sleeping with her, behind your back. I think you're onto something here; we need to dig deeper.
 
A super weird trend is occurring on YouTube (where 20% of new videos are now AI slop). Someone famous dies - for instance, Ozzy Osbourne or Sir Tom Stoppard - and these bizarre, AI videos which claim to be footage of the funeral, with famous people performing, appear. Worse, YouTube's algorithms keep promoting these poor quality fakes. Ghoulish.
 
It disappeared, but I asked Google Lens to search the net for the corgi "want me to eat your homework" meme.

The AI said it was a picture of a tortoiseshell cat...
 