Care to share examples of AI slop you've encountered?

ChatGPT is a vital part of my work life, and it knows I won't tolerate hallucinations. It always cites references, and knows that I expect high-quality references and will check them. It still makes mistakes, but only about once in twenty responses or so, which is pretty damn good.
Basically I use it as a gopher for web research.
 
ChatGPT is a vital part of my work life, and it knows I won't tolerate hallucinations. It always cites references, and knows that I expect high-quality references and will check them. It still makes mistakes, but only about once in twenty responses or so, which is pretty damn good.
Basically I use it as a gopher for web research.
Five percent error rate, huh? Not something I'd trust.
 
Basically I use it as a gopher for web research.
Yeah, it saves steps to ask "how can I chat with a human support person at xxxxxxxxxxxx." Those answers are usually right. But if there's any possible confusion, like with model numbers or operating-system releases, it's much less reliable. And I haven't cured it of sounding terribly sure about its wrong answers.

It is handy to be able to feed it a screen shot.
 
ChatGPT is a vital part of my work life, and it knows I won't tolerate hallucinations. It always cites references, and knows that I expect high-quality references and will check them. It still makes mistakes, but only about once in twenty responses or so, which is pretty damn good.
Basically I use it as a gopher for web research.
Do you actually check if the references exist? It will definitely fabricate them.
 
Do you actually check if the references exist? It will definitely fabricate them.

Always. And yes, sometimes it produces links to nonexistent URLs.
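If you're checking a lot of citations, the dead-link part at least is easy to automate. Here's a rough sketch using only the Python standard library; the function names are mine, and a HEAD request only proves the URL resolves, not that the page actually supports the claim.

```python
# Sketch: batch-check whether cited URLs at least resolve.
# A 200-ish response doesn't validate the citation's content,
# but a 404 or DNS failure flags a fabricated link immediately.
from urllib.parse import urlparse
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def looks_like_url(url: str) -> bool:
    """Cheap syntactic check before touching the network."""
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

def url_resolves(url: str, timeout: float = 10.0) -> bool:
    """HEAD-request the URL; False means a likely dead citation."""
    if not looks_like_url(url):
        return False
    req = Request(url, method="HEAD",
                  headers={"User-Agent": "reference-checker"})
    try:
        with urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (HTTPError, URLError, TimeoutError, OSError):
        return False
```

Feed it the list of links from a response and eyeball whatever comes back False.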

For python coding, ChatGPT 5.2 is pretty damn good, if all you need is code that works (as opposed to elegant code).
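For what "works as opposed to elegant" tends to look like, here's a made-up illustration (mine, not actual ChatGPT output): both functions give the same answer, but one reinvents a standard-library tool.

```python
# Two ways to count word frequencies: the verbose version works,
# the idiomatic one leans on collections.Counter.
from collections import Counter

def word_counts_verbose(text):
    # Works fine, but hand-rolls what the stdlib already provides.
    counts = {}
    for word in text.lower().split():
        if word in counts:
            counts[word] += 1
        else:
            counts[word] = 1
    return counts

def word_counts_idiomatic(text):
    # Same result in one line.
    return dict(Counter(text.lower().split()))
```

Both are correct; if correctness is all you need, the first is good enough, which is the bar being described.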
 