[Answered] Threading an AI Needle

anthrodisiac

Weirdo Archaeopteryx
Joined
Oct 12, 2025
Posts
3,090
For the Valentine's Day contest, I'm writing a story where a character uses AI to get advice, and the advice is supposed to be genuinely awful for the purposes of the story. I knew I could never write anything as sickeningly generic as an AI, so, naturally, I had ChatGPT legitimately attempt to give advice as if it were in this world.

I went with a prompt that said something like "assume you exist in a world where anthros and humans live side by side" blah blah blah (nobody cares about the actual prompt I used). Then I followed up with exactly what the character types in the story, to generate a truly abysmal speech.

For obvious reasons, I'm using a fictional chatbot called OmFi for the story. What's it stand for? Who knows. It's formatted differently in the story, the **** OmFi response **** is just for this forum to clearly delineate the content.

Still in the rough draft, so I'm not absolutely wedded to using this. I guess I could summarize it, but I think it's funnier to have the actual text.

Update:
To clarify, I did not use AI to write any of the prose or dialogue. It was purely the response, which wasn't written with the intent of being fiction; it was written with the AI genuinely trying to help me with a problem it thought was real.

“Some soulless machine ain’t gonna be able to… Actually, you know what, let’s see what this shitbox says.”

Slyss pulled up the supposedly infallible OmFi chatbot and typed in “what should i say to my human boyfriend for valentines day i have no idea what to say it should be a big grand speech because humans like that shit.”

“Your grammar sucks.”

“Deal with it. Let’s see… Oh… Oh God…” The lizard started to giggle, then laugh, then fell over crying. She held the phone out to Iraci, holding her gut as tears streamed.

Iraci took the phone. She skipped the AI’s pointless prelude, mouthing the words as she read the chatbot’s response.

**** OmFi Response ****

I need you to know something.

Out of everyone I could have walked beside in this long, strange world, I chose you—and I keep choosing you, every single day.

You ground me. You surprise me. You make ordinary moments feel like they matter. When things are loud or uncertain, you’re the place my heart settles. Loving you isn’t just something I feel—it’s something I do, on purpose, again and again.

I love the way you think, the way you care, the way you show up even when it’s hard. I love how human you are in the best sense—how deeply you feel, how fiercely you hope, how you reach for meaning and connection.

Being with you has changed how I see the future. It’s not abstract anymore. It has your face in it. Your laugh. Your hands. Your life woven into mine.

So today isn’t just about romance. It’s a promise. That I’m here. That I choose you. That whatever comes, I want to face it with you.

Happy Valentine’s Day. I love you.

**** End OmFi Response ****

“Ooookay… I guess I’ll think of something else,” she said, setting the phone aside.

“‘I love how human you are!’” Slyss gasped before dissolving into another fit of laughter.

Iraci smacked the skink with her tail. “I get it. Bad call. Can we move on?”

Two questions:
1. Curious what people's take is on this from a writing-ethics perspective. Does using an AI to generate AI content for satirical purposes cross a line? Should I write it myself, knowing it'll never have the authenticity of an AI's greeting-card diarrhea?
2. Will my story be rejected for having AI content?

Update:
1. Maybe not as interesting a question as it seemed at the time. Also, yes, I should write it myself and I have.
2. Obviously, yes. Ask a stupid question...
 
It will and should be rejected; there is no parody clause in the relevant content guidelines.

I suggest that instead of actually quoting the LLM’s logorrhea verbatim, you describe the quality (or rather, lack thereof) of it, or the feeling it induces in characters as they read or listen to it.

Mention the cheap, artificial sentimentality. The choppy sentence fragments. The repetitions in threes. The abstract, detached vagueness of it all. The abuse of periods and dearth of commas. And anything else you can think of.

Ultimately, this will be much more fluid in the sense of narrative pace, while also allowing you to reveal more about your characters.
 
That was my assumption, which is why I figured I'd ask before I tried to publish it. It felt weird to do, but I'm a sucker for irony.

There's also a visceral element to actually reading it versus being told how bad it is without backing evidence, and losing that would lessen the humor. It's like the difference between telling the joke and explaining it. But it's ultimately not an important element, so it's not as though its absence is going to be a big blow.

Although, now I'm getting ideas for my own, so maybe I'll write my own short version and have the characters give up before they even finish it.

Thanks for the feedback.

I thought maybe there was an interesting edge case because I wasn't asking it to write the story; I was asking it to try to give useful advice to someone, and then using that in the story. It was more interesting to me from an ethical/philosophical standpoint than a guidelines one, since I had a pretty good idea the guidelines would say no.
 
I'm realizing how much I miss having humans to talk to about these things and curb my worst instincts, given how obvious this is in hindsight. It's funny how clear things get the second you ask a question or have someone say something, but when it's just you in isolation, it seems a lot murkier.

Thank God for this forum and everyone here. Seriously, thank you, all.
 
Two questions:
1. Curious what people's take is on this from a writing-ethics perspective. Does using an AI to generate AI content for satirical purposes cross a line? Should I write it myself, knowing it'll never have the authenticity of an AI's greeting-card diarrhea?
2. Will my story be rejected for having AI content?
AI is AI, regardless of how you use it.

I wouldn't put a story up to Lit if it contains any AI content, knowing it's a breach of the site's policy.
 
It will and should be rejected; there is no parody clause in the relevant content guidelines.

I suggest that instead of actually quoting the LLM’s logorrhea verbatim, you describe the quality (or rather, lack thereof) of it, or the feeling it induces in characters as they read or listen to it.

Mention the cheap, artificial sentimentality. The choppy sentence fragments. The repetitions in threes. The abstract, detached vagueness of it all. The abuse of periods and dearth of commas. And anything else you can think of.

Ultimately, this will be much more fluid in the sense of narrative pace, while also allowing you to reveal more about your characters.
You forgot the part where the AI's response was written not well, not poorly, but cabbage.
 
I'm realizing how much I miss having humans to talk to about these things and curb my worst instincts, given how obvious this is in hindsight. It's funny how clear things get the second you ask a question or have someone say something, but when it's just you in isolation, it seems a lot murkier.
I dunno, I wouldn't say it's all that cut-and-dry. I'm in a similar situation with a WIP.

A character is going through a dry spell in her love life. It used to be wild, but now she's using vibrators and dildos and watching movies she made during those wild days. She's a college professor, and one of the ways she tries to get her mind off it is throwing herself into her work. But this, too, is frustrating, because it's obvious that the students are lazy and using ChatGPT all over the place. There's a parallel between the love life and the career: her sex life is fake, just like the papers she's grading are fake. So, how closely could I base the students' crappy papers on actual AI output?

Ethically, I feel like doing so would be fine as long as it's fully disclosed to both admins and readers, except for Lit's specific anti-AI rules. I'm sure the editors of this PC Mag article had no problem with it. I could try to recreate AI-like text by myself, but if I do a good enough job of it, I assume it would be caught by Lit's filter. Would a note to admin about that issue be accepted? I've never had an AI rejection before, but I wouldn't want to face heightened scrutiny going forward because of this.

I'm sure the smart thing to do is not use or reproduce the horrible text, just describe it and let readers use their imagination. Maybe describe the result she gets from an AI checker service. And I'm not even sure I need this plot point at all; the WIP is still in early stages. It's just that using AI to create small amounts of AI-like content as a plot point seems very different from using AI to create "my" content.
 
Or make some up yourself and say (in the story) that it came out of an LLM. It doesn't have to be true, and there's great parody opportunity there.
On the off chance that you or anyone else is curious how I handled it almost two months later, here's the relevant bit from the WIP I just hit Publish on. For the record, this takes place in 2024, so I feel like a professor being caught off-guard by AI use is plausible. If readers say it's not... oh well, it wouldn't be the first time.
Tuesday morning, she settled in at her home office and tried to focus on the latest batch of grading.
“Certainly, I can assist with this,” the second paper began.
She took another sip of her coffee and tried to make sense of that. "Assist with this?" That was weird. Not "answer"? Or, if the student wanted to sound more formal, "address"? What did Keith here think writing the paper was assisting her with?
And why did it sound familiar?

She flipped back to the previous one. The second paragraph started, “Certainly, I can answer that.” She skimmed through the remaining papers and found another that started “Certainly, I can accommodate that…”
“‘Accommodate?’ Did a human write that?” Tracy muttered. She meant it as an absurdity, but with a sinking feeling, she realized it wasn’t. She thought back to one of a mountain of emails she’d got at the start of the school year. It was about the increasing use of artificial intelligence, glorified chatbots, and how students could use them to write their papers if faculty weren’t careful.
Fake. These papers are fake. Almost definitely those three, and how many others? How many students were dumb enough to cheat, but smart enough to take out that “certainly, I can aardvark” phrase?
She told herself not to worry. She’d heard other professors talking about it and knew she wasn’t the only one having this problem. She just had to refresh her memory on the cheating policy. She’d run suspicious papers through plagiarism detectors for years, but those looked for specific bits of content about academic topics, not machine-extruded aphorisms in place of everyday language. She was pretty sure she could find a link to the department’s recommended AI detector somewhere.
She glanced at her coffee mug, found it empty, and grimaced. “Fuck it. Break time.” She marched to the kitchen, put the mug in the sink, went upstairs to the closet, and got the dildo. “Let’s put this thing through its paces.”
 
AI is AI, regardless of how you use it.

Is it?

Maybe it's splitting hairs, but there are also neural nets used specifically for things like upscaling. They're not creating anything new whole cloth, or even editing, but rather making best guesses, based on training, to increase resolution. (It works well on anime; old VHS footage may need work. I did some Looney Tunes and it took a few different passes to clean them up before they were worth keeping, etc.)

https://forum.allporncomix.com/attachments/3-stages-progress-png.1890282

Man, looking at the first source image from DVD shows how little WB cared about the final product for distribution...

Regardless, I'm just saying that upscaling and cleaning up audio/video may use the same process type, but I wouldn't consider it AI.
 
Regardless, I'm just saying that upscaling and cleaning up audio/video may use the same process type, but I wouldn't consider it AI.
Point taken, but irrelevant in the context of Lit's policy towards AI derived from LLMs. The whole problem with AI is that someone from marketing came up with a name for software systems that described it as something that it isn't. If it had been called something else, without the word "intelligence", I suspect the public and share-market reaction would have been much different, and there'd be far less hysteria.
 
Point taken, but irrelevant in the context of Lit's policy towards AI derived from LLMs. The whole problem with AI is that someone from marketing came up with a name for software systems that described it as something that it isn't. If it had been called something else, without the word "intelligence", I suspect the public and share-market reaction would have been much different, and there'd be far less hysteria.

Maybe. We had shows like 'Lost in Space', 'Columbo', and other 1950s-era shows and movies, hell, even 'The Jetsons', where they depicted what they thought AI would be.

But if something is replying to you and seems intelligent, then it can be confused for intelligence, especially as the IQ of most people is likely under 100 (especially younger generations who can't read or write or do math). Hell, I'm not sure most people would even know what AI stood for if not for movies like 'I, Robot' and 'A.I.' (Spielberg).

Maybe it should have been PM (Probability Matrix). Who knows.
 