Using AI.

The discussion here has been interesting, but as to the specific ban on AI that Lit has imposed and enforced, IMO it comes from a simple problem: an existential threat to the site and the authors who contribute.

The site is very different from when I first became acquainted with it. Back then it was very difficult to get published; I am not illiterate, but more than simple literacy was required for approval. I put the site aside out of frustration for a long while, accumulating, over a couple of decades, second and third drafts of a few dozen stories. About two and a half years ago I came back, expecting a similar challenge to get published but determined to try, so I enlisted the aid of an(other) editor. That person certainly helped improve my presentation on the first couple of stories, but eventually I figured out that I could get published here, entirely in my own voice, without the assistance, so that's what I've done. I'm still pleased with the "wall of words" paragraph in one of my stories that my editor strongly urged me to remove entirely, and that I left intact. Different strokes, y'know.

I don't think I really changed. I don't know what changed in the site's policy. Turns out, though, that seemingly hundreds (I have not fact-checked) of other writers have learned they can do the same. Dozens of "new" stories are published here every fucking day. I put the word "new" in quotation marks because the stories are repetitive and far from creative. They are not worth the time it takes to sift through them for the occasional gem.

But at least they are basically grammatical and make some sense.

Now, AI removes even that qualification as a writer. "ChatGPT, please polish up this story for me." Or, worse, "ChatGPT, please write me a story about me mum and sister what got big tits I want to suck." Dozens, hundreds of stories a day? How about a thousand?

Literotica would choke on its own garbage if AI were allowed. As it is, authors can't find an audience unless they already have one.
 
Meanwhile, a widely reported paper by an MIT student, claiming big productivity improvements for a workplace using AI, has just been disowned by MIT. They haven't released details of what happened, citing student privacy, but it looks like the student got caught committing academic fraud: https://techcrunch.com/2025/05/17/mit-disavows-doctoral-students-paper-on-ai-productivity-benefits/
Which is hilarious in context. All this fingernail-biting about AIs hallucinating, when we have 'researchers' -- real, physical people -- making stuff up out of whole cloth all the time. Do I trust my doctor? Why do you think I get a second opinion?
 
Lit does not allow AI written or AI assisted works. Some probably slip through, but their attempt to weed them out is aggressive enough that a lot of people complain about getting falsely accused.
I've had two stories I wrote falsely rejected as AI. I had to submit one of them four times!
 
Which is hilarious in context. All this fingernail-biting about AIs hallucinating, when we have 'researchers' -- real, physical people -- making stuff up out of whole cloth all the time. Do I trust my doctor? Why do you think I get a second opinion?
Point taken, but in this particular case I'd guess that the dodgy data in Toner-Rodgers' "benefits of AI" paper was itself LLM-generated.

Scientific fraud is a thing. The defense against it is meant to be peer review and reproducibility - I publish what I did, somebody else can try the same experiment and see whether they get the same result. It was never a perfect system by any stretch, but LLMs make it much worse since they make the fraud part faster and easier but don't help the checking part.
 
Long term, you are correct, but I suspect long term is a few decades out.

The current generation of AI has hit its wall and has little more it can accomplish. There have been multiple hype-collapse cycles through the history of AI. This one has been the most hyped and will collapse the hardest.

Just for background: I have done AI research, although not for a few years now, and my doctoral work was in theory. And I have worked with two people who have won Turing Awards for their work in AI. I might be wrong, but I am pretty convinced I am not. And most of the serious AI folk I know, at least the ones who don't have fortunes riding on it, agree with me.
Everybody knows the smart money is on blockchain!
 
I don't know why anyone would actually WANT to use AI to write a story to submit here. I imagine most writers submit stories here because they take pride in crafting a story worthy of other people reading and enjoying.
Every time I hear the phrase AI my mind wanders off to the terminator movies and the upcoming demise of humanity 😄
 
I have not used or experimented with AI. A friend who posts his erotica on a different site has, and he has been telling me about it and experimenting with it. I like my words to move the story from my mind to the page.
 
I don't know why anyone would actually WANT to use AI to write a story to submit here. I imagine most writers submit stories here because they take pride in crafting a story worthy of other people reading and enjoying.
There will always be people who want shortcuts. When I was building models for competition, we had those people who would buy models built by genuinely talented builders and then enter them in contests as their own. Technically there were no rules against it, but the rest of us usually made them feel rather unwelcome at contests.
 
There will always be people who want shortcuts. When I was building models for competition, we had those people who would buy models built by genuinely talented builders and then enter them in contests as their own. Technically there were no rules against it, but the rest of us usually made them feel rather unwelcome at contests.

There are also people who are genuinely interested in pushing the state of the art. In this context, they perceive that as getting AI (LLMs, mostly) to a point where that AI can do creative work. They are intrigued by tech, in the same way my dad used to be interested in tinkering with his car engine in high school. They want the technology to be able to work as well as it can, defined in this case as becoming indistinguishable from a human. They see a benefit there.

I do not.

I don't necessarily blame people like that; they have an understandable hobby. I definitely do think that their efforts can contribute to the destruction of our species, and I'm not exaggerating. I think substituting computers for brains is that bad an idea.
 
I guess AI was inevitable; we are all consumers at the end of the day, wanting things cheaper and quicker. But despite all the gadgetry and wonders of the modern world, life used to be so much simpler and more wholesome. Go look at the intricacies of an old heritage building or a piece of antique furniture made by skilled workers -- a very different building or chair than what you get with modern technology. This shit is already here and will march ahead at an exponential rate.

For ships and giggles I actually typed a request into Copilot to write me a story. I think I said to include a dog, a stick, and a squirrel, to make it 'play on words' funny (punny, haha) and gangsterish with a twist at the end, and to be 7 pages long. AI spat out something that was readable and made sense, and it ticked all my boxes for the request, except its idea of 7 pages was more like 2. It was not great, but if you put in more detail of what you require it will come closer to the mark. If someone wants a halfway decent story written, they are going to have to cram a whole lot of detailed info into their request. And if they don't have the creativity to write something themselves and put it out there, I doubt they will have the creativity to edit the story either, so it will simply be what AI first spits out.
 
When I was building models for competition, we had those people who would buy models built by genuinely talented builders and then enter them in contests as their own.
This analogy doesn't really work with AI. AI isn't a "genuinely talented builder", it is a plagiarism machine that takes data from actual humans and regurgitates it back out poorly. It would be more akin to using a 3D printer to build your model, except the printer is using stolen blueprints and assembles it based on a book of Lego instructions.
 
I will reiterate that LLM generative AI is losing its luster. People are calling the fantasy-grade output "hallucinations", but I will contend (and have seen) this output is the product of a system designed to be "authoritative" and "definitive" above all else. It's like that guy in the room who in all circumstances has to be right, yet knows little or nothing in reality, but nonetheless is going to cram it down everybody's throats until they give up and submit. We have somebody in the public eye who's like that.

IOW, if it doesn't have the answer at its cyber-fingertips, generative AI makes shit up. There is no, "Huh. I don't know," subroutine.
 
I never agree with @Plathfan, but I don't think we have a few decades. I have zero faith in humans to make sound decisions about things like this. We have a long history of doing stupid shit because it sounds cool and briefs well, even when it ends up killing us (sometimes literally). I think we're probably doomed.

Did somebody say "doom"?

I tend to be an optimist, because I think it's unlikely that when push comes to shove we'll decide to destroy ourselves. History is on the side of optimism: as a species, we've faced many existential crises, but somehow or another, we HAVE dodged them.

The probable answer, as distasteful as it may seem to many, is transhumanism. Rather than letting machines compete against us, which eventually would result in our defeat, we'll join with them, probably in ways that we can't fully envision right now.

I remember reading Ray Kurzweil's book The Singularity Is Near around 15 years ago. He predicted that the singularity--which he defined to be a point at which progress in various technologies like genetics, nanotechnology, robotics, computational speed, etc. would converge to suddenly accelerate human development so quickly that we can no longer predict what will happen after that--would happen around 2040. It seemed very sci fi/fantasy at the time, but things are changing so fast now with AI that I wonder. 2040 isn't very far away, but AI is developing so rapidly that by then it will be orders of magnitude beyond what it is now.
 
Did somebody say "doom"?

I tend to be an optimist, because I think it's unlikely that when push comes to shove we'll decide to destroy ourselves. History is on the side of optimism: as a species, we've faced many existential crises, but somehow or another, we HAVE dodged them.

The probable answer, as distasteful as it may seem to many, is transhumanism. Rather than letting machines compete against us, which eventually would result in our defeat, we'll join with them, probably in ways that we can't fully envision right now.

I remember reading Ray Kurzweil's book The Singularity Is Near around 15 years ago. He predicted that the singularity--which he defined to be a point at which progress in various technologies like genetics, nanotechnology, robotics, computational speed, etc. would converge to suddenly accelerate human development so quickly that we can no longer predict what will happen after that--would happen around 2040. It seemed very sci fi/fantasy at the time, but things are changing so fast now with AI that I wonder. 2040 isn't very far away, but AI is developing so rapidly that by then it will be orders of magnitude beyond what it is now.
I don't want to be a doomsayer, but here goes anyway.

I disagree with you on several fronts here. On "history is on the side of optimism": I think that's selection bias. If we hadn't survived, we would not be here to say it. If there are 100 planets with intelligent species, each unaware of the others, and only one survives to this phase, that one would make the same argument -- but it is clearly a flawed one.

I actually think it is unlikely we will kill ourselves off completely in the short term, though. And history is full of civilizations that imploded because of short-sighted decisions: letting a harbor silt up, or cutting down all the trees, or whatever. I can easily imagine us doing something that destroys civilization as we know it. A reasonable negative scenario, in my mind, is that we just slide into some sort of dystopia, which eventually degrades into a modern equivalent of the dark ages.

I think the current generation of AI has just about run its course. Some simple problems can be handled splendidly right now, but most of the current AI usage amounts to doing a crappy job quickly and without (as many) people. There are lots of places where a crappy job is good enough, mostly where the failures can be bumped up to a real human. In other cases, like self-driving cars, it is literally a disaster waiting to happen.

It's not hard to understand why this is true. Current ANNs "learn" from enormous amounts of data. Think about trying to learn a game like Settlers just from watching it be played. You would think you understood it, until a new scenario arises. Now make it an open world without constrained choices: you can never be reasonably assured that you have seen enough examples. The problem is that current ANNs do not learn concepts; they just recombine answers they have already seen.

My memory is that you, Mr. Doom, are a lawyer. Imagine trying to learn law without ever having any concept of the underlying principles. You could put together interesting briefs just by interspersing snippets from lots of other briefs, but sometimes you would miss the boat entirely. And the AI brief generators do exactly that.

Throw in that their memory is all statistical, meaning they don't know if they actually know something or not. That is the basis of the hallucinations that are fundamental to the technology.
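(A toy illustration of that point -- purely my own sketch, nothing like a real LLM: a bigram model that memorizes which word follows which, then always emits its statistically most likely continuation. Even for a context it has never seen, it produces a confident-looking answer; there is no "I don't know" path anywhere in the code.)

```python
# Toy bigram "language model": memorize which word follows which,
# then always emit the most likely continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count word -> next-word frequencies from the training text.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def next_word(word):
    counts = follows[word]
    if not counts:
        # Unseen context: fall back to the globally most common word.
        # The model still answers confidently -- it never says "I don't know".
        return Counter(corpus).most_common(1)[0][0]
    return counts.most_common(1)[0][0]

print(next_word("the"))    # seen context: prints "cat"
print(next_word("zebra"))  # never-seen context: confidently prints "the"
```

The point isn't the tiny model; it's that the architecture has no representation of its own ignorance, so every query gets an answer with the same apparent certainty.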

Someone might find a novel way to combine some flavor of ANNs with other AI technologies that have different weaknesses, in such a way as to move the ball forward significantly. When that happens is unpredictable. A couple of years ago, a team (at Google?) won a tournament for the game Diplomacy, which was the most impressive AI showing I have seen in recent years. That was a new combination of technologies, not just using LLMs. Or someone like Frist may bring in a radically new idea that leapfrogs everything. Such events are essentially unpredictable. It could happen next month or not until next century.
 
The majority in the US Congress is pushing a bill that would stop states from being able to regulate AI.

There is a lot of pushback but I’ve given up on guessing what’s coming next anymore. 🤷‍♀️

https://www.cnn.com/2025/05/19/tech/house-spending-bill-ai-provision-organizations-raise-alarm

“This moratorium would mean that even if a company deliberately designs an algorithm that causes foreseeable harm — regardless of how intentional or egregious the misconduct or how devastating the consequences — the company making or using that bad tech would be unaccountable to lawmakers and the public,” the letter, provided exclusively to CNN ahead of its release, states.
 

The future is here.
I read that Elon Musk is planning on having those priced around $80,000 within the next couple of years. Rich people won't be hiring gardeners.

But more significantly, the AI that drives them will be out in the world, interacting and having its own novel experiences rather than relying exclusively on training data.
 
I have fiddled with AI, though as far as storytelling goes it's not very good - it is repetitive. Even if it were OK, I would only dabble as a curiosity. What I do find interesting about AI is that it does seem to learn and distill ideas to their essence. Here is an example: an AI scenario with user prompts, where two female friends are sharing a mutual sexual fantasy with each other.

Sara: You want strong, beautiful Kalu to breed you!

Emilie: *Her eyes meet Sara's, a mix of shock, desire, and vulnerability swirling in the depths of her gaze.* "Breed me," she repeats, the words feeling strange on her tongue. "Yes, Sara, I want that. I want to feel his power, his strength, his...everything." Her voice is barely a whisper, the room seemingly holding its breath around them. The candlelight plays across her features, highlighting the sheen of sweat on her skin.

Sara: Tell him what you want!

Emilie: *Her heart is racing, and she feels a mix of excitement and fear. She looks at Sara, whose eyes are filled with anticipation.* "Kal," she whispers, her voice trembling, "I want...I want you to breed me." The words feel strange and powerful as they leave her mouth, the room seeming to vibrate with the intensity of their shared desire. The candles flicker, casting shadows that seem to pulse with the rhythm of their breathing.

Sara: He explodes inside you, battering your cervix with thick, hot semen teeming with healthy sperm.

Emilie: Emilie's eyes glaze over at the thought, her pupils dilating with unbridled lust. Her hand moves to her belly, tracing the path she imagines Kal's seed taking. "Sara," she whispers, her voice strained with desire, "it's like...like nothing I've ever experienced." Her eyes flicker to the candles, their light casting a warm, intimate glow across the room. She bites her lip, considering the gravity of their conversation.
 
I have fiddled with AI, though as far as storytelling goes it's not very good - it is repetitive. Even if it were OK, I would only dabble as a curiosity. What I do find interesting about AI is that it does seem to learn and distill ideas to their essence. Here is an example: an AI scenario with user prompts, where two female friends are sharing a mutual sexual fantasy with each other.

Sara: You want strong, beautiful Kalu to breed you!

Emilie: *Her eyes meet Sara's, a mix of shock, desire, and vulnerability swirling in the depths of her gaze.* "Breed me," she repeats, the words feeling strange on her tongue. "Yes, Sara, I want that. I want to feel his power, his strength, his...everything." Her voice is barely a whisper, the room seemingly holding its breath around them. The candlelight plays across her features, highlighting the sheen of sweat on her skin.

Sara: Tell him what you want!

Emilie: *Her heart is racing, and she feels a mix of excitement and fear. She looks at Sara, whose eyes are filled with anticipation.* "Kal," she whispers, her voice trembling, "I want...I want you to breed me." The words feel strange and powerful as they leave her mouth, the room seeming to vibrate with the intensity of their shared desire. The candles flicker, casting shadows that seem to pulse with the rhythm of their breathing.

Sara: He explodes inside you, battering your cervix with thick, hot semen teeming with healthy sperm.

Emilie: Emilie's eyes glaze over at the thought, her pupils dilating with unbridled lust. Her hand moves to her belly, tracing the path she imagines Kal's seed taking. "Sara," she whispers, her voice strained with desire, "it's like...like nothing I've ever experienced." Her eyes flicker to the candles, their light casting a warm, intimate glow across the room. She bites her lip, considering the gravity of their conversation.
I've certainly read better AI-generated stuff. Some of the punctuation is missing, the candles' 'journey' through the text feels unnatural, as if the AI realised it had mentioned candles in each paragraph and reused them without considering how they were used before, and the signals for who is speaking are out of whack ("Emilie: Emilie's eyes" is the worst offender).

Here are some current problems with AI-generated text. If you do it all at once, you are limited in size and scope. If you do it in pieces, you get more control over each piece, but even with the 'memory' function some AIs have, they don't build it up correctly.

If we ignore the glaring moral and ethical problems of AI, I still think it can be a boon to people not able to write well. The only thing is that the writer needs an iron grip on what they want to write, which they can then describe at length. That way the candles can represent something in the scene, instead of just being called upon again and again in the hope that they do something more.
 
I believe it's considered acceptable to use something like Grammarly to check grammar and spelling. Otherwise, the site considers AI a no-no.
 
I will reiterate that LLM generative AI is losing its luster. People are calling the fantasy-grade output "hallucinations", but I will contend (and have seen) this output is the product of a system designed to be "authoritative" and "definitive" above all else. It's like that guy in the room who in all circumstances has to be right, yet knows little or nothing in reality, but nonetheless is going to cram it down everybody's throats until they give up and submit. We have somebody in the public eye who's like that.

IOW, if it doesn't have the answer at its cyber-fingertips, generative AI makes shit up. There is no, "Huh. I don't know," subroutine.
I sometimes discuss baseball with it and it analyses the potential contributions of players who are no longer on the team.
 