Why is everything labeled as AI now????? I can't even post a story anymore.

The market is a Ponzi scheme. That has no relevance to the real world. The market crashed in 2000 too after the tech boom. But the internet has still changed the world. AI is doing the same, and the market crashing around it will make no difference.
AI rejections? Check
AI avatar? Check
"AI is good, actually" ? Check
"Actually, what the site is approving now is the real slop" ? Check

Some questions answer themselves.
 
My callouts aren't very pleasant, I guess, and many people would rather treat Lit as their happy place and avoid hardasses like me.
Your callouts are vague and unhelpful. Offering a pat on the back to whoever you perceive as the downtrodden is not getting anyone published.
 
Hello, I'd like to ask a question as well. How can we detect AI in a text written in French without using a French-language AI screening tool? The sentence structure, grammar, and spelling are completely different. Furthermore, even if you avoid spell checkers like Copilot, or reject the sentence changes they suggest, AI is now built into ordinary word-processing programs themselves. I sincerely think that, much to your dismay, you'll have to change your approach, unless you only receive handwritten and scanned texts, and even then, there are programs capable of reproducing handwriting. If you read a text, you quickly get a sense of the author's style and sensibility. And if their text is rubbish, then no one will give it good marks. I find this debate rich and interesting. Thank you and have a good day.
 
Hello, I'd like to ask a question as well. How can we detect AI in a text written in French without using a French-language AI screening tool? The sentence structure, grammar, and spelling are completely different. Furthermore, even if you avoid spell checkers like Copilot, or reject the sentence changes they suggest, AI is now built into ordinary word-processing programs themselves. I sincerely think that, much to your dismay, you'll have to change your approach, unless you only receive handwritten and scanned texts, and even then, there are programs capable of reproducing handwriting. If you read a text, you quickly get a sense of the author's style and sensibility. And if their text is rubbish, then no one will give it good marks. I find this debate rich and interesting. Thank you and have a good day.
It can and does (or, at least, it should if I extrapolate what I know). Unfortunately, explaining the mechanism gives the game away in the same way that guiding someone's rejected work to publication would.

I'm sorry. I know that's not a good answer. I wish I could do better.
 
The market is a Ponzi scheme. That has no relevance to the real world.
No, it isn't. It is a measure of expectations for future profits that are based on real-world activity. Invest in the right stocks, and the gains are real.

A Ponzi scheme makes no profit. Invest in one of those, and the returns, if you are lucky to get any, are the money that somebody else put in.
 
Your callouts are vague and unhelpful. Offering a pat on the back to whoever you perceive as the downtrodden is not getting anyone published.
Am I supposed to instead play the detective and gatekeeper as you do? I realize that not everyone who complains here is truthful about how they write. But I also realize I can't tell who is and who isn't from a mere glance and a couple of posts here, so I'd rather treat all such posts as legit until proven otherwise.

You know, the old presumption of innocence? It's how modern society works most of the time. But you would rather treat those people as guilty until proven otherwise. That's the choice you're making because of your bias towards Lit and Laurel, the bias you already admitted having. How is my approach worse than yours?
 
No, it isn't. It is a measure of expectations for future profits that are based on real-world activity. Invest in the right stocks, and the gains are real.

A Ponzi scheme makes no profit. Invest in one of those, and the returns, if you are lucky to get any, are the money that somebody else put in.
As far as I understand it, that's exactly how some of the companies operate.

You're "not-wrong" that buying and selling stock shares doesn't work that way, but in this case, it's buying and selling shares in Ponzi schemes.
 
It can and does (or, at least, it should if I extrapolate what I know). Unfortunately, explaining the mechanism gives the game away in the same way that guiding someone's rejected work to publication would.

I'm sorry. I know that's not a good answer. I wish I could do better.
Unfortunately, it doesn't always work, and I'm experiencing that firsthand right now since all my texts are being rejected and I can't find a publisher to review them. But thank you for taking the time to read my post.
 
Am I supposed to instead play the detective and gatekeeper as you do? I realize that not everyone who complains here is truthful about how they write. But I also realize I can't tell who is and who isn't from a mere glance and a couple of posts here, so I'd rather treat all such posts as legit until proven otherwise.

You know, the old presumption of innocence? It's how modern society works most of the time. But you would rather treat those people as guilty until proven otherwise. That's the choice you're making because of your bias towards Lit and Laurel, the bias you already admitted having. How is my approach worse than yours?
Because mine results in some of those people getting published, the thing they wanted in the first place, and yours just makes more people mad.

I am biased toward the site, and biased toward treating rejections with baseline skepticism. From there, once you get them talking, people usually tell you who they are pretty quickly.
 
The market is a Ponzi scheme. That has no relevance to the real world. The market crashed in 2000 too after the tech boom. But the internet has still changed the world. AI is doing the same, and the market crashing around it will make no difference.
We will see…

The internet was an idea whose time had come and an obvious communal good (before it got enshittified). AI is a way to make more money for those who already have undreamed-of riches. It is being done to us, not for us.
 
I'm experiencing that firsthand right now since all my texts are being rejected
The situation is absolutely brutal and appears to be getting worse. Sorry to hear you are experiencing this. Many of us are in the same boat. Right now, my solution may have to be to search for LE alternatives.
 
Unfortunately, it doesn't always work, and I'm experiencing that firsthand right now since all my texts are being rejected and I can't find a publisher to review them. But thank you for taking the time to read my post.
I am sorry. I remember you saying your work was in French. I'm double extra not capable of helping in a language I don't speak (let alone have mastery of).

You're right that it's not perfect. I'm sorry that you're one of the fringe cases. It sucks that this has to happen at all.
 
Am I supposed to instead play the detective and gatekeeper as you do? I realize that not everyone who complains here is truthful about how they write. But I also realize I can't tell who is and who isn't from a mere glance and a couple of posts here, so I'd rather treat all such posts as legit until proven otherwise.

You know, the old presumption of innocence? It's how modern society works most of the time. But you would rather treat those people as guilty until proven otherwise. That's the choice you're making because of your bias towards Lit and Laurel, the bias you already admitted having. How is my approach worse than yours?
More importantly, you dismiss what you don't like out of hand. I know full well that you've never read my AI rejections help desk thread. At first I just thought you were being obtuse, but now you've got a half-cocked thread of your own and not once, anywhere, from anyone, does it include the only advice Literotica has ever given on AI rejections. I give that advice to everyone (and it works about 30% of the time), so if you'd read it you'd know that the first thing anyone should do after receiving a rejection is to resubmit with a note, but you're all too busy chasing wild theories and making mountains out of molehills. Crank behavior that is uninterested in being productive.
 
Because mine results in some of those people getting published, the thing they wanted in the first place, and yours just makes more people mad.
That's not true. I tried giving my best advice to everyone. I also pushed forward an idea of creating a self-help guide, offering our best guesses about how to fix their problems. I maintain my position that such a guide would be helpful only to those who actually wrote (most of) their stories. Needless to say, that idea gained little traction here, in this den of selfishness.

I am biased toward the site, and biased toward treating rejections with baseline skepticism. From there, once you get them talking, people usually tell you who they are pretty quickly.
See, so you ARE playing a detective. Based on your hunches and impressions about who's truthful and who isn't, you offer or withdraw help. It's great when one can trust their instincts so deeply.
 
That's not true. I tried giving my best advice to everyone. I also pushed forward an idea of creating a self-help guide, offering our best guesses about how to fix their problems.
Was the cost of making a thread of your own, of your own volition and without a co-signatory, too high? Are we rationing threads now? We have to collectively decide to help others rather than just fucking doing it? What would that even look like?

Oh wait.
 
but now you've got a half-cocked thread of your own and not once, anywhere, from anyone, does it include the only advice Literotica has ever given on AI rejections. I give that advice to everyone (and it works about 30% of the time), so if you'd read it you'd know that the first thing anyone should do after receiving a rejection is to resubmit with a note, but you're all too busy chasing wild theories and making mountains out of molehills. Crank behavior that is uninterested in being productive.
Are you kidding me now? I've always seen that advice as just one more thing we suggest here that might or might not work, and I've certainly never seen it presented as official advice. I had to check, and no, it's not in Lit's FAQ about AI, nor does it feature in Lit's rejection notice, which I'll paste here now:

Dear Writer,
Thank you for your submission to Literotica. We appreciate the time and effort you've taken to write a story and submit it to our site. However, we've found that we cannot post your submission in its current form. The checklist below may help you in re-examining your manuscript.
Are you using Grammarly, ProWritingAid, Quillbot or similar software, or allowing Microsoft Word grammar check to change your words? Many modern writing packages incorporate AI. Literotica is a storytelling community centered on the sharing of human fantasies. While we do not have a policy against using tools to help with the writing process (i.e. sp, grammar check, etc.), we do ask that all work published on Literotica be created primarily by a human.
If you are using a grammar check program to review your work so that you can make changes (as a spellcheck, to flag punctuation, review grammar, and/or occasionally as a thesaurus), that should be fine. If you are allowing a grammar check program to “rewrite” your words or rephrase your text, that may cross the line into AI generated text/stories (since substantial parts of the final draft may not be written by you). Please see this FAQ for more information: https://literotica.com/faq/publishing/publishing-ai
NOTE: the sentence at the end of this response [[Please feel free to re-submit the story after a Volunteer Editor has examined it, or after you've made revisions.]] does not apply to stories rejected for content or AI issues. Volunteer Editors can help only with grammar, punctuation, and story mechanics issues and are not equipped to deal with AI issues. You may resubmit after you’ve made revisions.


Where does it say resubmit with a note? All it says here, in the freaking official website's advice, is that volunteer editors, in other words, people like you, are not equipped to deal with the AI rejection. The irony, eh?

The only place where that advice exists is Literotica's sticky post in this forum, dated July 24, 2025. Months, years after this whole mess started. So forgive me for not knowing it was official advice.
 
Was the cost of making a thread of your own, of your own volition and without a co-signatory, too high? Are we rationing threads now? We have to collectively decide to help others rather than just fucking doing it? What would that even look like?

Oh wait.
See, this is where we differ a lot. YOU have no problem assuming the role of a gatekeeper, having a help thread of your own where you alone decide who's genuine and who's not. You have the arrogance to think that you alone know best.

I, on the other hand, feel inadequate to figure out such advice on my own. I certainly don't have enough experience or knowledge on how Laurel's algorithm works to figure out what the best approach would be. All things considered, such an approach would require collective wisdom, yours included.

I thank Shelby for trying, and everyone else who chimed in. It wasn't nearly enough for me to truly consider offering it as something that rejected authors should invest their time and effort in, even if I did link it once already, reluctantly so. I doubt I'll do that again.

But that's good for you, no? You alone are back as the sole arbiter of those worthy of help, against Laurel's own advice.

Instead of having a go at me and putting a negative spin on all my responses to rejected authors, maybe take a look, from the outside, at the role you've assumed. I don't doubt your intentions, but you are definitely too assured of your own capacity to judge.
 
Where does it say resubmit with a note? All it says here, in the freaking official website's advice, is that volunteer editors, in other words, people like you, are not equipped to deal with the AI rejection. The irony, eh?
I have demonstrably helped people. Your understanding of how I did that is not required.
The only place where that advice exists is Literotica's sticky post in this forum, dated July 24, 2025. Months, years after this whole mess started. So forgive me for not knowing it was official advice.
Are you trying to insult me for paying closer attention than you do?
 
See, this is where we differ a lot. YOU have no problem assuming the role of a gatekeeper, having a help thread of your own where you alone decide who's genuine and who's not. You have the arrogance to think that you alone know best.
You know full well that I wrestled with this for months before saying anything publicly. Months. July of 2024, I told you, and I was going out of my mind about it. You know full well that not saying anything, and watching everyone fret in ignorance, was killing me.

I'm a doer. This is what I can do.

I am a limited resource. I cannot help everyone, and I make no apologies for looking out for my own mental health.
 
After reading through all the replies, I think it helps to step back and summarize what the real disagreement actually is, because people don’t seem to be as far apart as it first looks, yet everyone is still pushing a side.


Most of us agree on the basic goal: AI-generated submissions flooding the site are a problem, and some form of detection or filtering is necessary. That part isn’t controversial. The problem is how AI detection is currently being treated and what happens after something is flagged.

Right now, automated AI detection tools are being used as if they’re decisive proof rather than rough indicators. That’s where things start to break down. These tools are widely known, including by the people who build them, to produce false positives. They can’t reliably determine authorship, especially when the writing is polished, consistent, traditionally structured, or heavily edited by a human. Since AI models were trained on massive amounts of human writing, overlap in style, grammar, and narrative flow is unavoidable.
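To put the false-positive problem in concrete terms, here is a rough back-of-the-envelope sketch. The volumes and rates are purely hypothetical assumptions for illustration, not measurements of Literotica's queue or of any particular detector.

```python
# Purely hypothetical numbers, chosen only to illustrate how base rates work.
human_submissions = 9_000     # assume most of the queue is genuinely human-written
ai_submissions = 1_000        # assume a smaller share is AI-generated
false_positive_rate = 0.05    # assume the detector wrongly flags 5% of human work
true_positive_rate = 0.90     # assume it correctly flags 90% of AI work

false_flags = human_submissions * false_positive_rate  # 450 human authors flagged
true_flags = ai_submissions * true_positive_rate       # 900 AI pieces flagged

human_share_of_flags = false_flags / (false_flags + true_flags)
print(f"{human_share_of_flags:.0%} of flagged submissions are human-written")  # ~33%
```

Under even these generous assumptions, roughly a third of everything flagged would be human work, which is why a flag can only be a signal to look closer, not a verdict.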


The result is that skilled, careful, or experienced writing can end up looking “suspicious,” while lower-effort or messier writing sometimes passes without issue. That’s a perverse incentive structure. A system where “write worse” becomes implicit advice is not a healthy outcome for a writing platform.


Another major issue is the lack of transparency. When a story is flagged or delayed for AI concerns, authors are often given little to no actionable feedback. There’s no explanation of what triggered the flag, no indication of confidence level, and no guidance on what would actually resolve the issue. That turns revision into blind trial-and-error. Writers are left guessing whether the problem is sentence structure, vocabulary, pacing, editing tools, formatting, or something else entirely.

This uncertainty is compounded by long review delays and silence. For many authors, especially those with a history on the site, that combination is demoralizing. It kills momentum and makes people question whether it’s worth continuing to write at all. And that’s not because they want to use AI; it’s because they can’t tell what standard they’re being judged against.

It’s about the reality that false positives exist and that the current process doesn’t give legitimate writers a fair way to address them.


The suggestion that writers should “just change their style” also doesn’t really hold up as a solution. Writers shouldn’t have to deliberately degrade their work to avoid detection. Many authors have established voices, ongoing series, or reader expectations built over time. Asking them to fundamentally change how they write without even knowing what they’re supposed to change isn’t reasonable. And even then, there’s no guarantee it would work, because the detection criteria aren’t transparent at all.


If the goal is to protect the platform while still supporting real writers, there are more balanced approaches. Automated detection should be treated as a screening tool, not a final verdict. Stories flagged for AI concerns should receive human review before rejection. When something is flagged, authors should receive clear, actionable feedback rather than vague or generic responses. An appeal or verification process for established authors would also go a long way toward reducing frustration. Clear guidance on what tools are allowed, such as grammar checkers or editing software, would remove a lot of uncertainty.
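As a sketch of what "screening tool, not final verdict" might look like in practice, here is one possible triage flow. Every threshold, field, and outcome below is a hypothetical assumption for discussion; it is not a description of how Literotica's moderation actually works.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    author_history: int  # previously published stories on the site
    ai_score: float      # detector output in [0, 1], treated as a signal only

def triage(sub: Submission) -> str:
    """Route a submission based on the detector signal instead of auto-rejecting."""
    if sub.ai_score < 0.5:
        return "publish"                        # weak signal: no action needed
    if sub.author_history >= 5:
        return "human_review_with_feedback"     # established authors get a person, not a form letter
    if sub.ai_score < 0.85:
        return "request_revision_with_reasons"  # tell the author what triggered the flag
    return "reject_with_appeal_option"          # even strong flags keep an appeal path open

# Example: a long-time author with a high detector score still reaches a human reviewer.
print(triage(Submission(author_history=12, ai_score=0.92)))  # human_review_with_feedback
```

The point of the sketch is the shape of the flow: the score only routes a story toward more or less human attention, and every path ends in an explanation or an appeal rather than a silent rejection.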

At the end of the day, a system that discourages genuine writers while still failing to fully stop determined AI posters isn’t serving anyone well. Protecting the site matters, but so does maintaining trust with the people who actually contribute to it. If human writing skill becomes a liability instead of an asset, that’s a sign the balance needs adjusting.
 
After reading through all the replies, I think it helps to step back and summarize what the real disagreement actually is, because people don’t seem to be as far apart as it first looks, yet everyone is still pushing a side.
Not sure that all are on one side or another.

Most of us agree on the basic goal: AI-generated submissions flooding the site are a problem, and some form of detection or filtering is necessary. That part isn’t controversial. The problem is how AI detection is currently being treated and what happens after something is flagged.
Agree.

Right now, automated AI detection tools are being used as if they’re decisive proof rather than rough indicators.
We don't know that.

The result is that skilled, careful, or experienced writing can end up looking “suspicious,” while lower-effort or messier writing sometimes passes without issue. That’s a perverse incentive structure.
If so, yes.

Another major issue is the lack of transparency. When a story is flagged or delayed for AI concerns, authors are often given little to no actionable feedback. There’s no explanation of what triggered the flag, no indication of confidence level, and no guidance on what would actually resolve the issue. That turns revision into blind trial-and-error. Writers are left guessing whether the problem is sentence structure, vocabulary, pacing, editing tools, formatting, or something else entirely.
Unfortunately, yes. But I am not sure it is feasible for anything better to happen.

This uncertainty is compounded by long review delays and silence. For many authors, especially those with a history on the site, that combination is demoralizing.
I think it affects any author here. But I am not sure that the long review delays are related to AI. If they are, then it can only be if a human is looking at the results, which contradicts your assertion that the process is fully automated.

It’s about the reality that false positives exist and that the current process doesn’t give legitimate writers a fair way to address them.
What process do you propose?

If the goal is to protect the platform while still supporting real writers, there are more balanced approaches.
I think that is the goal. So what do you propose?

Automated detection should be treated as a screening tool, not a final verdict.
I don't think it is fully automated.

Clear guidance on what tools are allowed, such as grammar checkers or editing software, would remove a lot of uncertainty.
The standard AI rejection message lists several tools. However, using such tools does not guarantee rejection.
 
Most of us agree on the basic goal: AI-generated submissions flooding the site are a problem, and some form of detection or filtering is necessary. That part isn’t controversial. The problem is how AI detection is currently being treated and what happens after something is flagged.
I think this paragraph over-simplifies the problem. I have personally clicked on about a dozen authors' catalogs who've complained about struggling with AI rejections (either publicly on the forum or to me directly). NONE of those authors were producing anything even close to obvious AI slop. Zero.

Of course we need a basic gateway system that detects the obvious crap in order to avoid AI flooding. But it should ONLY detect the obvious crap. We should all be able to look at what is being rejected and have 95% agreement that yes, this is AI slop.

Right now we have the opposite. We have a system that lets a tsunami of crap through while hitting good writers with dubious AI flags. If that system were being proposed, nobody would agree it was a good idea. It's only being defended now by some people because it's already in place.
 