Why is everything labeled as AI now????? I can't even post a story anymore.

The proposed debate remains constructive, but a question arises for non-English speakers: how do we solve the original problem, given that the reviewers, in my case for example, are not French speakers, and that the AI, with certain phrasings, treats them as false positives? I verified this with my own writing by checking it... in fact, I allowed myself to believe that my intelligence was human, and I discovered that part of me was artificial 😄
 
I don't normally respond to every part of messages like this, but sometimes it is important.
After reading through all the replies, I think it helps to step back and summarize what the real disagreement actually is, because people don’t seem to be as far apart as it first looks, but everyone is still pushing a side.


Most of us agree on the basic goal: AI-generated submissions flooding the site are a problem, and some form of detection or filtering is necessary. That part isn’t controversial. The problem is how AI detection is currently being treated and what happens after something is flagged.

Right now, automated AI detection tools are being used as if they’re decisive proof rather than rough indicators.
This is not a meaningful deciding point. The site has tools at its disposal, and it uses them to the best of their abilities.
That’s where things start to break down. These tools are widely known, including by the people who build them, to produce false positives.
Be aware that Lit's AI Detector is not an off-the-shelf solution. It is not the BlackWall, an AI-to-stop-AI solution, for my chooms out there. It is something else.

It does generate false positives. I can't easily explain the nature of this, but they are possible. Even then, the false positives are (almost always) evidence of something else.
They can’t reliably determine authorship, especially when the writing is polished, consistent, traditionally structured, or heavily edited by a human.
This is incorrect. This is not the crux of false positive generation.
Since AI models were trained on massive amounts of human writing, overlap in style, grammar, and narrative flow is unavoidable.
This is incorrect, see above.
The result is that skilled, careful, or experienced writing can end up looking “suspicious,” while lower-effort or messier writing sometimes passes without issue.
This is incorrect, see above, and moreover this is what happens when extrapolating from incorrect hypotheses. You (and many others) spin out into increasingly fantastic and hysterical understandings of the root problem and what can be done about it.
That’s a perverse incentive structure. A system where “write worse” becomes implicit advice is not a healthy outcome for a writing platform.
This is incorrect. Lit's AI detector is not triggered by typos or a lack of typos.
Another major issue is the lack of transparency. When a story is flagged or delayed for AI concerns, authors are often given little to no actionable feedback.
Minor correction: the pending bug was unrelated. Stories that suffered from lengthy pending periods were no more or less likely to ultimately end in rejections. This is not to say that transparency is not an issue.
There’s no explanation of what triggered the flag, no indication of confidence level, and no guidance on what would actually resolve the issue. That turns revision into blind trial-and-error. Writers are left guessing whether the problem is sentence structure, vocabulary, pacing, editing tools, formatting, or something else entirely!
This is correct (though of the listed problems the only accurate entry is editing tools). It is a black box, and must remain so.
This uncertainty is compounded by long review delays and silence. For many authors, especially those with a history on the site, that combination is demoralizing. It kills momentum and makes people question whether it’s worth continuing to write at all. And that’s not because they want to use AI; it’s because they can’t tell what standard they’re being judged against!
This is correct, and highly regrettable.
It’s about the reality that false positives exist and that the current process doesn’t give legitimate writers a fair way to address them.
This is correct and regrettable in the extreme.
The suggestion that writers should “just change their style” also doesn’t really hold up as a solution. Writers shouldn’t have to deliberately degrade their work to avoid detection. Many authors have established voices, ongoing series, or reader expectations built over time. Asking them to fundamentally change how they write without even knowing what they’re supposed to change isn’t reasonable. And even then, there’s no guarantee it would work, because the detection criteria aren’t transparent AT ALL!
That is correct, and highly regrettable advice that literally nobody should be giving. It won't work.
If the goal is to protect the platform while still supporting real writers, there are more balanced approaches. Automated detection should be treated as a screening tool, not a final verdict.
Literotica only has two public-facing employees: Laurel and Manu. Automated screening tools are the first line of defense, which is why, before you make any changes to your document, you should resubmit with a note in the "Notes to the Admin" section of the submissions page attesting that no AI was used to generate your story. This triggers the manual review.

Stories flagged for AI concerns should receive human review before rejection. When something is flagged, authors should receive clear, actionable feedback rather than vague or generic responses. An appeal or verification process for established authors would also go a long way toward reducing frustration.
This is correct, and if you follow the guidelines (which are obtuse but are where most of my information came from) that is what will happen.
Clear guidance on what tools are allowed such as grammar checkers or editing software would remove a lot of uncertainty.
This is more difficult, and a moving target. Grammarly is a good example of a product that was fantastic three years ago and which will completely fuck up your story today, if you give it half a chance. The technology here is moving too fast to keep up with what's safe and what isn't.

In lieu of this, follow the advice: "Write it yourself. Edit it yourself, or with the help of a volunteer editor."
At the end of the day, a system that discourages genuine writers while still failing to fully stop determined AI posters isn’t serving anyone well. Protecting the site matters, but so does maintaining trust with the people who actually contribute to it. If human writing skill becomes a liability instead of an asset, that’s a sign the balance needs adjusting.
It does need adjusting, and more transparency would definitely be a bonus. However, I have not seen evidence of slop getting through. The AI detector is very aggressive (arguably too aggressive). It would be easy and understandable to look at published stories you don't like, or that aren't up to your standard, and find fault with Lit's AI Detector, but I haven't seen evidence of that.
 
Not sure that all are on one side or another.


Agree.


We don't know that.


If so, yes.


Unfortunately, yes. But I am not sure it is feasible for anything better to happen.


I think it affects any author here. But I am not sure that the long review delays are related to AI. If they are, then it can only be if a human is looking at the results, which contradicts your assertion that the process is fully automated.


What process do you propose?


I think that is the goal. So what do you propose?


I don't think it is fully automated.


The standard AI rejection message lists several tools. However, using such tools does not guarantee rejection.
Fair points, and I appreciate you engaging with this carefully. Let me clarify a few things, because some of what I’m describing is less about asserting how the system definitively works, and more about how it functions from the author’s side.


You’re right that we don’t know for certain whether the process is fully automated, partially automated, or how much human review is involved. That’s actually part of the issue I’m trying to get at. From the outside, the process is opaque enough that authors can’t tell which parts are automated, which parts are human, and what weight any AI signal is given. So when I say it’s “being treated as decisive proof,” I’m describing the outcome, not claiming knowledge of the internal workflow. If a flag results in long delays, rejection, or repeated resubmissions without explanation, it effectively functions as decisive from the author’s perspective.


Same with review delays. You’re absolutely right that long delays have existed before AI and affect everyone. I’m not claiming AI is the sole cause. The point is that when AI concerns are added on top of an already slow process, without additional communication, the uncertainty compounds. Whether delays are caused by human review, AI review, or staffing limitations, the end result for authors is the same: silence, ambiguity, and no clear path forward.


On the “perverse incentive” point, I agree with your “if so” qualifier. I can’t prove this is happening system-wide. What I can say is that multiple authors are independently reporting the same pattern: cleaner, more formal, or more traditionally edited writing being flagged while rougher work passes. Even if that’s anecdotal, it’s still a signal worth paying attention to, because perception matters when it affects behavior. If writers start believing that polish is a liability, that’s already a problem regardless of intent.


As for feasibility: I agree that perfect transparency or detailed feedback on every submission may not be realistic. I’m not arguing for line-by-line explanations. What I’m arguing for is some middle ground between total opacity and full disclosure: even something as simple as clearer categories (“structure,” “style consistency,” “editing artifacts,” etc.) or a basic indication of whether a flag was high-confidence or borderline. Right now, authors can’t even tell whether changing anything would help.
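
Purely as a sketch of what that middle ground could look like (every name and category below is hypothetical, invented for illustration; this is not anything Lit actually exposes), the feedback could be as small as a category plus a confidence band:

from dataclasses import dataclass
from enum import Enum

class FlagCategory(Enum):
    STRUCTURE = "structure"
    STYLE_CONSISTENCY = "style consistency"
    EDITING_ARTIFACTS = "editing artifacts"

class Confidence(Enum):
    BORDERLINE = "borderline"
    HIGH = "high-confidence"

@dataclass
class FlagFeedback:
    # The entire author-facing payload: it says where to look,
    # and reveals nothing about how the flag was computed.
    category: FlagCategory
    confidence: Confidence

print(FlagFeedback(FlagCategory.EDITING_ARTIFACTS, Confidence.BORDERLINE))

Even that two-field message would tell an author whether revision is worth attempting, without opening the black box.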


You asked what process I propose. Not a dramatic overhaul, mostly guardrails:

1. Treat AI signals as indicators rather than automatic stop signs.
2. Ensure that AI-flagged submissions receive at least one human decision before rejection.
3. Provide minimal but actionable feedback when AI concerns arise.
4. Consider an appeal or verification path for established authors with a posting history.

None of that requires revealing detection methods or guaranteeing acceptance. It just gives legitimate writers a way to respond in good faith.
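
To make points 1, 2, and 4 concrete, here is a minimal sketch of the routing I have in mind; the threshold, score scale, and names are all invented for illustration, since nobody outside the site knows the real workflow:

def route_submission(ai_score: float, established_author: bool) -> str:
    # Guardrail 1: the detector screens; no score rejects on its own.
    if ai_score < 0.2:  # illustrative threshold: low signal publishes normally
        return "publish"
    # Guardrail 4: a posting history earns a faster, appealable path.
    if established_author:
        return "human review (expedited, appeal available)"
    # Guardrail 2: every flag gets at least one human decision.
    return "human review"

The point of the sketch is simply that the detector's output changes who looks at a story, not whether it survives.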

Regarding allowed tools: yes, the rejection message lists several, and you’re correct that using them doesn’t guarantee rejection. But that still leaves authors in a gray zone where they’re told tools are allowed, yet can’t tell whether those tools contributed to a flag in their specific case. That uncertainty is what people are reacting to, not the existence of rules themselves.

So to be clear, I’m not accusing the platform of bad intentions or claiming to know exactly how the system works internally. I’m pointing out that from the author’s point of view, the combination of AI flags, lack of feedback, and long delays creates a process that feels arbitrary and discouraging, even if that’s not the intent. That said, I can also say I got 8 stories rejected just today, stories I have been working on for the past year... yay me.


I think we actually agree on the goal. Where I’m pushing is that improving communication and process clarity would help achieve that goal without weakening moderation. So until that happens, the rejections are just going to get worse over time, until even the bad writing is labeled as AI.
 
The result is that skilled, careful, or experienced writing can end up looking “suspicious,” while lower-effort or messier writing sometimes passes without issue. That’s a perverse incentive structure. A system where “write worse” becomes implicit advice is not a healthy outcome for a writing platform.
I disagree. I've never seen any AI-generated fiction writing approaching story length that comes close to "skilled, careful or experienced writing".

Because "skilled, careful or experienced" human writing has depth and an understanding of sounds and rhythms, figures of speech and character. It's about more than just knowing grammar and punctuation. That, if you'll forgive me, is the bare minimum that you should expect from someone with pretensions or ambitions to write fiction.
 
I disagree. I've never seen any AI-generated fiction writing approaching story length that comes close to "skilled, careful or experienced writing".

Because "skilled, careful or experienced" human writing has depth and an understanding of sounds and rhythms, figures of speech and character. It's about more than just knowing grammar and punctuation. That, if you'll forgive me, is the bare minimum that you should expect from someone with pretensions or ambitions to write fiction.
“If you start with certainties, you end with doubts. If you start with doubts, you end with certainties.” Francis Bacon
 
I disagree. I've never seen any AI-generated fiction writing approaching story length that comes close to "skilled, careful or experienced writing".

Because "skilled, careful or experienced" human writing has depth and an understanding of sounds and rhythms, figures of speech and character. It's about more than just knowing grammar and punctuation. That, if you'll forgive me, is the bare minimum that you should expect from someone with pretensions or ambitions to write fiction.
I don’t disagree with your description of what skilled human writing is. Depth, rhythm, voice, and character awareness matter far more than clean grammar alone. (y)


Where I disagree is the leap from “I haven’t personally seen AI writing at that level” to treating that as evidence that it doesn’t exist or isn’t relevant. People often don’t know when they’ve read AI-assisted or AI-generated material unless it’s obvious or disclosed.

More importantly, my point wasn’t that AI routinely produces great fiction. It was about detection outcomes. A system can generate false positives even if the AI output itself isn’t very good, because the criteria being flagged don’t necessarily map to human judgments of literary quality.

So when authors report being flagged while weaker work passes, that doesn’t mean AI is writing better; it means the detection signals don’t line up cleanly with craft. And the “write worse” idea isn’t advice, it’s a symptom of uncertainty caused by a lack of actionable feedback.

I agree with you on what good writing is. I just don’t think personal reading experience settles how detection systems behave.
 
Have no idea what you just said, but... Cool :unsure: :cool: 😁
“If you start with certainties, you end with doubts. If you start with doubts, you end with certainties.”

The progress of AI is exponential; its only current limitation is technical, but data centers are becoming increasingly powerful and will very soon surpass humans in some cases, except in their capacity for imagination, and God knows I have plenty of that.
 
You asked what process I propose. Not a dramatic overhaul, mostly guardrails:

1. Treat AI signals as indicators rather than automatic stop signs.
2. Ensure that AI-flagged submissions receive at least one human decision before rejection.
3. Provide minimal but actionable feedback when AI concerns arise.
4. Consider an appeal or verification path for established authors with a posting history.
All fair.

#1 and #2 should happen, but I suspect they already do, which is why rejection for perceived AI use is slow.

#3 sounds great, but I am not sure what form it might take. Perhaps 'your story scored xx% on the AI-sniffer'.

#4 there is an appeal path: message Laurel.

The question remains: What is it about your writing that triggers the AI filter?
 
The proposed debate remains constructive, but a question arises for non-English speakers: how do we solve the original problem, given that the reviewers, in my case for example, are not French speakers, and that the AI, with certain phrasings, treats them as false positives? I verified this with my own writing by checking it... in fact, I allowed myself to believe that my intelligence was human, and I discovered that part of me was artificial 😄
Unfortunately, I believe this site is done for anyone who translates their story into English with software.

It's unfair, but that's life.
 
Unfortunately, I believe this site is done for anyone who translates their story into English with software.

It's unfair, but that's life.
Actually, I don't translate. My text is in French... I let Google Translate translate it for me, for those who want to read it.
 
Actually, I don't translate. My text is in French... I let Google Translate translate it for me, for those who want to read it.
That's exactly the problem. Google's machine translation gets stories flagged as AI-generated.
 
“If you start with certainties, you end with doubts. If you start with doubts, you end with certainties.”

The progress of AI is exponential; its only current limitation is technical, but data centers are becoming increasingly powerful and will very soon surpass humans in some cases, except in their capacity for imagination, and God knows I have plenty of that.
Actually, the underlying technology for LLMs has already nearly reached its limits, merely asymptotically approaching a ceiling already in reach. It's fundamental to the technology. I can point you at a large number of highly qualified AI researchers who have independently reached this conclusion. Any technology has only so far it can be pushed. Despite what PT Barnum Sam Altman claims, his technology is there already. He has been told this by his own people. He's either self-delusional at this point, or merely trying to buy time to make sure someone else is left holding the bag while he retires with his billions. My money is on the latter. Everything now is just a Ponzi scheme, with today's suckers trying to convince tomorrow's suckers to get them off the hook.

Ask Ray Kurzweil about how his famous predictions about exponential technologies have gone. (Hint, he denies them all now, despite writing a whole book around them.)
 
Everything is asymptotic until the next breakthrough. Then exponential again.
What is also exponential is the shrinking of the lag between breakthroughs: it used to be 1000 years, then 100, then 10.
 
I cannot think of any company that does this. Feel free to correct me.
Well, you decide. Maybe "ponzi scheme" isn't perfectly the right way to describe this, but it isn't far off. There is just as much handwaving, obfuscation and mislabeling of revenue and reinvestment going on as there is in an actual Ponzi scheme. It's just as much of a shell game, just as dishonest, and just as much "value" is going to disappear from shareholder pockets and venture capital asset ledgers when it hits the fan and the bubble bursts.

But see for yourself what it is the companies are doing. Hank Green is not some uninformed conspiracy kook; this guy doesn't vlog about stuff he can't back up. And feel free to inform me if you find that "Ponzi scheme" seems to you like a bad-faith way to describe it.
 
It does generate false positives. I can't easily explain the nature of this, but they are possible. Even then, the false positives are (almost always) evidence of something else.
Of what?

What is it that's going unstated here?
 
Maybe "ponzi scheme" isn't perfectly the right way to describe this
Certainly, what happened during the first internet boom was a Ponzi scheme. People like Mark Cuban sold companies that had zero value (like broadcast.com) for five billion dollars. Yahoo was left holding the bag when they ran out of suckers, and eventually the suckers who invested in Yahoo lost all their money. If that's not a Ponzi scheme, I don't know what is.
 
What is it that's going unstated here?
Nothing. @AwkwardMD is trying to sound mysterious because he is making this bizarre claim of having some inside knowledge due to his having "reverse engineered" the LE AI detection system.

All nonsense.

All detection systems generate false positives, false negatives, true positives and true negatives. Your system can be either very sensitive (very good at detecting true positives) or very specific (very good at detecting true negatives).

NO AI detection system right now is any good. There is no such thing. Most AI detection systems claim sensitivity and specificity in the high 70s and low 80s, which is very poor. However, this is on unmodified AI text. Most people who are trying to pass off their stories as being written by humans would modify them slightly. Even one run through humanizing software will dramatically drop the true detection rate. Even a tiny bit of human editing beyond that and you are approaching 50/50 rates. In other words, no better than chance alone.
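
To put numbers on why high-70s/low-80s figures are so poor in practice, here is a quick base-rate calculation; the sensitivity and specificity are the claimed rates above, while the prevalence is my own assumption, since nobody publishes how many submissions are actually AI-generated:

# Illustrative only: detector rates as claimed above, prevalence assumed.
sensitivity = 0.80   # P(flagged | AI-written)
specificity = 0.80   # P(not flagged | human-written)
prevalence  = 0.05   # assume 1 in 20 submissions is AI-generated

true_pos  = sensitivity * prevalence
false_pos = (1 - specificity) * (1 - prevalence)
ppv = true_pos / (true_pos + false_pos)
print(f"P(actually AI | flagged) = {ppv:.2f}")  # about 0.17

Under those assumptions, roughly five out of six flagged stories are human-written, and if humanized text drops sensitivity toward 50%, the flagged pool skews even further toward innocent authors.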

Relying on such technology is irresponsible, absurd, and unethical.
 
Well, you decide. Maybe "ponzi scheme" isn't perfectly the right way to describe this, but it isn't far off. There is just as much handwaving, obfuscation and mislabeling of revenue and reinvestment going on as there is in an actual Ponzi scheme. It's just as much of a shell game, just as dishonest, and just as much "value" is going to disappear from shareholder pockets and venture capital asset ledgers when it hits the fan and the bubble bursts.

But see for yourself what it is the companies are doing. Hank Green is not some uninformed conspiracy kook; this guy doesn't vlog about stuff he can't back up. And feel free to inform me if you find that "Ponzi scheme" seems to you like a bad-faith way to describe it.
What Hank Green is describing is disturbing, and something that I was aware of.

But Ponzi schemes are built on nothing, whereas all of the companies mentioned exist and have some value: over-inflated, perhaps, but value nonetheless.
 
certainly, what happened during the first internet boom was a Ponzi scheme. People like Mark Cuban sold companies that had zero value (like broadcast.com) for five billion dollars. yahoo was left holding the bag when they ran out of suckers, and eventually the suckers who invested in yahoo lost all their money. if that's not a Ponzi scheme, I don't know what is.
Company X may not be worth as much as Y paid for it, but that is not a Ponzi scheme. It simply labels the buyers as fools.
 
What Hank Green is describing is disturbing, and something that I was aware of.

But Ponzi schemes are built on nothing. Whereas all of the companies mentioned exist and have some value, over-inflated perhaps, but they have value.
I guess. But there's an awful lot of "nothing" there too. And people don't understand how much.
 