Why is everything labeled as AI now????? I can't even post a story anymore.

I understand that you are frustrated, but that is a ridiculous proposition.

Effectively, you have just accused anyone who has had a story published of not writing thoughtfully, of editing carelessly, and of not having a recognisable voice. I hope that was not your intention.

In any case, the idea that a detector is more likely to falsely identify a thoughtful, well-edited story with a recognisable voice as AI-generated makes no sense when LLMs are built on published content.
All of this ☝️

I believe my stories have above-average care put into them. I get anxiety if I find one typo. I’ve had people say they’d recognize the author even if my name wasn’t at the top. And I write thoughtfully.

I object to being told all of the above isn’t real just because I’ve never been pinged for AI. Turning on other authors is not a smart move IMO.
 
I understand that you are frustrated, but that is a ridiculous proposition.

Effectively, you have just accused anyone who has had a story published of not writing thoughtfully, of editing carelessly, and of not having a recognisable voice. I hope that was not your intention.

In any case, the idea that a detector is more likely to falsely identify a thoughtful, well-edited story with a recognisable voice as AI-generated makes no sense when LLMs are built on published content.


That’s not what I’m saying at all, and I don’t think that interpretation follows from my argument.

I’m not accusing published authors of writing carelessly or without voice. I’m saying the opposite: that thoughtful, well-edited writing is more likely to resemble the kind of high-quality material LLMs were trained on. That doesn’t diminish human authorship; it highlights the limitation of detectors that rely on pattern similarity rather than provenance.

The issue isn’t logic, it’s scope. If LLMs are trained on published, edited, voice-driven human writing, then detectors that flag similarity to that corpus are necessarily at risk of false positives, especially for experienced writers. That doesn’t mean all such writing will be flagged, but it does mean some inevitably will be.

So the concern isn’t that “good writing equals AI,” but that current detectors can’t reliably tell the difference between human writing that influenced AI and AI output influenced by humans. Treating those tools as decisive rather than probabilistic is where the problem lies.

I’m not arguing against standards or moderation, only against overconfidence in systems that aren’t designed to make definitive authorship claims. :)
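To put a number on “probabilistic,” here’s a back-of-the-envelope Bayes calculation. Every figure in it is invented for illustration; nobody outside the site knows the real rates:

```python
# Toy Bayes calculation with invented numbers: even a detector that is
# "95% accurate" in both directions flags as many humans as bots when
# most submissions are actually human-written.
base_rate_ai = 0.05   # assume 5% of submissions are AI-generated
sensitivity = 0.95    # assumed P(flagged | AI)
specificity = 0.95    # assumed P(not flagged | human)

p_flag_and_ai = sensitivity * base_rate_ai                 # true positives
p_flag_and_human = (1 - specificity) * (1 - base_rate_ai)  # false positives

p_ai_given_flag = p_flag_and_ai / (p_flag_and_ai + p_flag_and_human)
print(f"P(actually AI | flagged) = {p_ai_given_flag:.2f}")  # ~0.50
```

Under those assumed numbers, half of all flagged stories are human-written. That’s what treating the tool as probabilistic rather than decisive means in practice.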
 
Yes, obviously; I never implied otherwise.

What I did try to imply, though, is that a certain degree of originality in one’s writing can help to avoid getting mislabeled by those admittedly faulty detectors.

Just bear in mind we don’t actually know whether this also matters for the algorithm Lit uses, as the rumors indicate the analysis they conduct is largely statistical and doesn’t utilize LLMs. (Yes, I know LLMs are themselves based on adjacency statistics; that’s obviously not the sense in which I’m using the term here.)
Fair enough, and I agree that originality can reduce the likelihood of being flagged. I don’t think we’re actually that far apart there (y)

Where I remain skeptical is in how practical that advice is as a remedy. “Be more original” is difficult to operationalize, especially for writers who already have an established voice and aren’t consciously writing to avoid detectors. It also risks implying, however unintentionally, that writers should adjust their prose to satisfy an opaque algorithm rather than to serve the story.

You’re also right that we don’t know the specifics of Lit’s system. If it’s largely statistical rather than LLM-based, that uncertainty cuts both ways: without transparency, authors can’t know whether originality, structure, syntax, or even revision tools are helping or hurting them. That lack of clarity is really what fuels the frustration.


So yes, originality may help, but without knowing what the system actually measures, it’s hard to treat that as anything more than a general heuristic rather than a reliable solution.
That’s ultimately the concern I’m trying to articulate: not denial of the problem’s complexity, but the difficulty of navigating it responsibly when the rules of the game aren’t visible.
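To make “what the system actually measures” concrete: if the rumors about a purely statistical (non-LLM) checker are right, it could be built on surface features as simple as these. This is a toy sketch of the kind of thing such a tool might compute, not anything Lit has confirmed:

```python
import re
from statistics import mean, pstdev

def toy_style_features(text: str) -> dict:
    """Surface statistics a non-LLM detector *might* use.
    Pure guesswork, not Literotica's actual criteria."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "avg_sentence_len": mean(lengths) if lengths else 0.0,
        # low variance ("burstiness") is often claimed to look machine-like
        "sentence_len_spread": pstdev(lengths) if len(lengths) > 1 else 0.0,
        # vocabulary variety: unique words / total words
        "type_token_ratio": len(set(words)) / len(words) if words else 0.0,
    }

print(toy_style_features("She ran. The night swallowed her, and the city did not care."))
```

Note that none of those numbers says anything about provenance; they only describe the text’s shape, which is exactly why “be more original” is a heuristic rather than a guarantee.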
 
I agree it’s easy to blame the system, but it’s also reasonable to question it when authors with long, clean histories suddenly encounter rejections without changing their process.
This is the meat of the point I was trying to make.

If the only factor were that the system changed, it would stand to reason that far more rejections would be occurring than we are hearing about.

I know that if I did receive an AI rejection after all these years of publishing here, I certainly wouldn't jump immediately to the conclusion that the system was the problem. I would dig hard and deep into what I submitted to identify if it was something that I did differently, regardless of how subtle it might have been.

But that's just me, I guess. I look to control the things that I can control myself, and not what I can't. I can control my writing and what I submit. I can't control the system that reviews it.
 
All of this ☝️

I believe my stories have above-average care put into them. I get anxiety if I find one typo. I’ve had people say they’d recognize the author even if my name wasn’t at the top. And I write thoughtfully.

I object to being told all of the above isn’t real just because I’ve never been pinged for AI. Turning on other authors is not a smart move IMO.
What’s painful for me (and others, I assume :unsure:) is exactly what this process has pushed me into doing. I just submitted a Valentine’s Day contest entry and deliberately introduced mistakes, not because that improves the story, but because I’m trying to avoid an opaque system mislabeling my work. That’s not healthy for writers, and it’s not good for the platform either.

I’m not asking for special treatment or claiming the system never works. I’m saying that when careful, voice-driven writing creates anxiety rather than confidence, and when authors feel compelled to degrade their own work just to pass a filter, that’s a signal something isn’t quite right.

This isn’t about dismissing other authors’ success. It’s about acknowledging that a system can function most of the time and still cause real harm in edge cases, and that those cases deserve discussion without being framed as accusations against fellow writers. So yeah, I don’t know what else to say... :)
 
This is the meat of the point I was trying to make.

If the only factor were that the system changed, it would stand to reason that far more rejections would be occurring than we are hearing about.

I know that if I did receive an AI rejection after all these years of publishing here, I certainly wouldn't jump immediately to the conclusion that the system was the problem. I would dig hard and deep into what I submitted to identify if it was something that I did differently, regardless of how subtle it might have been.

But that's just me, I guess. I look to control the things that I can control myself, and not what I can't. I can control my writing and what I submit. I can't control the system that reviews it.
I respect that approach, genuinely, and in many cases looking inward first is the right instinct. Most of us already do that by default, especially writers who’ve been publishing here for years.

Where I think the disconnect happens is that some of us have dug hard and deep, repeatedly, and still can’t identify a meaningful change in process that explains the outcome. When the same tools, habits, and level of care suddenly start producing different results, self-inspection stops being a useful diagnostic after a point.

You’re right that if the system alone were the issue, we’d expect a flood of rejections. But uneven failure is exactly what you’d expect from a probabilistic system with thresholds: some people never hit them, some hit them once, and some get stuck repeatedly, even with minimal differences in input.

I also agree that we should focus on what we can control. The problem is that right now the only “controllable” adjustment seems to be guesswork: changing style, structure, or even intentionally lowering quality, without knowing whether those changes are actually relevant. That’s a frustrating place to be, especially for writers who value consistency and craft.

So I don’t think this is about refusing personal responsibility. It’s about reaching the limits of what self-correction can do when the criteria being evaluated aren’t visible. At that point, questioning the system isn’t deflection; it’s the only remaining variable left to examine :unsure:
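If it helps, “uneven failure” is easy to demonstrate. A minimal simulation, with every parameter invented: each submission gets a noisy score, and anything above a fixed threshold is rejected:

```python
import random

random.seed(1)
THRESHOLD = 0.80  # invented rejection cutoff

def flagged_out_of(baseline: float, n: int = 20) -> int:
    """Authors differ slightly in baseline 'AI-likeness' (style),
    not in effort; per-story noise does the rest."""
    scores = (random.gauss(baseline, 0.08) for _ in range(n))
    return sum(s > THRESHOLD for s in scores)

for baseline in (0.55, 0.70, 0.78):
    print(f"author baseline {baseline:.2f}: {flagged_out_of(baseline)}/20 stories flagged")
# Typical run: the 0.55 author is never flagged, the 0.70 author once or
# twice, the 0.78 author again and again -- with no one changing process.
```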
 
I just submitted a Valentine’s Day contest entry and deliberately introduced mistakes, not because that improves the story, but because I’m trying to avoid an opaque system mislabeling my work. That’s not healthy for writers, and it’s not good for the platform either.
It’s not a solution either. I think it’s a mistaken approach.
 
I respect that approach, genuinely, and in many cases looking inward first is the right instinct. Most of us already do that by default, especially writers who’ve been publishing here for years.

Where I think the disconnect happens is that some of us have dug hard and deep, repeatedly, and still can’t identify a meaningful change in process that explains the outcome. When the same tools, habits, and level of care suddenly start producing different results, self-inspection stops being a useful diagnostic after a point.

You’re right that if the system alone were the issue, we’d expect a flood of rejections. But uneven failure is exactly what you’d expect from a probabilistic system with thresholds: some people never hit them, some hit them once, and some get stuck repeatedly, even with minimal differences in input.

I also agree that we should focus on what we can control. The problem is that right now the only “controllable” adjustment seems to be guesswork: changing style, structure, or even intentionally lowering quality, without knowing whether those changes are actually relevant. That’s a frustrating place to be, especially for writers who value consistency and craft.

So I don’t think this is about refusing personal responsibility. It’s about reaching the limits of what self-correction can do when the criteria being evaluated aren’t visible. At that point, questioning the system isn’t deflection; it’s the only remaining variable left to examine :unsure:
Some have scoffed at this suggestion, but hear me out...

I have been publishing here since 2014, and admittedly write longer stories than most here. Consequently, I have always found it most convenient for me to upload a MS Word file rather than paste the content into the submission field.

As you probably know, MS Word files contain metadata that tracks several things in the document: the time it was created and by whom, any edits or reviews made, text pasted into it, and so on. It provides a means to establish authenticity that pasting text into the submission field doesn't.

This could be a double-edged sword. If an uploaded file contained metadata that hinted at pasted content, as an example, it could trigger an AI suspicion. In that case, pasting into the submission field might provide a work-around. Conversely, if the metadata could help establish the authenticity of a document when uploaded, that might get something published that the system had previously rejected.

I mention this in case you want to take a shot at altering your submission method to see if it has any impact with your story.
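For anyone curious what that metadata actually looks like: a .docx is just a zip archive, and the core properties live in docProps/core.xml. This snippet only shows what the file carries; whether Lit reads any of it is pure speculation, and the filename is made up:

```python
import zipfile
import xml.etree.ElementTree as ET

NS = {
    "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties",
    "dc": "http://purl.org/dc/elements/1.1/",
    "dcterms": "http://purl.org/dc/terms/",
}

# "my_story.docx" is a hypothetical filename; substitute your own document.
with zipfile.ZipFile("my_story.docx") as zf:
    core = ET.fromstring(zf.read("docProps/core.xml"))

for tag in ("dc:creator", "cp:lastModifiedBy",
            "dcterms:created", "dcterms:modified", "cp:revision"):
    el = core.find(tag, NS)
    print(tag, "=", None if el is None else el.text)
# Note: tracked changes live elsewhere in the archive (word/document.xml);
# core.xml holds author, timestamps, and a save/revision counter.
```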
 
Some have scoffed at this suggestion, but hear me out...

I have been publishing here since 2014, and admittedly write longer stories than most here. Consequently, I have always found it most convenient for me to upload a MS Word file rather than paste the content into the submission field.

As you probably know, MS Word files contain metadata that tracks several things in the document: the time it was created and by whom, any edits or reviews made, text pasted into it, and so on. It provides a means to establish authenticity that pasting text into the submission field doesn't.

This could be a double-edged sword. If an uploaded file contained metadata that hinted at pasted content, as an example, it could trigger an AI suspicion. In that case, pasting into the submission field might provide a work-around. Conversely, if the metadata could help establish the authenticity of a document when uploaded, that might get something published that the system had previously rejected.

I mention this in case you want to take a shot at altering your submission method to see if it has any impact with your story.
I mean, worth a try.
Thanks :unsure: :) (y)
 
I’m saying the opposite: that thoughtful, well-edited writing is more likely to resemble the kind of high-quality material LLMs were trained on.

OK, but a) I think it is optimistic to assume that LLMs have only been trained on high-quality material, and b) you are still implying that those of us who have successfully published on Lit have only got past the AI-detectors because our stories are not thoughtful or well-edited. That may be true for me, but there are better writers here than I am, whom you just insulted … again.

Obviously, there is something about your writing style that triggers an AI rejection, and your explanation amounts to "Because I write too well". If you are that good, get a publisher, or self-publish, and find a market away from the lousy product here.

Good luck.
 
OK, but a) I think it is optimistic to assume that LLMs have only been trained on high-quality material, and b) you are still implying that those of us who have successfully published on Lit have only got past the AI-detectors because our stories are not thoughtful or well-edited. That may be true for me, but there are better writers here than I am, whom you just insulted … again.

Obviously, there is something about your writing style that triggers an AI rejection, and your explanation amounts to "Because I write too well". If you are that good, get a publisher, or self-publish, and find a market away from the lousy product here.

Good luck.
And LLMs essentially find the most likely next element. Good writing is original and unique; bad writing is repetitive and derivative. The essence of the mechanism draws it toward mediocre writing, effectively writing to the lowest common denominator. Which is why AI-written text reads like slop.
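A toy illustration of that mechanism, with invented probabilities: greedy “most likely next element” selection always lands on the stock phrase, never the distinctive one:

```python
# Invented next-token probabilities after the prompt "her heart was".
# Greedy decoding picks the argmax, so common phrasing wins every time.
# (Real models sample rather than always going greedy, but the pull
# toward high-probability phrasing remains.)
next_token_probs = {
    "racing": 0.41,            # stock phrase
    "pounding": 0.33,          # stock phrase
    "stuttering": 0.04,
    "a trapped moth": 0.02,    # distinctive, so effectively never chosen
}

def greedy_pick(probs: dict) -> str:
    return max(probs, key=probs.get)

print("her heart was", greedy_pick(next_token_probs))  # -> racing
```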
 
Obviously, there is something about your writing style that triggers an AI rejection, and your explanation amounts to "Because I write too well". If you are that good, get a publisher, or self-publish, and find a market away from the lousy product here.
The issue that I think the OP has, and which has been expressed by many others, is that their writing style hasn't changed from what they have previously published, yet they are now seeing AI-based rejections.

I know that after publishing here for over a decade, if I suddenly had a story rejected for suspected AI content, I would be scratching my head too. The same for even newer writers who have two or three chapters of a story approved and then see a rejection of the third.

Differences in the writing could be a trigger, but how significant would those have to be? As I said in a previous post, I would take the initial step of diving into my own work before blaming the system or anything beyond my own control, but once that was done, what are the options?
 
AI scrapes what it can from the internet. It isn't trained on high-quality writing; it's trained by reading everything without bias and using it all. There is far more shit writing on the internet than good writing, so it uses more shit than good prose. Also, AI doesn't write with any logical understanding of emotions. It doesn't follow the outline well; it often writes beyond the scope of the outline, then writes the next section without referencing the previous one, as if the previous one doesn't exist. It produces fucked-up messes.


The issue isn’t logic, it’s scope. If LLMs are trained on published, edited, voice-driven human writing, then detectors that flag similarity to that corpus are necessarily at risk of false positives, especially for experienced writers. That doesn’t mean all such writing will be flagged, but it does mean some inevitably will be.

So the concern isn’t that “good writing equals AI,” but that current detectors can’t reliably tell the difference between human writing that influenced AI and AI output influenced by humans. Treating those tools as decisive rather than probabilistic is where the problem lies.

I’m not arguing against standards or moderation, only against overconfidence in systems that aren’t designed to make definitive authorship claims. :)
 
OK, but a) I think it is optimistic to assume that LLMs have only been trained on high-quality material, and b) you are still implying that those of us who have successfully published on Lit have only got past the AI-detectors because our stories are not thoughtful or well-edited. That may be true for me, but there are better writers here than I am, whom you just insulted … again.

Obviously, there is something about your writing style that triggers an AI rejection, and your explanation amounts to "Because I write too well". If you are that good, get a publisher, or self-publish, and find a market away from the lousy product here.

Good luck.
I think this conversation has drifted into a position I’ve never actually taken, so let me reset it clearly.

I have not said and do not believe that writers who publish successfully here do so because their work is unthoughtful, poorly edited, or lacking voice. I’ve explicitly rejected that framing multiple times. If that implication is still being read into my comments, then I’ve failed to communicate clearly, but it is not the argument I’m making.


Nor am I claiming “I write too well.” What I am saying is that AI detectors are imperfect statistical tools, and imperfect tools will sometimes produce uneven results that cannot always be explained by author effort, care, or intent. That does not diminish writers who pass them, and it does not elevate writers who don’t.

As for LLM training data: I agree completely that it includes plenty of low- and mid-quality material. That only reinforces my concern, not weakens it, because it means similarity signals become even noisier, not more reliable.

Finally, the suggestion that the appropriate response to a flawed moderation tool is “leave the platform” feels unnecessary. I’m here because I value the community, and because I believe it’s reasonable to discuss systemic issues without being told that disagreement equals arrogance or insult.


I wish you well too, but I don’t think this needs to be adversarial. We’re talking about tools, not talent. :unsure: (y)
 
Some have scoffed at this suggestion, but hear me out...

I have been publishing here since 2014, and admittedly write longer stories than most here. Consequently, I have always found it most convenient for me to upload a MS Word file rather than paste the content into the submission field.

As you probably know, MS Word files contain metadata that tracks several things in the document: the time it was created and by whom, any edits or reviews made, text pasted into it, and so on. It provides a means to establish authenticity that pasting text into the submission field doesn't.

This could be a double-edged sword. If an uploaded file contained metadata that hinted at pasted content, as an example, it could trigger an AI suspicion. In that case, pasting into the submission field might provide a work-around. Conversely, if the metadata could help establish the authenticity of a document when uploaded, that might get something published that the system had previously rejected.

I mention this in case you want to take a shot at altering your submission method to see if it has any impact with your story.
I use software designed for writing novels (yWriter), so everything is exported into a new Word file at once when I'm ready to run a grammar check. I suspect that would make my Word document more suspicious than pasting it into the submission box like I do now.
 
I'm not attacking you, and I'm sorry if you got that impression. I only stated what I've seen from AI, because I edit and rework for a writer who uses it for the first draft.
I think this conversation has drifted into a position I’ve never actually taken, so let me reset it clearly.

I have not said and do not believe that writers who publish successfully here do so because their work is unthoughtful, poorly edited, or lacking voice. I’ve explicitly rejected that framing multiple times. If that implication is still being read into my comments, then I’ve failed to communicate clearly, but it is not the argument I’m making.


Nor am I claiming “I write too well.” What I am saying is that AI detectors are imperfect statistical tools, and imperfect tools will sometimes produce uneven results that cannot always be explained by author effort, care, or intent. That does not diminish writers who pass them, and it does not elevate writers who don’t.

As for LLM training data: I agree completely that it includes plenty of low- and mid-quality material. That only reinforces my concern, not weakens it, because it means similarity signals become even noisier, not more reliable.

Finally, the suggestion that the appropriate response to a flawed moderation tool is “leave the platform” feels unnecessary. I’m here because I value the community, and because I believe it’s reasonable to discuss systemic issues without being told that disagreement equals arrogance or insult.


I wish you well too, but I don’t think this needs to be adversarial. We’re talking about tools, not talent. :unsure: (y)
 
I have not said and do not believe that writers who publish successfully here do so because their work is unthoughtful, poorly edited, or lacking voice.
Except your OP included the words:
This issue isn’t just about me. It affects anyone who writes thoughtfully, edits carefully, or develops a recognizable voice.
Which clearly implies that at least some of the work published here is unthoughtful, poorly edited, or lacking in voice. That may not have been your intention, but it is a reasonable conclusion.

What I am saying is that AI detectors are imperfect statistical tools, and imperfect tools will sometimes produce uneven results that cannot always be explained by author effort, care, or intent.
Well, then say that.

As for LLM training data: I agree completely that it includes plenty of low- and mid-quality material.
Something we can agree on.

That only reinforces my concern, not weakens it, because it means similarity signals become even noisier, not more reliable.
Eh? So why have you limited your comments to writers who write thoughtfully, edit carefully, or develop a recognisable voice?

Finally, the suggestion that the appropriate response to a flawed moderation tool is “leave the platform” feels unnecessary.
Your OP again:
I’m starting to lose hope about posting on Literotica at all, and that hurts more than I expected.
If you cannot post here, then leaving would be the logical response.

I realise that I'm coming across as unsympathetic. That is not my intent, but I don't think I have drawn unreasonable conclusions from what you have written.

To be constructive, I repeat my statement: "Obviously, there is something about your writing style that triggers an AI rejection."

The alternative is that your stories are stuck in the approval black hole for whatever reason that has trapped many others. Have you tried resubmitting? That helped a couple of mine escape.
 