Why is everything labeled as AI now????? I can't even post a story anymore.

I took grammar in two different languages. It was a fairly useless field in both cases. I have zero training in English grammar. I'm glad you find it useful.


No, you were not foolish. I did count myself among the 5%.
Yeah, your writing makes it clear that you failed to learn. That doesn't make it useless in general, just in your specific case.

Why did you count yourself among the 5%? You've made it quite clear that you're proud of your poor writing quality, so you should be proud of the fact that you're in the 95% instead.

At least @AwkwardMD admitted the obvious. And he is far less petulant than @IWroteThis. Overall, among the insufferable people that populate this forum, I would give @AwkwardMD top marks.
Sorry, but explaining where you're wrong does not make one petulant. On the other hand, your stompy-foot dance in response to having your delusions shattered by reality probably does qualify.
 
I want to reset this discussion because I don’t think my actual questions were answered.

I understand the obvious points already:
AI detectors aren’t perfect
False positives exist
Moderation is overwhelmed

What I’m trying to understand is what is happening at the system and workflow level, not just that it happens.

So my questions are very specific:
1. What concrete textual signals is Literotica likely using to flag submissions as AI (sentence uniformity, vocabulary density, paragraph rhythm, editing polish, etc.)?
2. Are human-edited drafts more likely to be flagged than rougher first-pass writing?
3. Does repeated resubmission of rejected stories increase scrutiny or probability of future flags?
4. Is there any known difference in treatment between long-time accounts and newer ones once an AI flag appears?
5. Most importantly: what practical changes have actually worked for writers who were previously flagged and are now posting successfully again?

I’m not asking whether AI detection is “fair” or whether moderation is hard. I’m asking how to adapt to the current system as it actually exists, because right now genuine human writing is being blocked with no actionable feedback.

If the answer is simply “there is nothing you can do,” I’d rather hear that directly than keep guessing.
 
I want to reset this discussion because I don’t think my actual questions were answered.

I understand the obvious points already:
AI detectors aren’t perfect
False positives exist
Moderation is overwhelmed

What I’m trying to understand is what is happening at the system and workflow level, not just that it happens.

So my questions are very specific:
1. What concrete textual signals is Literotica likely using to flag submissions as AI (sentence uniformity, vocabulary density, paragraph rhythm, editing polish, etc.)?
2. Are human-edited drafts more likely to be flagged than rougher first-pass writing?
3. Does repeated resubmission of rejected stories increase scrutiny or probability of future flags?
4. Is there any known difference in treatment between long-time accounts and newer ones once an AI flag appears?
5. Most importantly: what practical changes have actually worked for writers who were previously flagged and are now posting successfully again?

I’m not asking whether AI detection is “fair” or whether moderation is hard. I’m asking how to adapt to the current system as it actually exists, because right now genuine human writing is being blocked with no actionable feedback.

If the answer is simply “there is nothing you can do,” I’d rather hear that directly than keep guessing.
1. Each AI detection system is proprietary, and it would be trivial to adjust an AI model or edit an AI-generated text to bypass any set of known algorithms, so Literotica will NEVER divulge the secret sauce of their AI detection algorithm. That said, anecdotally I've noticed that many of the users who believe they're getting false positives have been using Grammarly or Microsoft Office 365, and both of those programs have recently been "updated" with a bunch of AI slop, so it's possible that people are having their files filled up with AI-generated metadata that is triggering the detection algorithm.

2. I have not seen any evidence that adding or removing typos makes the AI detection algorithm more or less likely to go off.

3. I don't think so. But there is a "for the moderators" text box you can write in when you submit or resubmit. I know that sometimes the moderators read that and sometimes they don't, and I've had a few hilarious interactions where they've obviously read it but not read it very carefully.

4. Accounts that have a record of successful publishing are more likely to go through moderation quickly. Your piece can still get overlooked for a lot of reasons, and there's a bug where overflow on a busy day can cause stuff to be lost in the moderators' inbox indefinitely. But at least some of the moderation is done on a voluntary basis, and the volunteers grab the likeliest apples from the crate by preference.

5. Practically, some success has been had with just contacting Laurel or with resubmission with explanation in the "to the moderators" box. But also, some people have reported their stuff languishing in pending for a long time after taking such steps. So I dunno.
 
Hello, my text has been rejected again on the grounds that it was supposedly generated by an AI. So I'm now flagged as a fake profile and I have no recourse. I explained that I have to use a French spell checker, but they won't listen... I'm blocked and no dialogue is possible anymore. In any case, since I started using Word Office 2024, it seems some specific encoding is in place. And is Literotica's algorithm even capable of understanding the subtleties of French? What a disappointment... and what bitterness... What solution can you suggest?

Here is part of my rejected text, so at least you can get an idea of what a French text looks like:

This tale is the story of a broken young woman, Brie, whose body and mind were bruised by women who believed themselves Dominas. Inexpert hands, cruel under the guise of play, humiliated, harassed, and brutalized her, believing they embodied the power of a figure they admired without understanding her. They were only the distorted shadow of a true Mistress, one who knows how to guide without destroying, dominate without annihilating, possess without debasing.
In this chapter, the beginning of a rebirth.
We will see how a wounded soul, lost between shame and misunderstood desire, meets the one who will reach out her hand. Not to enslave her, but to rebuild her. Not to exploit her weaknesses, but to reveal her true nature to her, that part of herself she buried under the blows, the mocking laughter, and the stifled tears.
It is the story of a fall, then of a first step. Of a young woman who, after being trampled by false queens, will at last meet the only one capable of giving her back her crown.
The question is not "Why submit?" but "Why her?"
And the answer Brie will find in the gaze of the Mistress, the one who will ask her to kneel not to break her, but to raise her up again.
10:00 a.m.
The air was heavy, thick with that humidity typical of the South, the kind that clings to skin and to memories. The century-old oaks of Jackson Avenue bowed under the weight of their branches, their golden leaves crunching under Brie's hurried steps. She had walked fast, too fast, from the bus stop on the square, her portfolio clutched against her chest like a shield. Around her, the colonial houses with their white columns and wrought-iron balconies seemed to keep silent watch over the secrets of this university town where time stretched out between lectures and the feverish parties of the fraternities. The University of Mississippi, Ole Miss, was barely awake this Sunday morning. The campus lawns, still wet with dew, gleamed under a pale sun filtered through the low clouds that promised an afternoon storm.
Brie, twenty, a student of art history and applied arts, still bore the traces of her sleepless night: purplish circles under her eyes, fingers stained with charcoal and linseed oil, and that stubborn smell of raw sienna and white spirit clinging to her clothes. She had spent the last few hours reworking her sketches, obsessed by the words of Miss Queen, her applied-arts professor. One day, Miss Queen had laid a hand on her shoulder. "Brie... These drawings are powerful. Too powerful for a mere art history student."
Brie had blushed, fearing a reprimand. But Miss Queen had smiled, understanding. "You have talent. Real talent. And I know someone who could help you channel it."
"Your line is raw, Brie. Too raw. It needs a frame. Carole will know how to give you one." Those words had haunted her all week, mixed with the echoes of her nightmares: images of Alexia, of her mocking laughter, of her hands that had forced her, humiliated her, under the pretext of a stupid bet lost at a drunken party. Since then, drawing classes had become her refuge, the only place where she still felt like herself.
Three days later, an email in her inbox: "Dear Brie, Miss Queen showed me your work. Your eye is rare. So is your hand. Come see me Friday at 5 p.m. We'll talk art. And perhaps other things. - Carole."
No address. Just a neighborhood: Jackson Avenue. And an implicit promise: "Here, you will be safe."
She had put on a grey wool skirt, too big at the waist, inherited from her older sister, and an oversized sweater stolen from the closet of her father, a literature professor who conceded nothing to her "artistic whims." "You're wasting your potential, Brie," he would repeat whenever he saw her abstract canvases, her studies of emaciated bodies, her negative self-portraits in which she depicted herself as one shadow among shadows. But Miss Queen had seen something. And Miss Queen was never wrong.
Brie walked on, lost among the opulent residences, the gilded gates, the immaculate gardens. "Where...?" Her phone buzzed. A message, no sender: "Turn left after the park. Red door. Ring." She was being watched!
She obeyed, her heart pounding.
Carole's house stood at the end of an avenue of magnolias, an antebellum home with red brick walls, dark green shutters, and a veranda wreathed in dried wisteria. A discreet copper plaque read simply: "Atelier Carole." No number, no name. Just that reference to Mistress Carole, a woman who, it was whispered, had trained the greatest talents in the region and broken the rest. Brie paused a moment when she saw the massive oak door with its knocker in the shape of a coiling serpent, a symbol she knew well from having seen it on Miss Queen's pendant. She pressed the bell. The chime rang out, long and crystalline, half warning and half invitation; an old carillon whose deep notes seemed to rise from the depths of time. Brie stepped back, suddenly conscious of her appearance: her auburn hair, usually sleek, tousled by the morning wind; her cheeks flushed from the walk; her lips chapped with stress. She clutched her portfolio, worn at the corners, stamped with the gilded monogram of Sainte-Clotilde, her old art school. Inside it, years of doubt, anger, and unconfessed desires lay sleeping, buried under layers of gouache and dried blood.

The door opened.

It was not Carole who appeared, but a woman. Naked.
Brie stepped back, the breath knocked out of her. The woman, Kate, as she would learn later, stood there, upright, serene, as if her nudity were the most natural thing in the world. Her body was sculpted, marked by years of consensual submission: pale scars striped her hips and breasts, traces of ropes or blades, worn with a warrior's pride. A black leather collar circled her neck, adorned with a silver pendant of two entwined women, the same symbol Miss Queen wore at her ear. Bracelets, gifts, Brie guessed, jingled at her wrists with a metallic, sensual sound that echoed in the silence of the entryway.

"Hello," Kate said in a voice soft but firm, tinged with the authority that belongs only to those who know exactly who they are. Her gaze, glacier blue, almost inhuman, slid first over the portfolio, then over Brie's stained fingers, before rising to her eyes. A smile, at once kindly and predatory, stretched her lips. "You must be Miss Queen's new recommendation," she murmured, as if she already knew the answer.

Behind her, the entrance opened onto a dark hall lit by candles whose wax ran down wrought-iron chandeliers. A smell of leather, beeswax, and black rose tea floated in the air, mixed with something more animal: sweat, desire, fear. Brie felt her heart race.

"I..." She swallowed, her throat suddenly dry. "I thought I had the wrong address."

Kate burst into a deep, carnal laugh that made the windows vibrate.
"No, my dear," she murmured, stepping aside to let her in. "You are exactly where you are meant to be."

Behind her, in the half-light, a silhouette moved. Carole. Brie did not see her yet, but she felt her, the way one feels a storm before it breaks.
"Come in," Kate ordered, indicating the threshold with a tilt of her chin. "She is expecting you."
"But leave your portfolio here." She pointed to a mahogany console already crowded with framed sketches: studies of bound bodies, sketches of masks, signatures Brie knew all too well: Ashley, Emmanuelle, and even the trembling one of Lucie.
The young woman hesitated, her fingers tightening on the portfolio.
"Is this... an art school?" she asked, eyes wide before the framed engravings on the walls: entwined silhouettes, plays of shadow and rope.
 
I want to reset this discussion because I don’t think my actual questions were answered.

I understand the obvious points already:
AI detectors aren’t perfect
False positives exist
Moderation is overwhelmed

What I’m trying to understand is what is happening at the system and workflow level, not just that it happens.

So my questions are very specific:
1. What concrete textual signals is Literotica likely using to flag submissions as AI (sentence uniformity, vocabulary density, paragraph rhythm, editing polish, etc.)?
2. Are human-edited drafts more likely to be flagged than rougher first-pass writing?
3. Does repeated resubmission of rejected stories increase scrutiny or probability of future flags?
4. Is there any known difference in treatment between long-time accounts and newer ones once an AI flag appears?
5. Most importantly: what practical changes have actually worked for writers who were previously flagged and are now posting successfully again?

I’m not asking whether AI detection is “fair” or whether moderation is hard. I’m asking how to adapt to the current system as it actually exists, because right now genuine human writing is being blocked with no actionable feedback.

If the answer is simply “there is nothing you can do,” I’d rather hear that directly than keep guessing.
Thanks for attempting to bring the train back onto the tracks.

1. I find it hard to conceptualize the admins of Literotica developing an AI detection process vastly different from the most popular ones currently on the Internet. Why would they reinvent the wheel? Working from that bit of logic, we can examine how these "off-the-shelf" detectors identify AI-generated text. Here is a link to one explanation of what they look for. Again, I can't imagine the process here being that different, and if the LLM is weighted more heavily toward non-fiction writing than fiction, more false positives could be the result.
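For a concrete sense of the kind of surface signal those off-the-shelf detectors publicly describe, here is a toy Python sketch of "burstiness", i.e. variation in sentence length, one commonly cited measure (human prose tends to vary more; LLM output is often more uniform). This illustrates the published idea only; it is not Literotica's check, and real detectors combine many signals:

```python
import re
import statistics

def burstiness(text):
    """Crude illustration of one signal detectors reportedly use:
    the spread of sentence lengths relative to their mean.
    Returns 0.0 when there are fewer than two sentences."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)
```

On a measure like this, three six-word sentences score 0.0 (perfectly uniform), while a mix of one-word and fourteen-word sentences scores well above zero, which is the intuition behind the "false positives on clean, consistent prose" complaint.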
2. I think a good test of this would be for someone to submit an edited version of an existing story, originally published in a draft format, that significantly improves on the writing and see if it gets approved or rejected.
3. There are too many variables to determine this, including how the resubmission was presented, what notes to the admins were included, and how much they pissed off the admins.
4. I have seen nothing to indicate this. Established writers here are more likely to know the guidelines better, and how best to communicate with Laurel when an issue arises, so these could factor into things.
5. I can only speak to the one proven success that I am aware of: a writer recently reported back to me that my suggestion worked, which was to combine the parts of a story already approved and published with the newer parts that got rejected. The combined work got approved. This would tend to indicate that the combined work brought the percentage of suspected AI for the larger submission below whatever threshold the AI process here uses for rejection.

We don't see many instances where the changes made, resulting in a successful submission, are reported back.
 
Lately, I feel like the joy of writing is being slowly drained out of me, and I know I can’t be the only one dealing with this.


Everything is being labeled as “AI-written” now. Everything. I currently have more than a dozen stories that have been stuck in review for over seven months. Every single time I check back, there’s a new issue, a new delay, or another vague reason tied to AI detection even though I wrote every word myself.


That’s the most frustrating part: I did write them myself.


I outline my ideas. I draft them. I revise them. I obsess over phrasing, pacing, and tone. These stories didn't come from a machine; they came from my time, my effort, and my creativity. And yet, none of that seems to matter anymore because some automated system or policy decides they "sound like AI."
What makes this even more discouraging is that I've tried doing everything "right." I've used editors to help clean up grammar, improve clarity, and fix small mistakes, the same way writers have always done. Not to generate content, not to replace my voice, just to polish my work. And even then, the stories are still labeled as AI.


What’s worse is the emotional toll this takes. Writing is supposed to be something I love. I get excited when I start a new story. I enjoy watching it come together. Finishing a piece should feel rewarding. But now, instead of feeling proud when I complete a story, I feel dread. I already know what’s going to happen when I try to post it. I already expect it to be flagged, delayed, or dismissed.


That anticipation alone is killing my motivation.


It’s gotten to the point where I hesitate to even start new projects. Why pour hours or days into something when experience has taught me it’ll likely sit in limbo for months, labeled as something it’s not? The constant suspicion feels unfair and discouraging, especially for writers who have been doing this long before AI was even part of the conversation.


I understand the concern about actual AI-generated content. I understand why platforms want to protect originality. But right now, it feels like genuine writers are being punished simply for having a clean style, strong grammar, or consistent structure. Since when did writing well become evidence against us?


This issue isn’t just about me. It affects anyone who writes thoughtfully, edits carefully, or develops a recognizable voice. If this continues unchecked, it risks pushing away real writers, people who care deeply about storytelling and creativity.


I’m starting to lose hope about posting on Literotica at all, and that hurts more than I expected. I never thought I’d feel anxious or defeated just trying to share my own work.


I really want to hear from others:
Have you experienced this too, or is it just me bitching about it?
Have your stories been delayed or flagged as AI when they weren’t?
How are you dealing with the loss of trust and motivation that comes with it?


Because right now, it feels like writing itself is on trial, and that's something none of us should have to fight just to be heard. Loosssiiinggg the hobby I learned to love so much...
I've not had that happen to me. Not once. It helps when one's genre of choice for writing is humor. AI is exceedingly poor at attempting to write humor. Laughably (ironic term, that) poor. I do appreciate your subtle self-congratulatory tone when bemoaning the cloud of suspicion you have been toiling under. - "genuine [as opposed to ???] writers are being punished simply for having a clean style, strong grammar, or consistent structure" - really? You DO realize that AI has never, once, had a thought or an idea or exhibited anything that is actual intelligence. Still, the practice of self-congratulating gives me the "ick".

Do you know how AI actually processes the written word? It's nothing like you might imagine. I'm not asking as a challenge but it's something I've followed as my PhD studies coincided with the explosion of neural networks, circa 1998ish. I've not read any of your stuff (not saying as an insult, just factually) but are you sure your "clean" is not another's "sterile" and your "consistent" is not another's "formulaic"? Not stating, rather, asking.
 
So my questions are very specific:
1. What concrete textual signals is Literotica likely using to flag submissions as AI (sentence uniformity, vocabulary density, paragraph rhythm, editing polish, etc.)?
Cannot be discussed. It is a black box. It only works if it remains a black box.
2. Are human-edited drafts more likely to be flagged than rougher first-pass writing?
No.

EDIT: no more or less likely, because SPAG is not a factor.
3. Does repeated resubmission of rejected stories increase scrutiny or probability of future flags?
Laurel's workflow, and its finer details, are unknown. At best, I understand the mechanism of Literotica's AI detector, the shape of what it's doing, and not the practical, technical implementation. There are variables that I can only make best guesses about. I can certainly envision a "has this exact document been submitted before (compare hash); if more than 2, reject" type of step. I can envision a "number of rejections" column next to each author's name on a spreadsheet.

All guesses (and I am using the word Guess here to mark a difference from my capital-T Theory of how Lit's AI detector works).
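The "compare hash" guess can be made concrete in a few lines. To be clear, this is speculation rendered as Python, not anything known about Laurel's actual workflow; the names and the resubmission limit are invented:

```python
import hashlib
from collections import Counter

seen = Counter()  # digest -> number of times this exact text was submitted

def submission_check(text, max_resubmits=2):
    """Hypothetical early step in a moderation pipeline: hash each
    submission and auto-reject once the same byte-identical document
    has been seen more than max_resubmits times."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    seen[digest] += 1
    return "reject" if seen[digest] > max_resubmits else "queue"
```

Note the obvious limitation of such a step: it only catches byte-identical resubmissions. Changing a single character produces a different hash, which is why, if it exists, it could only ever be one cheap filter in front of the real detector.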
4. Is there any known difference in treatment between long-time accounts and newer ones once an AI flag appears?
Older accounts do not often experience rejections, but at least three have happened that I know of. One (MillieDynamite) was likely a false positive that could probably be overcome by resubmitting with a note, but she has not tried. One (SimonDoom) reached out to Laurel via PM, and Laurel said it was a mistake, as if the wrong button had been pushed, not a false positive. One (Kasumi_Lee) got a legitimate rejection for attempting to submit a Google Translate Spanish-language translation of a story she had previously submitted.

On the whole, the older accounts that populate the AH have been largely rejection-free, and it was this detail that sent me digging in the first place two years ago. GPTZero (and its peers) are functionally no better than a random number generator. We are not immune to rejections; we still get them if we fuck around.
5. Most importantly: what practical changes have actually worked for writers who were previously flagged and are now posting successfully again?
Resubmitting with a note.
Having a Volunteer Editor involved, someone who represents a known quantity as far as the site is concerned, attest to Laurel personally about the creation process. They can sign off on a reasonable timeline for creating a 20k-word story ("he worked on it for two months; I saw multiple versions of it as it progressed").
 
1. What concrete textual signals is Literotica likely using to flag submissions as AI (sentence uniformity, vocabulary density, paragraph rhythm, editing polish, etc.)?
I should also add that communally guessing about this is a bad idea. We all agree with the need for a detector that works (as regrettable as it is that the existing one is so aggressive), and guessing about how to beat it represents Penetration (giggity) testing.

Unless the site asks you to try and sneak AI writing past the detector, don't do this.
 
I may be in the minority here, but I have a partly completed series that I'm seriously considering pulling and publishing on another platform whose leadership communicates with its members, doesn't have such a convoluted and nebulous policy on AI and all this other stuff that people rightly complain about...and doesn't have a fucking politics board.


I just gotta find it. The other platform, that is. I know where the PB is.
I have a similar issue. I am not the most prolific or famous writer but I seem to have generated some fans. I write everything today the same way I wrote yesterday and am currently fighting two posts rejected for supposed AI.
 
I have a similar issue. I am not the most prolific or famous writer but I seem to have generated some fans. I write everything today the same way I wrote yesterday and am currently fighting two posts rejected for supposed AI.
Did your previous workflow involve Grammarly, or a similar kind of automated grammar and spelling check?
 
I should also add that communally guessing about this is a bad idea. We all agree with the need for a detector that works (as regrettable as it is that the existing one is so aggressive), and guessing about how to beat it represents Penetration (giggity) testing.
Not remotely a bad idea, other than probably being pointless. There is no current technology that works for real world AI detection. Whatever they are using is no different than flipping coins. They may have some whitelists too, just to make it less obvious how bad the system is, but the rest is just random noise.

What could be the harm in trying to guess and game whatever parameters are being used? People spend hours and days writing their stories. It's natural to want to find a work around such a pointless and faulty technology.
 
Not remotely a bad idea, other than probably being pointless. There is no current technology that works for real world AI detection. Whatever they are using is no different than flipping coins. They may have some whitelists too, just to make it less obvious how bad the system is, but the rest is just random noise.

What could be the harm in trying to guess and game whatever parameters are being used? People spend hours and days writing their stories. It's natural to want to find a work around such a pointless and faulty technology.
Your not understanding how it works, or how it could work, is not a prerequisite for it working.
 
I have a similar issue. I am not the most prolific or famous writer but I seem to have generated some fans. I write everything today the same way I wrote yesterday and am currently fighting two posts rejected for supposed AI.
From e-mail feedback that I have received:

Hi,

You provided a comment on my post about my chapter revisions being rejected for AI after a few months pending. You suggested it was due to the length of the chapters throwing the percentage off, so I had the site delete the chapters, combined them all together (21 Lit pages), and resubmitted. It was published the very next night after submitting.

Thanks so much for your suggestion. You were spot on and it was the perfect solution in my case.

MrC

If you have already successfully published parts of a story but are seeing later parts rejected, the same solution may work for you.

The theory is that the more words your story has, the smaller the percentage of the total word count that the AI-triggering content represents. We don't know what the threshold is on Literotica, so there are no guarantees.
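The dilution theory is just arithmetic. A sketch with a made-up threshold (the real one, if there even is a fixed one, is unknown):

```python
def passes_threshold(suspect_words, total_words, threshold=0.25):
    """Dilution theory: whatever span the detector scores as AI-like
    stays the same size while the submission grows, so its share of
    the total falls. The 0.25 threshold is invented for illustration."""
    return suspect_words / total_words < threshold

# A 2,000-word chapter with 800 "suspect" words scores 0.40: over threshold.
# The same 800 words inside a 20,000-word combined file score 0.04: under it.
```

This is consistent with the report above: nothing in the text changed; only the denominator did.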
 
If you have already successfully published parts of a story but are seeing later parts rejected, the same solution may work for you.
Thank you @BobbyBrandt, I might try this once the whole story is finished: just publish it as a single chunk.
I refuse to edit my story to try to game some fake AI detector, but merging it all into a single behemoth story is fine. Ugly, but fine.
 
And if smashing a multi-chapter story into a single submission works on a regular basis, then whatever they're using needs to be throttled based on wordcount to reduce the false positives.
 
I want to reset this discussion because I don’t think my actual questions were answered.

I understand the obvious points already:
AI detectors aren’t perfect
False positives exist
Moderation is overwhelmed

What I’m trying to understand is what is happening at the system and workflow level, not just that it happens.

So my questions are very specific:
1. What concrete textual signals is Literotica likely using to flag submissions as AI (sentence uniformity, vocabulary density, paragraph rhythm, editing polish, etc.)?
2. Are human-edited drafts more likely to be flagged than rougher first-pass writing?
3. Does repeated resubmission of rejected stories increase scrutiny or probability of future flags?
4. Is there any known difference in treatment between long-time accounts and newer ones once an AI flag appears?
5. Most importantly: what practical changes have actually worked for writers who were previously flagged and are now posting successfully again?

I’m not asking whether AI detection is “fair” or whether moderation is hard. I’m asking how to adapt to the current system as it actually exists, because right now genuine human writing is being blocked with no actionable feedback.

If the answer is simply “there is nothing you can do,” I’d rather hear that directly than keep guessing.
Dude, here's what people who don't understand how modern AI works keep missing: it's not rule-based.

The best work that I've encountered that describes how neural networks operate in lay terms is "The Society of Mind" by Marvin Minsky. The book is organized the way Dr. Minsky describes the human brain as organized: into small discrete units (neurons, or one-page "chapters" in the book) that each focus on a limited number of inputs. These neurons then influence each other by firing (= "yes") or not firing (= "no"), sometimes with associated weights, in a "feed-forward" kind of way into other neurons, possibly in successive layers, depending on implementation.

The neural network that is constructed (initialized, anyway) then runs on test data, producing matches against already-known outcomes. This process is known as "training". The "back propagation" portion of training can also be described as "machine learning". The initial network setup maps input neurons to known data that might influence decision making. The network is then laid out. Ideally it will self-organize, meaning that, as with a human baby, it will pare neural connections that result in nonsense conclusions and/or adjust weights until the system's predictive capability matches the already-known outcomes. Connections between neurons, and weights of those connections (either digital or floating-point analog, depending on implementation), are snapshotted. Then the resulting network is applied to data for which the results are not already known, such as one of your rejected stories.
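The whole mechanism described above, feed-forward layers, sigmoid neurons, training by back-propagation against known outcomes, fits in a page of pure Python at toy scale. This is not a model of any detector, just the textbook machinery shrunk to where you can read it, learning XOR:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy feed-forward network: 2 inputs -> 4 hidden neurons -> 1 output.
HIDDEN = 4
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(HIDDEN)]
b_h = [random.uniform(-1, 1) for _ in range(HIDDEN)]
w_o = [random.uniform(-1, 1) for _ in range(HIDDEN)]
b_o = random.uniform(-1, 1)

def forward(x):
    h = [sigmoid(w_h[j][0] * x[0] + w_h[j][1] * x[1] + b_h[j])
         for j in range(HIDDEN)]
    o = sigmoid(sum(w_o[j] * h[j] for j in range(HIDDEN)) + b_o)
    return h, o

# "Already-known outcomes": the XOR truth table.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

error_before = total_error()

lr = 0.5
for _ in range(20000):  # training: nudge weights toward the known answers
    for x, t in data:
        h, o = forward(x)
        # back-propagation: gradient of squared error through the sigmoids
        d_o = (o - t) * o * (1 - o)
        for j in range(HIDDEN):
            d_h = d_o * w_o[j] * h[j] * (1 - h[j])
            w_o[j] -= lr * d_o * h[j]
            w_h[j][0] -= lr * d_h * x[0]
            w_h[j][1] -= lr * d_h * x[1]
            b_h[j] -= lr * d_h
        b_o -= lr * d_o

error_after = total_error()
```

Even in this tiny example, the trained behavior lives entirely in the numeric weights; there is no human-readable rule anywhere in them, which is the point being made about "black boxes."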

The thing is, while the input layer can be known and the outputs are known, there is no good way to characterize in human terms what happens in the many neurons (possibly layers of neurons) between the input and output layers. I've seen some vendors' APIs (I'm looking at you, Microsoft) marketed as able to do this translation between numbers and human-understandable rules, but that was hyperbole. Neural networks are not rule-based, so asking for a set of rules that would stop a neural network from concluding that a work of fiction was written by (or influenced by) an AI is not feasible.

Describing an AI as a "black box" isn't obfuscation; it is how neural networks operate.

Characterizing this as "there is nothing you can do" is the inevitable conclusion, not a dismissal.

But there is, in fact, something you can do. I laid out an example of that yesterday. Most observers here probably dismissed it, though at least one person did not, which is a good sign. You're welcome to draw what conclusions you can from that small, limited-scope example. I made an attempt to do so, but that attempt was heuristic rather than algorithmic. Because that's how neural networks operate. And that's probably the best answer you're going to get.

Yes, a previous generation of "artificial intelligence" engines was rule-based and therefore human-understandable: expert systems. Humans ("knowledge engineers") laid out the rules and the decision trees. They had limited utility, and when the marketplace realized this, AI-related funding crashed. Then the concept of neural networks arose. It failed at first too, with a similar collapse in funding. Then people started figuring out that if you just threw more and better training data, and increasingly powerful computers, at the problem, the results would improve. And they have. But they still aren't perfect, and they won't be perfect a year from now either, though the difference in quality between today's systems and those of even one year ago, much less five, is marked.

It occurs to me that the current NFL playoff system could be characterized as a "neural network" of sorts. It is a totally inaccurate characterization but might be useful as a rhetorical device. You have an input layer (the wild-card round) leading to the next layer (the divisional round), eventually resulting in a Super Bowl champion. The process between the inputs and the result is unpredictable, not deterministic. Analysts get paid (well, in some cases) to break it down into the tiniest details that might or might not be correct, or even relevant. Nonetheless, an outcome results. Some might even complain that the outcome is unfair.
 
And if smashing a multi-chapter story into a single submission works on a regular basis, then whatever they're using needs to be throttled based on wordcount to reduce the false positives.
I doubt that it will be that simple.

The suggestion that I offered is targeted at writers who had successfully published one or more parts of a story and then seen subsequent parts rejected for suspected AI content. My thought was that the writer's style hadn't likely changed much between chapters, so why would some parts get approved and others rejected? A logical cause could be that while their style of writing might leave all of their parts containing content the AI detector finds suspicious, some parts had less of it than others. Spreading that suspicious content over more words might put the entire story below a detectable threshold.

The theory could also backfire on some. If one part was extremely high in suspicious content and got combined with other parts, it could end up tainting the entire story.
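The dilution idea is just arithmetic. Here's a sketch under an invented scoring model (fraction of "suspicious" content per submission, compared against a threshold; every number here is made up) showing both the hoped-for effect and the backfire case:

```python
# Pure guesswork about how a detector might score submissions. All numbers
# are invented for illustration; none come from Literotica.
THRESHOLD = 0.06

parts = [(9_000, 0.04), (8_000, 0.09), (10_000, 0.03)]  # (word count, score)

def flagged(score):
    return score > THRESHOLD

# Submitted as separate chapters: the middle part trips the threshold.
print([flagged(s) for _, s in parts])

# Merged into one submission: suspicious content spread over more words.
total_words = sum(w for w, _ in parts)
merged = sum(w * s for w, s in parts) / total_words  # ~0.051
print(flagged(merged))  # dilution pushes it under the threshold

# The backfire case: one very high part taints the merged story.
tainted = parts + [(8_000, 0.30)]
tainted_score = sum(w * s for w, s in tainted) / sum(w for w, _ in tainted)
print(flagged(tainted_score))  # the whole thing gets rejected
```

Under this (assumed) averaging model, merging only helps when no single part is far above the line.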
 
Spreading that suspicious content over more words might put the entire story below a detectable threshold.
And they lose their comments, favorites, and views. They might never be able to upload the thing that was already here, and at that point they're more likely to quit altogether. It’s advice that might work some of the time, but at a high cost and with risk.

It’s bad advice.

Better advice is to encourage them to start over before they reach a boiling point.
 
Describing an AI as a "black box" isn't obfuscation, it is how neural networks operate.
Not going to argue this, but I wonder if we’re mixing up two different things here: @AwkwardMD said that the criteria Lit uses to flag stories is part of a black-box setup - not that AI itself is.

There have been a lot of discussions about AI detectors. Literotica states that they don't use AI to review stories prior to publishing (see bullet point 5). So whatever tool they're using, it isn't an AI one. And maybe you can have an AI detector that isn't based on machine-learning models; what do I know.
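For what it's worth, a detector doesn't have to be a machine-learning model. A purely rule-based heuristic is possible, for example the often-cited "burstiness" idea that human prose varies sentence length more than generated prose. This is only an illustration of the concept, with no claim that Literotica uses anything like it:

```python
import re
import statistics

# A non-ML, fully inspectable heuristic: "burstiness" = variation in
# sentence length relative to the average. Illustrative only; no claim
# that this resembles Literotica's actual tool.
def burstiness(text):
    lengths = [len(s.split()) for s in re.split(r"[.!?]+", text) if s.strip()]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

uniform = "She walked to town. She bought some bread. She went back home."
varied = ("She walked. After a long, rain-soaked afternoon she finally "
          "bought bread. Home again.")

print(burstiness(uniform) < burstiness(varied))  # prints True
```

Unlike a neural network, every step here can be explained to a rejected author, which is exactly the property the black-box discussion above says modern AI lacks.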
 
Dude, here's what people who don't understand how modern AI works don't understand: it's not rule-based.

Respectfully, while this is how it works out in the larger tech landscape, it is not how it works here.
 
Not going to argue this, but I wonder if we’re mixing up two different things here: @AwkwardMD said that the criteria Lit uses to flag stories is part of a black-box setup - not that AI itself is.
Thank you, yes.

I have said this many times.
 