First AI Rejection Today - Machine Translation

Kasumi_Lee

Really Really Experienced
Joined
May 2, 2013
Posts
382
I know there are a bunch of threads already active regarding AI rejection, but to my knowledge this is a unique case which hasn't been addressed yet.

Way back in June, I published a story called Ambush in a Hotel Room, and last week I submitted a German translation of the same story. By my count, I've published twenty stories or story chapters which are translations of English stories published either here or elsewhere. Yes, I used software to translate the stories, but all twenty translated submissions were approved, so until this morning I'd taken for granted that this doesn't count as AI-generated writing.

This morning, the German version of Ambush in a Hotel Room was sent back because Laurel (or whatever screening software she's using) flagged the story as AI generated. For the record, the original English version is entirely original to me, and the screening software must have thought so, too, or else the English version wouldn't have been approved in the first place. In fact, I've never used any program besides Microsoft Word to write or edit my stories, not even Grammarly or ProWriting Aid, which I think are massively overrated. I have resubmitted the translated story with an explanatory note, but without any changes to the story because it's not "AI generated" for reasons given below.

There is currently nothing in Literotica's AI Policy about the use of machine translation, and I hope that this is a false positive, because it would be preposterous (and pernicious) to define translated stories as "AI-generated". Translation software does use AI to improve the quality of the translations, but "generating" a translation is nothing more than converting original text from one language into another. The translation program doesn't actually "generate" any original text, it's simply translating the user's writing.

The AI Policy claims that programs which replace "original human written text with generic AI generated text" are also problematic, but this too doesn't and shouldn't apply to machine translation. Even if the translation program provides options for different translations, these options have to do with how best to translate the original human-written text, and so can't be considered a form of "rewriting" human-written text, especially since the program would simply be replacing its own translation (of the human original) with an alternative translation.

I understand there probably aren't too many users publishing translations of their stories, which might explain why this issue hasn't been dealt with before. It really does need to be addressed, because the notion that machine translations of original human-written stories should count as "AI-generated" is absurd. It would be like saying that a human translator of a story should be credited as the "original" author instead of for translating someone else's work, entitling them to the copyright and the royalties.
 
A few years ago I'd have agreed with you. The trouble is that the latest machine translation programs use a variety of methods: their basic translation software, a database of existing translations similar to translation memory programs (like Trados, MemoQ and so on) and AI.

Unfortunately, despite what MT companies have been claiming for at least 25 years (to my personal knowledge), machine translation is nowhere near as good as human translation. So they're tempted by the quick and easy solution of using AI to make it better, which it fails to do.

So yes, it's quite possible that the same programs that didn't cause any issues in the past will now trip up the detectors.
 
Just like any spell checker/grammar checker, you have to be careful using translation software unless you have at least a conversational knowledge of the language you're translating into. This isn't a new problem. I remember taking a Latin class in high school, and one of the stories we read was about William Shakespeare. In that story, his name was written as "William the spear shaker". Unless you can read what the translator wrote, you won't know if it made any errors.

The problem with translators is that some miss things like gendered word forms and the fact that the same word might mean two different things depending upon the context. There's also the problem that many, if not most, languages don't follow the syntax that English uses.

I only use a translator to translate a phrase or two said by a character and I usually follow up with a translation into English by that character or by another character.
 
A few years ago I'd have agreed with you. The trouble is that the latest machine translation programs use a variety of methods: their basic translation software, a database of existing translations similar to translation memory programs (like Trados, MemoQ and so on) and AI.

Unfortunately, despite what MT companies have been claiming for at least 25 years (to my personal knowledge), machine translation is nowhere near as good as human translation. So they're tempted by the quick and easy solution of using AI to make it better, which it fails to do.

So yes, it's quite possible that the same programs that didn't cause any issues in the past will now trip up the detectors.
That's still not the same thing as generating original text. The quality of the translation is a separate matter. No matter how the program comes up with the translation, the source material is original to the writer (me). It's ludicrous to claim that the resulting story in translation was "generated" by the AI. That's like claiming that a human translator is the "real" author of the translation they created from someone else's work.
 
That's still not the same thing as generating original text. The quality of the translation is a separate matter. No matter how the program comes up with the translation, the source material is original to the writer (me). It's ludicrous to claim that the resulting story in translation was "generated" by the AI. That's like claiming that a human translator is the "real" author of the translation they created from someone else's work.
But you see how an AI-assisted translation could trigger an AI detector?

Also: human translators nowadays are credited as (at the very least) co-creators of translated works. Without a contract in place that says otherwise, a translator owns the copyright to their translation. This has no real bearing on your situation, I'm just clearing up the point.
 
Just like any spell checker/grammar checker, you have to be careful using translation software unless you have at least a conversational knowledge of the language you're translating into. This isn't a new problem. I remember taking a Latin class in high school, and one of the stories we read was about William Shakespeare. In that story, his name was written as "William the spear shaker". Unless you can read what the translator wrote, you won't know if it made any errors.

The problem with translators is that some miss things like gendered word forms and the fact that the same word might mean two different things depending upon the context. There's also the problem that many, if not most, languages don't follow the syntax that English uses.

I only use a translator to translate a phrase or two said by a character and I usually follow up with a translation into English by that character or by another character.
I agree with you that the quality of the translation often leaves something to be desired, but the issue isn't the quality of the translation; the issue is the story being rejected on the grounds that it's "AI generated". I've published twenty other machine translations here on Literotica, and that didn't stop Laurel and whatever filtering software she uses from approving each of them. Either this is the first time the software has noticed (which it shouldn't have, because the original English version was written by me unaided), or this is a false positive. Either way, there's nothing in Literotica's AI policy about machine translations, which rely on AI but don't generate any original content.
 
But you see how an AI-assisted translation could trigger an AI detector?

Also: human translators nowadays are credited as (at the very least) co-creators of translated works. Without a contract in place that says otherwise, a translator owns the copyright to their translation. This has no real bearing on your situation, I'm just clearing up the point.
I've hired human translators for published works on Amazon and Smashwords/D2D, and of course I credit them as such. The point I want to stress is that the original story that the translation program is translating is the work of a human author, and so even if the filtering software could reliably detect an actual AI-written work (which it can't), it's totally illogical to suggest that the resulting translation is an AI-generated piece of writing. The program simply generates a translation of a pre-existing written work in a different language.
 
There's a distinction that needs to be made between using AI for content and using it for text.

Most of the complaints here in the AH about rejection for AI use have been textual: people denying that they've used Grammarly or whatever.

But there have also been posts by people advocating the use of AI to generate plot ideas, or characters, or whatever.

Both these uses, apparently, contravene Lit's AI policy. And just because the words you feed into the TM are your original content, the words that come out on the other end are generated in part by AI. And that AI is where the problem lies. It's scraping German texts the same way English texts are being scraped to feed ChatGPT and Copilot and all the others.

The content is yours, but the German words aren't.

But whatever the case, it's not me you need to convince. It's no-one here in the AH. It's Laurel.
 
Have you tried translating the German back into English? Does it look all weird afterwards?
 
Most of the complaints here in the AH about rejection for AI use have been textual: people denying that they've used Grammarly or whatever.
I've never used Grammarly or any other program to write my stories, and so far it's worked out for me.
But there have also been posts by people advocating the use of AI to generate plot ideas, or characters, or whatever.
I agree with the spirit of Literotica's prohibition on AI usage. Using AI to replace one's own imagination is lazy and will degrade the quality of the story.
And just because the words you feed into the TM are your original content, the words that come out on the other end are generated in part by AI. And that AI is where the problem lies. It's scraping German texts the same way English texts are being scraped to feed ChatGPT and Copilot and all the others.

The content is yours, but the German words aren't.
That's absolutely nonsensical. The program is using MY story to create its translation. There is NOTHING original being generated by the translation software, any more than a human translator is a co-author of the original story by virtue of having created the translation.

Besides, how many "original" ways are there to translate a given word or phrase from one language to another? The German and other non-English texts are as much mine as the English originals are. Otherwise, why not credit Microsoft Word, since I use that program to write all my stories in the first place?
But whatever the case, it's not me you need to convince. It's no-one here in the AH. It's Laurel.
She approved the previous twenty submissions (and some are in French and other languages), so it's not clear why the 21st submission should be rejected when it was generated in the exact same way. At the very least, Literotica's AI policy needs a consistent position on machine translations; otherwise the policy is as arbitrary as the junk software claiming to detect AI writing, which would tarnish its credibility.
 
Have you tried translating the German back into English? Does it look all weird afterwards?
Most of the time no, but sometimes yes. Either way, that's a problem of quality, not authorship. The English original is my work, and so machine-"generated" translations aren't original creations of the program.
 
That's absolutely nonsensical. The program is using MY story to create its translation. There is NOTHING original being generated by the translation software, any more than a human translator is a co-author of the original story by virtue of having created the translation.
Again, there's a distinction between content and words.
Besides, how many "original" ways are there to translate a given word or phrase from one language to another? The German and other non-English texts are as much mine as the English originals are. Otherwise, why not credit Microsoft Word, since I use that program to write all my stories in the first place?
I've spent a large chunk of the past 25 years editing translations. It's my job. I'm good at it, and my peers and clients agree - and that includes clients up to the level of European institutions. So I'm going to ask you to believe me: there are at least as many ways to translate a word, sentence or paragraph as there are ways to phrase it in the original.

Languages don't work on a word-for-word basis, or even phrase-by-phrase. Sentence structure is different, paragraph structure is different. Conveying the same meaning in a different language generally involves analysing the precise meaning, nuances and intended emphasis of the passage - and the passages before and after it.
She approved the previous twenty submissions (and some are in French and other languages), so it's not clear why the 21st submission should be rejected when it was generated in the exact same way. At the very least, Literotica's AI policy needs a consistent position on machine translations; otherwise the policy is as arbitrary as the junk software claiming to detect AI writing, which would tarnish its credibility.
Like I said in my original reply to your post: MT has evolved in recent years, with the rise of AI. And the output is likely to flag AI detectors. Not for the content, but for the phrasing. You might argue that the detection software is junk, but if it's going to flag potential textual issues in English, it's going to flag those same issues in other languages too.
 
I think the problem is that translations aren't a word-for-word exchange from one language to another. There is an element of change that happens within the translated text that would normally rely on a native or fluent speaker of the language to find the right nuance for the intent of the phrase rather than the technical word exchange.

Basically, there's an element of interpretation that happens in translation that makes *some* of the content original to the translator rather than the author. That's where this hits the AI issue. The more interpretive the machine translator is being, the less similar that translation is to the original author's words.
 
Basically, there's an element of interpretation that happens in translation that makes *some* of the content original to the translator rather than the author. That's where this hits the AI issue. The more interpretive the machine translator is being, the less similar that translation is to the original author's words.
I want to agree, but that's the inverse of what people usually criticize machine translations for. Translation software has a tendency to translate phrases word for word, which causes problems when it produces literal translations of idioms. That would mean it's a bad translation, which would be an issue with quality rather than authorship. Does this mean the software I've been using has improved so much that the translation I submitted is now "too good"? I don't know.
 
I want to agree, but that's the inverse of what people usually criticize machine translations for. Translation software has a tendency to translate phrases word for word, which causes problems when it produces literal translations of idioms. That would mean it's a bad translation, which would be an issue with quality rather than authorship. Does this mean the software I've been using has improved so much that the translation I submitted is now "too good"? I don't know.
Most likely it's no longer using a word-for-word translation model and is using AI to interpret the intent of the original words, which makes it AI content within the translation.
 
Again, there's a distinction between content and words.

I've spent a large chunk of the past 25 years editing translations. It's my job. I'm good at it, and my peers and clients agree - and that includes clients up to the level of European institutions. So I'm going to ask you to believe me: there are at least as many ways to translate a word, sentence or paragraph as there are ways to phrase it in the original.

Languages don't work on a word-for-word basis, or even phrase-by-phrase. Sentence structure is different, paragraph structure is different. Conveying the same meaning in a different language generally involves analysing the precise meaning, nuances and intended emphasis of the passage - and the passages before and after it.

Like I said in my original reply to your post: MT has evolved in recent years, with the rise of AI. And the output is likely to flag AI detectors. Not for the content, but for the phrasing. You might argue that the detection software is junk, but if it's going to flag potential textual issues in English, it's going to flag those same issues in other languages too.
What I hate is the inconsistency. If this were a first-time rejection for a machine translation, and especially if the AI policy had anything to say about machine translations, I'd probably suck it up and accept it. But twenty successfully submitted translations followed by what feels like an arbitrary rejection is infuriating. Are the translations now "too good" to pass muster? If that's true, it feels like punishment for improved quality on the ostensible grounds that the work is no longer mine.
 
The most interesting aspect of this is the subtext… AI can theoretically create a one-language world, but then the ones who control the translations will control how the world communicates.
 
Most likely it's no longer using a word-for-word translation model and is using AI to interpret the intent of the original words, which makes it AI content within the translation.
But the original words are still mine, and the fact that the program uses AI to improve the translation doesn't and shouldn't change that. This is different from a program like Grammarly creating something ex nihilo rather than converting something already created. Either way, since the previous twenty submissions were all approved, I'm hoping this is a false positive; otherwise, the takeaway seems to be that improved-quality translations are too good for Literotica.
 
The most interesting aspect of this is the subtext… AI can theoretically create a one-language world, but then the ones who control the translations will control how the world communicates.
Good thing I already know Chinese and Japanese, I can bypass the translinguistic gatekeepers.
 
But the original words are still mine, and the fact that the program uses AI to improve the translation doesn't and shouldn't change that. This is different from a program like Grammarly creating something ex nihilo rather than converting something already created. Either way, since the previous twenty submissions were all approved, I'm hoping this is a false positive; otherwise, the takeaway seems to be that improved-quality translations are too good for Literotica.
The original words, yes, they are yours. The translated words aren't; they're an interpretation of your words, but only if the program is using AI to enhance translations.

It's no different than people letting Grammarly reword sentences for them, which also counts as AI content.
 
Are the translations now "too good" to pass muster?
They've become good enough to seem acceptable to a casual reader. They're good for consuming information - if you want to read something on a foreign-language website, for example, or if you want to read reviews on Airbnb. They're not good at producing information, though.

They're inconsistent - I've seen translations where the same word was translated in three different ways in three consecutive sentences - and the quality is unreliable. A lot of the material is drawn from bilingual texts that the program finds online, and it copies them with all their existing errors. I've also seen MT adding whole chunks of information from unrelated texts, simply because it can't identify where a human translator has included comments or not ended a segment or whatever.

Also, it has a tendency to skip sentences or paragraphs. Words too, but that's not as glaring as randomly finding a whole paragraph in the original language.

The issue is that the MT companies keep shouting how good their products are, and people want to believe them because it's convenient. I'm not saying you shouldn't use them to translate your amateur writings on an amateur publishing website. But the quality improvements those companies boast about only hide the issues. It's still a poor translation, with the added problems that you see in AI-generated (or AI-assisted) texts in English too.
 
What I hate is the inconsistency. If this were a first-time rejection for a machine translation, and especially if the AI policy had anything to say about machine translations, I'd probably suck it up and accept it. But twenty successfully submitted translations followed by what feels like an arbitrary rejection is infuriating.
I suspect that Lit has always known you were submitting machine-generated translations, and that this rejection of one instance is a slap on the wrist to get your attention. I see your catalog has some stories with French AND German translated repostings. I don't remember what the context was, but I do remember Laurel telling me once that the site doesn't want to host redundant content.** Machine-generated translations are toeing that line.

I can also tell you that what you're doing violates the TOS for the Kindle store. Machine translations of content will get your account sent to the shadow realm. I appreciate that Lit is a different site with its own rules, but I think it's worth noting when a significant player in the field takes a stance like that (and for context, that rule has been in place since at least 2020). At any given moment, Lit can simply decide that "If a rule is good enough for Amazon, it's good enough for us."

EDIT:** In other words, if the content of your German language submission is what a user would get if they simply ran your existing story through Google Translate, then the site would prefer readers do that themselves and only host your original work in the language you originally wrote it.

EDIT2: I did not make my second point clearly. Perhaps this is not a case of "oh here's this blameless thing why do they suddenly have a problem with it?", but rather "This was always a problem, but they let it go since you are a consistent (authentic) author. Now it seems like it's becoming a pattern and they're not okay with that."
 