Thoughts on AI checkers

To pick a random example: @onehitwanda would never have had a single story published since the site introduced the AI checks, if the above were bars to publishing.
Either this means my grammar, punctuation and spelling are so good that I'd pass the AI checker every time, or that they're so bad I'd never be mistaken for AI. :ROFLMAO::LOL:

More seriously... I do sometimes wonder if the fact that I use Proper English SPAG rather than American SPAG means that I somehow short-circuit or circumvent whatever processes there are. 🤷‍♀️
 
That is possible. But given that American and British SPAG have only diverged relatively recently (compared to how long literature has been around), I’d have thought a lot of British English text has been used to train LLMs.
 
It might be that newer text is treated as closer to canonical truth, since I imagine the default of any LLM is to attempt to ape current modes. If you wanted Byron, you'd need to explicitly ask for it, whereas a standard prompt would give whatever was most common in the dataset - or at least, it would give whatever the model was weighted for.

I'm not a user, though, so I can't speak from actual experience, only from how the little engineer inside me imagines it would have to work.
 
Truth is, we don't know. And given that LLMs have black-box features, no one does.
 
I posted this in How To… in response to a question. It might be of use to others. Though, as I say, I’m not an expert in this area:



The site used AI detection software, which is highly likely to generate false positives and false negatives […]

I'm not an expert. But I believe the software looks at two things. Do the words chosen in the text vary from the norm? Are the sentence structure and length variable or consistent?

AI-written text tends to have rote word choices and unvarying sentence structure and length. (There's a rough illustration of this after the examples below.)

So:

The cat sat on the mat.

The feline arranged itself on the shag rug.


The second probably scores a lower AI probability.

Or:

Emily woke up in the morning. She went and brushed her teeth. Then she got dressed for work. The bus was late this morning.

Emily woke. It was still early. She brushed her teeth, tried to figure out something appropriate for work, and got dressed. Waiting at the bus stop, it was raining; where the fuck was the bus this morning?


The second probably scores a lower AI probability.
https://originality.ai/blog/perplexity-and-burstiness-in-writing#
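
For the curious: that link describes two signals, "perplexity" (how predictable the word choices are) and "burstiness" (how much sentence length varies). Here's a minimal, purely illustrative sketch of the burstiness half - nothing like any real detector, just the coefficient of variation of sentence lengths, run on the two Emily paragraphs:

```python
# Illustrative only: a crude proxy for "burstiness" (variation in sentence
# length), one of the signals AI detectors reportedly use. This is NOT any
# site's actual detection code.
import re
import statistics

def burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths, measured in words.

    Higher values mean more varied sentence lengths, which detectors
    are said to associate with human writing.
    """
    # Naive sentence split on runs of ., ! or ?
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

flat = ("Emily woke up in the morning. She went and brushed her teeth. "
        "Then she got dressed for work. The bus was late this morning.")
varied = ("Emily woke. It was still early. She brushed her teeth, tried to "
          "figure out something appropriate for work, and got dressed. "
          "Waiting at the bus stop, it was raining; where the fuck was the "
          "bus this morning?")

print(f"flat:   {burstiness(flat):.2f}")    # 0.00 - no variation at all
print(f"varied: {burstiness(varied):.2f}")  # ~0.79 - much more varied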
 
If you find your own unique voice, develop your own writing style and treat writing like the art it is, you won't get a false positive.
Except that many people here have done just that and gotten false positives.

OTOH, I remember one poster claiming he wrote a story (100% human), got a false AI rejection, then ran it through an AI humanizer (which is AI) and got it approved. So there are also false negatives.

Just blithely saying "write it yourself and you won't get rejected" is simply not true - sometimes not even when you can prove you wrote it yourself.
 
I literally mentioned at least one example (autism) which might account for a false positive. It's not 100%, but developing your own style helps tremendously. Why would you want to write like everybody else anyway?
 
I got into a big fight (I know, incredible to believe, right 🙄?) with another autistic person here about the subject of people with ASD writing robotically. I hate that stereotype. It turned out they had a flavor of ASD that was at least a little more profound than mine. And people like me aren't great at backing down at the best of times 😬.
 
If you follow the news, you will know that Anthropic (the AI company that has probably done most to monetize their product) had a falling out with the Department of Defense. DOD wanted to be able to use Claude for mass surveillance and to run fully autonomous weapons (Terminator much?). Anthropic refused and was punished by being declared a supply chain risk by DOD. So any contractor who works with DOD can’t use Anthropic products.

However, more recently, OpenAI (behind ChatGPT and Microsoft Copilot) have signed a deal with the DOD. OpenAI claim they have red lines around mass surveillance and fully autonomous weapons, but that raises the question of why DOD agreed a contract with them and not Anthropic. The clauses of the DOD / OpenAI contract that have emerged (and I should state you can't draw firm conclusions from only part of a contract) are viewed by legal experts as allowing for the very things that Anthropic refused to do.

This is not a political point I'm making here; it's one about LLMs being used by our Government for purposes that - to say the very least - don't have widespread popular support.
 
After successfully publishing scores of stories here over several years, I recently had my first story rejected for AI. I think it was because my FMC uses a lot of Latinate vocabulary in her speech, with long, complex sentences (she's role-playing as part of a D/s scene). After a brief appeal to the Lit-powers-that-be, the story got published and is right now my highest-rated fiction.

It was for the 750 word project, so it won't take long to read it if you're interested: https://literotica.com/s/bj4-telepathic
 
I just heard that a former colleague of mine, who was high up in the OpenAI hierarchy, has resigned because of this agreement. So people inside OpenAI know this is BS, and those with integrity are not willing to accept it.
 
I will repost what I posted a week ago about my own experience:

I recently got my AI-rejected chapter accepted after two failed tries. Happy to share what worked and didn't work, for me.

First try was to simply resubmit with a note. It sat for a couple of weeks and then got rejected with no further info.

Second try was a light edit for the items mentioned here as generic AI flags. Things like rules of three, numbering, etc. Again, that took a few weeks and same outcome.

Finally, I did what I was told NOT to do by most people here. I used AI to actually get around the problem. Here is what I did:

1. I used two AI detector packages (Humanize and GPTZero) to flag AI trigger sentences. I went to my chapter and highlighted those.
2. I then asked two AIs (ChatGPT* and Grok*) to identify the paragraphs/sections of my chapter that they thought were most likely to be AI products. I highlighted those.
3. I had very little confidence in either of the two above methods, but I figured, whatever was causing the flag, it was probably in there somewhere. At this point, nearly half my chapter was highlighted.
4. I simply deleted whole sections that had more blue than white, and then wrote those sections again from scratch. (There's a toy sketch of this triage below.)
5. Then I edited the whole chapter again, top to bottom, since it was now quite uneven.
6. I then edited again, and again, until I felt it was BETTER than the original I had submitted, since I was pissed off that I had probably butchered it in the process.

So, I submitted that one, with a note saying I had rewritten half of it from scratch. It was accepted in 24 hours.

* I should add that I pay for the premium version of Grok and the super-premium version of ChatGPT for work, so YMMV if you are using the free versions. They are a lot less capable.
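
To make step 4 concrete, here's a hypothetical sketch of the "more blue than white" triage. It assumes each detector reports flagged character spans; the section boundaries, spans, and 50% threshold below are all invented for illustration, and real tools like GPTZero have their own output formats:

```python
# Hypothetical sketch of steps 1-4: pool flagged character spans from
# several detectors, then rewrite any section that is mostly flagged.
# All inputs below are made up; this is not any detector's real output.
from typing import List, Tuple

Span = Tuple[int, int]  # (start, end) character offsets into the chapter

def merge_spans(spans: List[Span]) -> List[Span]:
    """Merge overlapping flagged spans into disjoint ranges."""
    merged: List[Span] = []
    for start, end in sorted(spans):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

def flagged_fraction(section: Span, flags: List[Span]) -> float:
    """Fraction of a section covered by flags ('blue' vs 'white')."""
    s_start, s_end = section
    covered = sum(max(0, min(end, s_end) - max(start, s_start))
                  for start, end in flags)
    return covered / max(1, s_end - s_start)

# Spans pooled from two (imaginary) detector runs over a 1100-char chapter.
flags = merge_spans([(0, 120), (100, 250), (900, 1000)])
for section in [(0, 400), (400, 800), (800, 1100)]:
    frac = flagged_fraction(section, flags)
    verdict = "rewrite from scratch" if frac > 0.5 else "light edit"
    print(section, f"{frac:.0%}", verdict)
```

The point is purely mechanical: trust no single detector, pool every flag, and let the coverage fraction decide which sections get rewritten wholesale.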
 
So you used AI to beat an AI rejection.
 
Check out the AI-driven equipment the Pentagon is buying from Palmer Luckey - the inventor of the Oculus Rift, who sold his VR headset company for $2b, founded Anduril, and is now building advanced tech weapons.

https://www.anduril.com/
 
Integrity and AI companies seem not to mix so well. It's been reported that Anthropic's "principled stand" was more to do with avoiding legal peril (who's gonna get sued if autonomous weapons using their s/w kill innocent people?).
 