Using AI as a reviewer to improve my writing

So Literotica tries to keep out AI stories, which I am very happy about. Whether the AI rejection process is effective is not the topic of this post.
Instead of using AI to write stories, I have tried to use AI to review my stories so I can learn and improve. Researchers have suggested that AI is an excellent trainer because it allows you to create so-called "optimal challenge points".

Have they provided evidence to support that?

So I write a first version and then paste it into GPT, asking for feedback on the writing style.

Asking GPT for feedback on your writing is kind of like asking Hugh Laurie for a medical diagnosis. He's worked hard on learning how to look like a doctor, he knows the words doctors use and can say Doctor Things like "10 ccs morphine, stat!" But his mission is not to give you accurate medical advice, it's to do a convincing impression of a doctor. Taking that advice might be bad for your health.
 
True enough, but there are endless ways to craft a prompt.

“What elements of this excerpt are bored housewives most likely to connect with? What parts might detract from their interest in continuing?”
I love that prompt. I modified it somewhat and applied it to my original test story.
“What elements of this story are underfucked shy nerds most likely to connect with?”

Awkward Sexual Exploration: The characters’ hesitancy and struggle to communicate fantasies mirror shy nerds’ own experiences.
Structured Dynamics: The clear rules of BDSM appeal to readers who find comfort in defined roles and boundaries.
Imaginative Role-Play: The prostitute scenario offers a safe, thrilling escape for readers hesitant to explore such fantasies in real life.
Validation and Approval: The male protagonist’s pride in the female character resonates with readers seeking affirmation in relationships.
Relatable Outsiders: The characters’ long-distance relationship and use of a second language may connect with socially isolated readers.
Public Scenarios: Exhibitionist scenes provide a vicarious thrill for those fantasizing about attention but too inhibited to seek it.

Pretty much spot on.
 
I'm not going to give you any grief, but I'd prefer that you not do that.
No worries. That was something that did not work. And your texts will have been read by the AI engines anyway, because you published them on the Literotica site, not because I put a few paragraphs of yours in a prompt.
 
Me:
Can you review stories on the Literotica website?

ChatGPT:
I can provide feedback or analysis if you share the text or give me a summary of the story you’re referring to, but I can’t directly browse or access content from Literotica or other similar websites. Let me know how you’d like to proceed!
 
> Me:
> Can you review stories on the Literotica website?
>
> ChatGPT:
> I can provide feedback or analysis if you share the text or give me a summary of the story you’re referring to, but I can’t directly browse or access content from Literotica or other similar websites. Let me know how you’d like to proceed!
Just because it says it can't access the site doesn't mean it isn't accessing the site. This is likely one of the ring-fences put up by the developers to try to prevent model poisoning. There will be massive ETL processes running in the background to feed the basilisk.
 
Way back when, Google Translate used to have a provision in its Terms of Use that said, effectively, everything you uploaded became their property. I had to point out to clients several times that all those confidential client memos they were running through Translate weren't quite as confidential as they hoped.

Same for DeepL. It says the professional version won't use your texts, but I've seen evidence to the contrary.
 
> Way back when, Google Translate used to have a provision in its Terms of Use that said, effectively, everything you uploaded became their property. I had to point out to clients several times that all those confidential client memos they were running through Translate weren't quite as confidential as they hoped.
>
> Same for DeepL. It says the professional version won't use your texts, but I've seen evidence to the contrary.
Two people can keep a secret - if one of them is dead.
 
AI writing tools scare me. So much of what made the greats great was nuance and perspective. If authors sacrifice their integrity at the altar of grammatical perfection and computer algorithms, we'll end up with one story. Computers will write one perfect story according to analysis and language use, and that will be it. There will never be mention of that awkward moment when two flies start fucking on a protagonist's elbow while she asks ChatGPT for masturbation suggestions.
 
I think the two things go hand in hand. The more you pay attention to other people, the easier it should be to empathize with them. The more you empathize, the more interested you will become in watching them.
And you come to realise that everyone has a story to tell. And frequently that story will break you.

Last night I was speaking to the woman we call my Mum's evil twin. She's known me since the day I was born, I used to run around with her son when we were six and four years old.

Last night is the first time she ever told me that he was not her eldest, and that her eldest lived less than a day.

Everyone has a story, and often there just aren't enough tears for it, so they keep it locked away so it can't hurt them.
 
Scientifically, emotions are biochemical responses triggered by conditioning. Some are encoded in our genes, while others are shaped by the environment. Free will is a myth.

A kitten separated from its mother at birth will still instinctively lick its paw to groom its head or bury its waste in sand. If behavioral knowledge can be encoded in RNA helices, why not in digital code?

Humans are easy to predict once you get what makes them tick. They are quite transparent to the keen observer. It doesn’t take profound empathy to figure out why some troubled soul, starved of validation, might dedicate their free time to a platform where they can excel and feel acknowledged.

Most people prefer the comfort of illusion to the discomfort of truth. They are conditioned to participate in a relentless pursuit of status and material possessions, projecting a facade of happiness and artificial fulfillment.

Should a person on the spectrum, possessing extraordinary computational skills but unable to tie their shoes, be considered intelligent?

Could AI, in theory, ever be creative or think beyond mere calculation and mimicry? The answer is yes. But for now, AI development is driven by financial incentives. The software market is worth trillions, while creative writing generates only millions. This economic reality ensures that groundbreaking advancements in AI creativity remain a distant prospect.
 
> Scientifically, emotions are biochemical responses triggered by conditioning. Some are encoded in our genes, while others are shaped by the environment. Free will is a myth.
>
> A kitten separated from its mother at birth will still instinctively lick its paw to groom its head or bury its waste in sand. If behavioral knowledge can be encoded in RNA helices, why not in digital code?
>
> Humans are easy to predict once you get what makes them tick. They are quite transparent to the keen observer. It doesn’t take profound empathy to figure out why some troubled soul, starved of validation, might dedicate their free time to a platform where they can excel and feel acknowledged.
>
> Most people prefer the comfort of illusion to the discomfort of truth. They are conditioned to participate in a relentless pursuit of status and material possessions, projecting a facade of happiness and artificial fulfillment.
>
> Should a person on the spectrum, possessing extraordinary computational skills but unable to tie their shoes, be considered intelligent?
>
> Could AI, in theory, ever be creative or think beyond mere calculation and mimicry? The answer is yes. But for now, AI development is driven by financial incentives. The software market is worth trillions, while creative writing generates only millions. This economic reality ensures that groundbreaking advancements in AI creativity remain a distant prospect.
Humans are not computer programs. Even the most lowly of us "predictable" creatures can elect to behave entirely at odds with prior behaviours.

And none of that intrinsic nonlinearity is encoded in our DNA.

> Free will is a myth.

What a strange world you inhabit.
 
> Humans are not computer programs. Even the most lowly of us "predictable" creatures can elect to behave entirely at odds with prior behaviours.
>
> And none of that intrinsic nonlinearity is encoded in our DNA.
>
> > Free will is a myth.
>
> What a strange world you inhabit.
I think he's using AI. Feeds it "Make me sound like Literotica Yoda" and his posts are what comes out.
 
> Me:
> Can you review stories on the Literotica website?
>
> ChatGPT:
> I can provide feedback or analysis if you share the text or give me a summary of the story you’re referring to, but I can’t directly browse or access content from Literotica or other similar websites. Let me know how you’d like to proceed!
I suspect this would be called a 'white lie' by a human. ChatGPT is telling you it honours the 'robots.txt' file on the site (that blocks it) but I'm sure the text has been harvested in the building of earlier revisions of the underlying model.

See https://www.literotica.com/robots.txt
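For anyone curious what "honouring robots.txt" actually means mechanically, here's a minimal sketch using Python's standard-library robots.txt parser. The ruleset below is illustrative only (it is not the live Literotica file); `GPTBot` is the user-agent OpenAI documents for its crawler.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical ruleset of the kind a site might serve at /robots.txt:
# block OpenAI's crawler entirely, allow everyone else.
sample = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(sample.splitlines())

# The named crawler is refused everything under the blanket Disallow...
print(rp.can_fetch("GPTBot", "https://example.com/stories/"))        # False
# ...while any other crawler falls through to the wildcard Allow.
print(rp.can_fetch("SomeOtherBot", "https://example.com/stories/"))  # True
```

Note that this is purely advisory: `can_fetch` tells a well-behaved crawler what the site asks, it doesn't enforce anything, which is exactly the gap the posts above are arguing about.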
 
> I suspect this would be called a 'white lie' by a human. ChatGPT is telling you it honours the 'robots.txt' file on the site (that blocks it) but I'm sure the text has been harvested in the building of earlier revisions of the underlying model.
>
> See https://www.literotica.com/robots.txt
Literotica is one of the largest publicly available, curated sources of English-language smut going. It's naive to assume it's not a significant part of the corpus of most of these LLMs.
 
> I suspect this would be called a 'white lie' by a human. ChatGPT is telling you it honours the 'robots.txt' file on the site (that blocks it) but I'm sure the text has been harvested in the building of earlier revisions of the underlying model.
>
> See https://www.literotica.com/robots.txt
I'm not sure if this is even about robots.txt (or other Literotica-specific exclusions) or just that it's running on a pre-trained model that doesn't have live access to websites.

What makes me suspicious is that GPT implementations often have a "was this response helpful?" button. The usual point of such things is to tweak the algorithm - give more of the responses people find helpful, fewer of the ones they don't - and it's hard to see how that could be accomplished without feeding back some of the session content.

> Humans are not computer programs. Even the most lowly of us "predictable" creatures can elect to behave entirely at odds with prior behaviours.

The fact that many humans can write computer programs is pretty good evidence of that. We have evolved for flexibility, the capability to learn new things and go new places.
 
> Scientifically, emotions are biochemical responses triggered by conditioning. Some are encoded in our genes, while others are shaped by the environment. Free will is a myth.
>
> A kitten separated from its mother at birth will still instinctively lick its paw to groom its head or bury its waste in sand. If behavioral knowledge can be encoded in RNA helices, why not in digital code?
>
> Humans are easy to predict once you get what makes them tick. They are quite transparent to the keen observer. It doesn’t take profound empathy to figure out why some troubled soul, starved of validation, might dedicate their free time to a platform where they can excel and feel acknowledged.
>
> Most people prefer the comfort of illusion to the discomfort of truth. They are conditioned to participate in a relentless pursuit of status and material possessions, projecting a facade of happiness and artificial fulfillment.
>
> Should a person on the spectrum, possessing extraordinary computational skills but unable to tie their shoes, be considered intelligent?
>
> Could AI, in theory, ever be creative or think beyond mere calculation and mimicry? The answer is yes. But for now, AI development is driven by financial incentives. The software market is worth trillions, while creative writing generates only millions. This economic reality ensures that groundbreaking advancements in AI creativity remain a distant prospect.


This is why you're not a writer.
 
> Humans are not computer programs. Even the most lowly of us "predictable" creatures can elect to behave entirely at odds with prior behaviours.
>
> And none of that intrinsic nonlinearity is encoded in our DNA.
>
> > Free will is a myth.
>
> What a strange world you inhabit.
You were made by just two strands of nucleotides composed of nitrogen-containing bases: adenine, guanine, thymine, and cytosine. Unless you're willing to bring God into the equation, you are, in essence, a program--one that any future AI (or keen observer) could easily decipher.

Programs lack free will, and their predictability renders them uninteresting.
 
> Programs lack free will, and their predictability renders them uninteresting.
I'm curious what you think that makes you then.

Here's my opinion. Humans and certain other highly evolved species have enough intelligence to bypass their instincts. Thus: free will. No need for any All-Father.

AI is a construct that is excellent at recombining things it already knows into things that appear novel but are clearly derivative when you know where to look. AI will not ever create anything new, nor will it ever attain free will because it is, ultimately, a tool. It just happens to be a very smart tool that occasionally tells you to glue crumpets to your horse in order to cure your dog's case of rabies.

Good luck with your dreams of the technological Singularity. I'll keep drawing in the dust with a stick, it's much more satisfying and I can always use the stick to light a fire.
 