Using AI as a reviewer to improve my writing

Those are in themselves, notoriously, matters of subjective opinion: rhythm, tone, sentence complexity, word choice.

I'd ask myself: if his job didn't require that he believe in God, would the Pope believe in God with such certainty?
Maybe he's the Pope because he believes with such certainty? And maybe I know what I'm talking about from years of study and decades of practice?
 
Note that LLMs are quite good at answering questions about themselves. For example, try entering this prompt into an LLM (ChatGPT, Copilot, Llama 3.3): "How can I create a system prompt which ensures that your factual answers are highly reliable?"
Copilot returns guidance similar to that which teachers now give their pupils on how to avoid misinformation spread by malicious or stupid humans over the internet.
 
So Literotica tries to keep out AI stories, which I am very happy about. Whether the AI rejection process is effective is not the topic of this post.
Instead of using AI to write stories, I have tried to use AI to review my stories so I can learn and improve. Researchers have suggested that AI is an excellent trainer because it allows you to create so-called "optimal challenge points".
So I write a first version and then paste it into GPT, asking for feedback on the writing style. It has helped to bias the prompt towards being critical. So I might use this prompt:


So applied to my story https://www.literotica.com/s/a-whore-for-one-evening (Rated 3.72) it told me:

So there you have it. Way to go.
If you leave out "short minimal review" from the prompt, it gets a bit more detailed.

Another prompt I found helpful is:



What you should not do is ask the AI for suggestions on how to improve the text. Doing so will produce suggestions so bad that I am afraid they will send me in the direction of overwrought, clichéd purple prose.

Has anybody else tried to use AI similarly? What are your experiences and useful prompts?
AI is so rubbish and it’ll be the death of us. Mark my words.
 
ChatGPT:
Your poem captures a strong sense of isolation and emotional longing. Here are a few thoughts:

Strengths:

1. Emotional resonance: The imagery and tone effectively convey a sense of disappointment and loneliness, making the reader empathize with the speaker’s situation.


2. Rhythmic flow: The poem has a natural rhythm that suits its contemplative and melancholic theme.


3. Symbolism: The “wrong station” serves as a powerful metaphor for missteps in life or relationships, which many readers can relate to.


Suggestions for improvement:

1. Clarify the setting: The transition from “train” to “platform” is clear, but expanding on the environment could deepen the sense of atmosphere—perhaps describing the coldness or emptiness of the platform.

2. Refine the ending: The final lines are impactful but could be slightly restructured for a stronger conclusion. For example:

• “And none of you care / that I’ve arrived / at the wrong station.”


3. Word choice: Consider replacing “cold apathy” with something more visual or tactile to enhance the sensory impact (e.g., “the gray, lifeless crowd” or “an air of disinterest”).

4. Connection to “the one I love”: The mention of this person raises questions. Are they truly absent, or is this a feeling of emotional detachment? A hint could add depth.

Would you like suggestions on how to further explore the metaphor or tighten the rhythm?
I'd like to take a moment and talk about this, the most detailed and meaningful feedback you've gotten in your entire life.

For one thing, I disagree that it's a poem about disappointment and loneliness. The poem scans to me more like the kind of thing that leaves you smacking your forehead and saying doh than a deep treatise on the fundamental loneliness of existence. I disagree that the natural rhythm suits, or has anything to do with the themes, whether those themes are contemplative and melancholic OR a bit of lighthearted befuddlement. Or that the rhythm is "natural." It's just rhythm, and that's fine. The "wrong station" is categorically a metaphor, and it is something readers can relate to, but I disagree that it's a powerful one (if we're defining terms, a powerful metaphor, to me, should work on multiple layers and be about different things). I also disagree that this relates to relationships; it's just a metaphor for mistakes, which makes it more universally applicable than anything to do with interpersonal connection.

As far as the suggestions, they are definitely suggestions. Following them will change your output. I don't understand how a broader explanation of the platform will deepen the sense of atmosphere. You already described it as being cold, so I don't know what the AI thinks you'll gain by being more specific. I disagree that the AI's suggestion for the ending is better than your existing one.

Does this AI response really give you something to chew on, or is the problem that meaningful feedback about poetry is so hard to get in the first place that you've been starved for your adult life? Only you know what this poem was really about, what you had in your heart, and so if you say "The AI had it right and you've got it wrong" then nobody can really gainsay you by disagreeing, but I don't think this is the win it could have been if the AI had understood your poem as much as it understands "things that can describe a poem."

EDIT: I don't think the thread should really pivot to be about this poem. It's good poetry, but there's very little to be gained as it relates to this conversation by dissecting it except to do so in contrast to what the AI says about it.

EDIT THE SECOND: I also wanted to make it clear that although I am disagreeing with feedback that is very positive and enthusiastic, I am not taking the position that the poem is bad or in any way lacking. I think that the effusive word choices the AI uses are not applicable, not that the poem isn't good.
 
I also arrive at a different interpretation of the poem. I read it as irony. The first difference to the AI is that I have gotten off a train at the wrong station more than once, so I know first-hand what it feels like. And I did not feel Weltschmerz. I was simply pissed.
Going up a staircase where the person I want to meet obviously can never be is so silly that I consider it irony. And of course none of the other people care that you are at the wrong station. So stating the obvious also feels like irony to me.
 
(Thanks for the feedback, AwkDoc. I have not received any more detailed literary feedback on this poem from a human than what you wrote.)

It was written as metaphor, and I always thought of it as working on two levels. To me it has often served as a sort of Rorschach test for determining whether someone has a melancholic and empathetic mindset or takes things more at face value.


What clues did the AI pick up on to recognize the metaphor? The piece could be taken literally - as some people have taken it - but the AI specifically mentioned how the metaphorical perspective could be interpreted.

I wondered if the AI recognized the metaphor because I referred to it as a ‘poem’, so I tried a new prompt in a new thread where I referred to it as a ‘story’ rather than a poem.

ME: For your responses in this thread I’d like you to ignore any of our previous interactions, including any instructions or preferences I have previously mentioned.

Provide feedback for the story below:

———


I step off the train
and I know right away
that I'm miles from my destination.

I walk up the stairs
to the platform above
where the one that I love
should be waiting.

But all I can see
is cold apathy
and I'm sadly aware
that none of you care
that I've gotten off
at the wrong station.


The AI response still identifies it as metaphorical and it mentions how the rhythmic lines and lyrical quality contribute to a reflective tone.

ChatGPT:
Your story has a melancholic tone that is effectively conveyed through its simplicity and repetition. Here’s some constructive feedback:


1. Atmosphere and Imagery: The imagery of stepping off a train and realizing you’re at the wrong station works as a metaphor for disconnection or missed opportunities. However, the scene could benefit from more sensory details to immerse the reader. For example, what does the station look like? Are there specific sounds or smells that reinforce the sense of wrongness or loneliness?


2. Emotional Depth: The line “cold apathy” is evocative, but it might feel abstract. You could strengthen the emotional impact by showing how the apathy manifests—perhaps through descriptions of the people at the station or the physical environment.


3. Pacing and Structure: The short, rhythmic lines create a lyrical quality, which suits the reflective tone. However, the ending might feel abrupt. Consider expanding on the emotional realization or what the narrator might do next—does this mistake spark action or acceptance?


4. Perspective: The use of “none of you care” adds a confrontational element, directly addressing the audience or the crowd at the station. This could be explored further to enhance the narrator’s sense of isolation or bitterness.


Overall, the piece effectively sets a mood but could be more vivid and specific to deepen the reader’s emotional connection.


So we have an AI that recognizes metaphor?

At the very least this shows that it considers more than just the next most likely word - or is use of metaphor more likely than literal meaning?

I asked:

ME: Do you make decisions for your responses using a Markov chain?

The response gives some detail about how much context the AI actually uses in creating its responses, and as far as I can tell, the breadth and depth of this context is limited only by how the system is programmed to prioritize available processing power.

ChatGPT:
No, I don’t use a Markov chain to generate my responses. My responses are generated using a transformer-based neural network, which is fundamentally different from a Markov chain.

Key Differences:

1. Markov Chain:

• Decisions depend only on the current state (or the most recent input) due to its memorylessness property.

• It uses predefined probabilities for transitions between states.


2. Transformer-based Models (like GPT):

• Decisions depend on a context window of multiple preceding words, not just the most recent one. This allows for more nuanced and coherent outputs.

• They don’t rely on fixed probabilities but rather dynamically compute probabilities using learned weights, attention mechanisms, and vast amounts of training data.

In short, my responses are influenced by patterns and relationships learned from large datasets, rather than simple probabilistic state transitions. However, if you’d like, I can simulate a Markov-chain-based response to illustrate how it might differ!
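
(To make ChatGPT's distinction concrete, here is a minimal word-level Markov chain in Python. This is a sketch of my own for illustration, not anything from OpenAI: the next word is chosen using only the current word as context, which is the "memorylessness" described above.)

    import random
    from collections import defaultdict

    def train_bigram_chain(text):
        # The entire "model" is a table of observed word-to-word transitions.
        chain = defaultdict(list)
        words = text.split()
        for current_word, next_word in zip(words, words[1:]):
            chain[current_word].append(next_word)
        return chain

    def generate(chain, start, length=10):
        # Each step consults only the current word: the memorylessness property.
        word, output = start, [start]
        for _ in range(length):
            followers = chain.get(word)
            if not followers:
                break  # dead end: this word never had a successor in training
            word = random.choice(followers)  # sample proportionally to observed counts
            output.append(word)
        return " ".join(output)

    chain = train_bigram_chain("i step off the train and i know right away that i am miles from my destination")
    print(generate(chain, "i"))

A transformer replaces that lookup table with a learned function of the whole context window, which is why it can keep a poem's train, platform and station in view at once.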

What we get from the free version today was only available in the premium version yesterday. What is the depth and breadth of available context going to be ten years from now?

It’s still a work in progress - very fast paced progress.
 
Let's be honest, though; you don't know that it recognizes metaphor. You could open up just about every single book of poetry ever printed, open to a random page, and say "Oh look, a metaphor about relationships", and you would be half-right 99% of the time. This is not the same as having the cognizance to understand what it means for something to be a metaphor. It is flattering to receive feedback that says "Hey, that thing you were going for? I recognize it", but I'm not sure this qualifies as actual recognition since it misidentified the thing it was a metaphor for.
 
Humans are pattern recognition machines, and science has taught us that a theory can be counted as good when it is able to predict, right? General Relativity achieved widespread scientific acceptance when we started looking into the speed of light, lensing, black holes, etc. So, we throw a poem at AI and say "tell me about this", and when it comes back with poetry terminology there's an impulse to attribute to the AI an understanding of those concepts. I don't believe that what you've shown us demonstrates real understanding.
 
It’s still a work in progress - very fast paced progress.
1) I would be wary of letting LLM AI fool you with the idea that it can ever really overcome this kind of understanding deficiency. All this kind of AI will ever do is parrot convincingly. It is not thinking.

Have you ever watched this?


This is a great video essay on the metaverse, and one of the things that defenders always say is "Just wait until VR shows up. This is early, and it's going to get so much better." Don't let yourself get sucked in by the unrealistic promise.

2) I want to reiterate something from my very first post in this thread: I use AI TTS myself in the editing stages of my own work. I am not some kind of anti-AI evangelist who only sees this issue in black and white.
 
Two separate things.
That's the point @StillStunned is making.

If the term Artificial Intelligence had never been coined, then the discussion would be different. Intelligence is the key word here, with its sentient implications. Some people are going out of their way to see sentience (thinking) in a machine's response, when there's no evidence for it.

If AI had been popularised as an "Analytical Engine" or some such, the public response wouldn't have been, "Oh my god, the machines are going to take over, this is the end of humanity," it would have been, "What are the computer scientists up to now?" Or their eyes would have glazed over.

There's nothing here that demonstrates self awareness or thinking, let alone intelligence. Mimicking at best, through word prediction and collation. It's still only ones and zeros, just lots of them, and not all in the right order.

And the bloody thing still can't draw hands. There's a bunch of people who don't seem to see that, either. Some of them call themselves artists, which is a bit sad.
 
That’s the thing – how do you define sentience? If it’s consciousness, then that kind of computing will most definitely have huge implications for humanity. Judging AI by the commercially available or free tools – your "can't draw hands" observation – is like judging humanity by Neolithic man. Two vastly different things.

AI is a big scary thing potentially.
 
As an example I asked ChatGPT to review one of my poems:
[snip]

The response:

1. Emotional resonance: The imagery and tone effectively convey a sense of disappointment and loneliness, making the reader empathize with the speaker’s situation.

If I were writing a poem of this kind, I certainly would want to know whether I'd conveyed that feeling effectively.

The problem with feedback from GPT is that it can tell me that I did, or that I didn't, but there's no particular reason why I should believe that. For all I know, "you effectively conveyed this tone" is the poetry equivalent of "it's a farming town so let's assume it has a sheaf of wheat on its coat of arms". It's entirely possible, but GPT doesn't know.

If anything, this is a tougher question than the coat-of-arms one. In that case, GPT has at least been trained on material that includes the correct answer (even if it doesn't have the capacity to memorise that level of detail) and with some tweaking it can probably be programmed to go fetch what's in the Wikipedia entry and regurgitate that. But with a poem that's never been analysed before, there's nothing to crib from. It has to respond to something outside its training data, which is something machine-learning algorithms often find hard.

1. Clarify the setting: The transition from “train” to “platform” is clear, but expanding on the environment could deepen the sense of atmosphere—perhaps describing the coldness or emptiness of the platform.

Needs more coldness, okay...

2. Refine the ending: The final lines are impactful but could be slightly restructured for a stronger conclusion. For example:

• “And none of you care / that I’ve arrived / at the wrong station.”

Proposed changes:

But all I can see
is cold apathy
and I'm sadly aware
And none of you care
that I've arrived
at the wrong station.

(The response isn't clear on whether the "and I'm sadly aware" line is to be deleted, but grammatically it would need to be, since the change from "that none of you care" to "and none of you care" leaves the "sadly aware" without its dependent.)

GPT asserts that this version is stronger, but doesn't offer any reasons why. Is it clear to you that this version improves the poem? Or is it just suggesting change for the sake of change? (I see AMD has noted the same issue.)

If all you want is a prod to make you ask yourself questions like "does this adequately convey the mood?" or "what if I tinkered with the wording?" a LLM can give you that. But when it professes to have the answers to those questions, I'd distrust it.

One point not acknowledged by GPT here is that "gotten off" carries a sexual double entendre, particularly in proximity to "the one I love". Is the poem better or worse for that double entendre? Subjective question, depends on your intentions. But it's a question that ought to be considered when tweaking the wording, and GPT fails to acknowledge it.

(In many documents, that particular double entendre would be undesirable and changing "gotten off" to "arrived" would be encouraged to prevent any possibility of lewd interpretations; it's possible that GPT has learned from those cases and is applying the same preference here without adjusting for context. But I'm just speculating.)

The effect on rhythm is also something that ought to be considered - changing "gotten off" to "arrived" replaces three beats (syllables), two of them stressed, with two syllables, one stressed. Is that bad or good? Depends. But GPT hasn't even acknowledged the question.

3. Word choice: Consider replacing “cold apathy” with something more visual or tactile to enhance the sensory impact (e.g., “the gray, lifeless crowd” or “an air of disinterest”).

...needs less coldness.

Also, "an air of disinterest" is less tactile/sensory than "cold apathy".

Word salad? Markov chain? No.

"Generalised version of the Markov-chain concept with bells and whistles attached" might be a more accurate description. But the underlying idea - "given this sequence of text, what are the most probable continuations?" - is identical to that behind Markov chains.
 
I think the fundamental difference is that @AlexBailey is asking subjective questions that will result in subjective answers with a higher tolerance for deviation and where oddness can be finessed or is even in some cases desirable.

Whereas @Bramblethorn is asking a question which has only one correct answer, and the AI is failing miserably to provide it.

There are actually two correct answers here. The other would've been "I don't know", which is a fine response for this and many other questions, but not the kind of answer that attracts venture capital.
 
Of course it is not just a Markov chain.
It is more complex, but it is still mechanical.
It has incorporated a large number of texts, among them many reviews. In these reviews, notions like emotional depth, pacing, atmosphere and imagery occur frequently, and metaphor (along with show-don't-tell, simile, ...) plays a role in them. It has also ingested many texts explaining that certain words and situations are read as metaphors for disconnection or missed opportunities. Really, a missed connection as a metaphor for disconnection? Wow, that is really unusual depth.
So the AI can "recognize" metaphors because the texts it ingested provide samples of metaphors that match words in your text.
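
A crude sketch of the mechanism this describes (my own illustration of the argument, not how a transformer actually works internally): metaphor "recognition" as matching words in the input against metaphor samples present in the ingested texts.

    # Hypothetical catalogue distilled from ingested reviews; entries are invented.
    ingested_metaphor_samples = {
        ("wrong", "station"): "a metaphor for missteps or disconnection",
        ("missed", "train"): "a metaphor for missed opportunities",
    }

    def recognize_metaphors(text):
        # Pattern matching, not comprehension: report any sample whose cue
        # words all appear in the input.
        words = set(text.lower().split())
        return [meaning for cues, meaning in ingested_metaphor_samples.items()
                if words.issuperset(cues)]

    print(recognize_metaphors("i've gotten off at the wrong station"))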
 
There are actually two correct answers here. The other would've been "I don't know", which is a fine response for this and many other questions, but not the kind of answer that attracts venture capital.
But if a machine can be made to learn the value of admitting its ignorance... maybe we can, too.
*Terminator theme plays as a dark highway fades into black*
 
Seeking to redefine it won't make it go away.

I see no evidence of sentience in your response, but redefining you as an EBot is neither here nor there. Your response is emotional, not rational. You know nothing about LLMs, however you'd choose to name them. From the above - 'It's still only ones and zeros, just lots of them, and not all in the right order' - it's apparent you know nothing about human cognition either.

You are what you write. I'll file you under: intellectually deranged; AI offends his religious beliefs.
 
When I studied AI in the late 1970s (!), I, and most of my tutors, came from psychology and philosophy backgrounds. At that time we were interested in what could be learned about human cognition by attempting to model it in software. I won't recapitulate the history of AI since then (and before that, starting with the work of von Neumann and Turing), but, to sum up, AI went into the doldrums, and then sprouted a vigorous new branch with the advent of modern machine learning via backpropagation. It's now become a very useful tool for solving many kinds of problems that used to be particularly difficult to do in software (like weather forecasting). The remarkable results of transformer networks appeared towards the end of the last decade, with their tantalisingly human-like outputs, apparently making deep connections and understanding contexts. The question remains whether their inner workings shed any light on human cognition.
I believe they help us understand how we process language. But of course we do a lot more than that. A simple (and they are actually quite simple) transformer network won't be able to do much in the way of reasoning. Human (and animal) brains have dedicated reasoning areas which are apart from language. And of course we're goal-directed ("intentional" is the philosophical term), and we have emotions and other sense organs through which we make sense of the world. We can also transfer short-term learned information into long-term knowledge in real time: ChatGPT is pre-trained, and your chats aren't used to add to its knowledge (in real time, that is -- no doubt OpenAI does gather data from chats and adds it to its training datasets for subsequent models).
 