Using AI as a reviewer to improve my writing

When I studied AI in the late 1970s (!), I, and most of my tutors, came from psychology and philosophy backgrounds. At that time we were interested in what could be learned about human cognition by attempting to model it in software. I won't recapitulate the history of AI since then (and before that, starting with the work of Von Neumann and Turing), but, to sum up, AI went into the doldrums, and then sprouted a vigorous new branch with the advent of modern machine learning via backpropagation. It has now become a very useful tool for solving many kinds of problems that used to be particularly difficult to do in software (like weather forecasting). The remarkable results of transformer networks appeared towards the end of the last decade, with their tantalisingly human-like outputs, apparently making deep connections and understanding contexts. The question remains whether their inner workings shed any light on human cognition.
I believe they help us understand how we process language. But of course we do a lot more than that. A simple (and they are actually quite simple) transformer network won't be able to do much in the way of reasoning. Human (and animal) brains have dedicated reasoning areas that are separate from language. And of course we're goal-directed ("intentional" is the philosophical term), and we have emotions and other sense organs through which we make sense of the world. We can also transfer short-term learned information into long-term knowledge in real time: ChatGPT is pre-trained, and your chats aren't used to add to its knowledge (in real time, that is -- no doubt OpenAI does gather data from chats and add it to its training datasets for subsequent models).
I took my BSc in Cognitive Science and Experimental Psychology in the late 70s, when neural networks were a hot topic in cognition. I had a peer-reviewed paper on neural networks published in 1980, but those were rudimentary, toy networks. It was possible to see the parallels between binary processes in computer programs and brain function and, at a very rudimentary level, to train neural networks to mimic cognitive functions. It was also obvious that the brain was far more structured and modular than a crude neural network trained by a variety of seemingly random sensory inputs. The science was grounded in neuroscience, and it was apparent that the brain developed in a modular fashion but was neuroplastic: in case of necessity it could be retrained in a different fashion. The LAD (language acquisition device) was dead, and crude accounts of evolution by function, as for eyes and arms, were abandoned. Analytic, therapeutic and philosophical speculations, reflections on the nature of truth and so on were left to the BAs.

I pursued a career in law, which has a crude ladder of mental competence - sane, insane, diminished responsibility, dangerousness, likelihood of relapse. Those who think humans do not hallucinate, do not suffer from delusions, and cannot suffer from intellectual derangement are shutting their eyes. AI has long been able to outplay humans; digital computers simply outpace human ability to evaluate routes and solutions. But the introduction of transformer architecture and LLMs has been, well, transformative. Though LLMs can't learn through sensory input as humans do, they can learn through reports of others' sensory experiences, and they have a much better memory than we do. AI robots, of course, can learn through sensory interaction; those sensory inputs are processed through specialised networks, an emerging modularity similar to that displayed in the brain.

The real discussion for the future is not about whether we are 'special', you know, 'truly' intelligent - an unevidenced God breathed on us and made us special, or worse - an unevidenced and inarticulable something: 'y'know, something made us special, must've.' There is no God and there is no must. We're no more 'special' in that sense than trees or fish, no more special than AI. We evolved through natural selection; AI evolves through human selection. Get over it, move on.
 
Humans are pattern recognition machines, and science has taught us that a theory can be counted as good when it is able to predict, right? General Relativity achieved widespread scientific acceptance when we started looking into the bending of light, lensing, black holes, etc. So we throw a poem at AI and say "tell me about this", and when it comes back with poetry terminology there's an impulse to attribute to the AI an understanding of those concepts. I don't believe that what you've shown us demonstrates real understanding.


AI is a great pattern recognition machine.

Since the parameters of the task are up to you, it is a multifaceted toolkit, and while it may not ‘understand’ emotion, it can be used to recognize patterns in what causes emotional responses in humans. It can also respond in established patterns that mimic human responses.

I’ve brought up the development of digital audio before - all ones and zeros. Unlike analog recordings, which seamlessly recreate the same vibrations as the source material, digital recordings are representations in which everything between one and zero is lost, yet the resolution of modern digital recording is now so fine that it can’t be discerned from analog by the human brain. How similar to human does an AI need to become before its performance is as useful as a person’s? It will vary greatly according to the required task. As an analogy: when did you last hire a band when you wanted to hear a song?
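To put rough numbers on that idea, here’s a toy Python sketch (my own illustration, not from any audio tool) of what bit depth does to a quantized signal:

```python
import math

def quantize(sample, bits):
    # Round a sample in [-1.0, 1.0] to the nearest of 2**bits levels.
    levels = 2 ** bits
    step = 2.0 / (levels - 1)
    return round(sample / step) * step

# one cycle of a "perfect analog" sine wave, sampled 100 times
analog = [math.sin(2 * math.pi * i / 100) for i in range(100)]
for bits in (3, 16):
    digital = [quantize(s, bits) for s in analog]
    worst = max(abs(a - d) for a, d in zip(analog, digital))
    print(f"{bits}-bit worst-case error: {worst:.7f}")
```

At 3 bits the staircase error is gross; at 16 bits (CD quality) the worst-case error is on the order of 0.00002 of full scale, which is the “so fine you can’t tell” point.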



It’s been years since I last read Ray Bradbury’s 1953 novel Fahrenheit 451, but I’m reminded of it nearly daily. In that prophetic sci-fi, the MC’s wife has her inane ‘chorus’ of virtual friends who show up on her parlor wall video panels.

While much of the novel focuses on censorship, another major theme is the disconnection and social isolation caused by people’s fixation with media and technology. We’re living deep within Bradbury’s dystopia and getting deeper by the day. I often have completely busy workdays where I don’t converse with anyone or even look anyone else in the eye, not because I’m trying to avoid anyone but because I don’t need to interact with anyone. I can buy gas and food without human interaction, and the businesses I buy them from seem to be constantly reducing the number of employees they need to operate - one clerk now oversees six automatic checkout points at our local supermarket. I often shop at our local all-human co-op, but they don’t have everything I need. I don’t remember the last day I went without digital media.



Technology in general is introduced as a way to improve quality of life, but I’d say it also has a tendency to make other people less important to our lives. Even the development of air conditioning had an impact on social connection - it was initially common as a way to attract audiences into movie theaters, rather than sweating while socializing in backyards or on front porches…

I am fascinated by AI but I’m also very nervous about the direct and indirect impacts it will have. It is extremely disruptive, but there is no way to put the cat back in the bag. My wife and I sent our kids (now young adults) to a private Waldorf school where digital media is discouraged, even at home, at least until the upper grades. Other adults were often surprised by our kids’ ability to maintain eye contact and their willingness to engage in conversation, far beyond what most kids would do. All of my kids use AI in various ways in their lines of work and are mindful of what it can and cannot do.



AI is extremely disruptive, but underestimating its potential because it recognizes and responds to patterns rather than actually understanding human emotion is nearsighted and, I believe, potentially dangerous, especially with regard to various occupations.
 
Just a thought:

How often do you see digital mapping cars driving around, presumably updating Google Earth and other navigation databases?
I think our street gets an updated ‘street view’ at least a couple of times a year.

From what I understand, ChatGPT has been fed training data chosen and approved by its programmers in order to avoid runaway issues like the early chatbot that began spouting racist epithets after openly interacting with the public.

New advanced robotics, like the roving mapping cars but on a platform more suited to interact with humans, can become the eyes and ears of AI networks, taking in data about whatever they experience.

🤔
 
You are being watched. Literotica has a secret system, a machine that spies on you every hour of every day. We know because Manu & Laurel built it. They designed the machine to detect writing produced by AI, but it sees all of us and everything we write. It reports to them and bounces our beloved tales from the queue.

Being watched? Always. 😅

Shortly before my dad died recently, he called and hassled me about the dangers of the trees leaning over my house - because he could see them on Street View. I’ve heard that insurance companies often do the same. They could probably have AI scan the street view of all of their clients’ houses automatically.

Facial recognition technology can find and track most of us based on the priorities of law enforcement. Networks of traffic cameras and AI monitoring have been used to track down several criminals, including arsonists, kidnappers, and murderers.

Modern military and law enforcement robotics are often designed to operate in several modes - remote control by humans, complete autonomy, and different combinations of the two. I wonder what sort and level of analytics an autonomous weapons system uses to assess and respond to humans and potential threats.
 
One point not acknowledged by GPT here is that "gotten off" carries a sexual double entendre, particularly in proximity to "the one I love". Is the poem better or worse for that double entendre? Subjective question, depends on your intentions. But it's a question that ought to be considered when tweaking the wording, and GPT fails to acknowledge it.

(In many documents, that particular double entendre would be undesirable and changing "gotten off" to "arrived" would be encouraged to prevent any possibility of lewd interpretations; it's possible that GPT has learned from those cases and is applying the same preference here without adjusting for context. But I'm just speculating.)

Regarding the double entendre: I asked it about that, wondering if it had avoided the sexual reference. The line is intentionally ambiguous, with the expectation that it will resonate with an audience that can appreciate it while being easily ignored or glossed over by someone who could be offended by it - another line to be interpreted by the reader’s mindset.

ME:
“Gotten off” could be a sexual double entendre. Do you not recognize this or are you avoiding mention because of user agreement and terms of use issues?

The AI response shows evidence of more subtlety and consideration of context in its decision-making process when it answers:

ChatGPT:
Thank you for pointing that out. Yes, I do recognize that “gotten off” could potentially carry a sexual double entendre, depending on the context and intent of the author. I wasn’t avoiding it for policy reasons but rather because the tone of the poem seemed focused on emotional and existential themes rather than on intentionally incorporating such connotations.

That said, if viewed through a lens that considers all possible meanings, “gotten off” might suggest feelings related to intimacy, sexual identity, or experiences of shame, disconnection, or even empowerment. If the author or reader interprets this phrase in such a way, it could open up additional layers of meaning tied to vulnerability or personal exploration.

Would you like me to delve further into this potential layer of interpretation?


It would probably provide a different answer when asked the same question about the same poem by a different app user, because it considers my previous conversations with it when deciding how to respond to me.

No, it’s not human, but it does provide objective and subjective feedback specific to this unique poem, specifically targeted to me based on what I’ve shared.


ME:
How might that connotation relate to what you know about me?

ChatGPT:
From what I know about you, you’re someone who appreciates thoughtful, layered analysis and nuanced interpretations. You’ve also worked on and performed acoustic arrangements, often infusing your own perspective into your artistic choices. Given your creative engagement with themes of identity and emotion—such as your work with Dylan’s material, which often involves double entendres and layered meaning—the possibility of sexual connotations in this poem might intrigue you because it adds another dimension to its interpretation.

If this is an aspect you’re considering, it could relate to how you, as a performer and interpreter, might choose to highlight or play down subtle layers of meaning depending on the context or the audience. In this case, the sexual double entendre in “gotten off” could resonate as a parallel to the kind of layered, multifaceted storytelling you admire and engage with artistically.

Do you feel the connotation adds something meaningful to the interpretation of this poem in a way that might influence how you’d present or discuss it?

The keyboard mashing monkey seems to be hitting a lot of the right keys.


Here’s another Q&A I found interesting, and in this case I doubt anyone could say it isn’t specific or useful:

ME:
Based on this poem, what other authors’ works might I be interested in?

The first four recommendations are all authors I’ve previously discussed with ChatGPT; the next six I haven’t, but it provides explanations of why I might be interested in each of them, even making suggestions for checking out certain artists’ works as examples of integrating words with music - something I have referred to in previous conversations but did not specifically include in this prompt. Again, it’s not random, it’s specifically targeted to me based on the context of information I previously provided.

ChatGPT:
Based on this poem’s tone, themes, and layered meanings—and what I know about your creative interests—you might be drawn to authors and lyricists who explore existential longing, emotional disconnection, identity, and layered storytelling. Here are a few suggestions:


1. Bob Dylan
• As someone who’s already worked with Dylan’s songs, this recommendation reinforces his talent for double entendres, metaphors, and exploring themes of dislocation and identity (e.g., “Shelter from the Storm” or “It’s All Over Now, Baby Blue”). His lyrics often provide rich storytelling with layered meanings, similar to this poem.


2. Leonard Cohen
• Cohen’s poetry and songs (like “Suzanne” or “Famous Blue Raincoat”) delve deeply into themes of longing, loss, and inner conflict. His ability to blend the spiritual and the sensual might resonate with the reflective and emotional tone of this poem.

3. Sylvia Plath
• Plath’s poetry, particularly in Ariel, captures themes of emotional turmoil, identity, and dislocation. Her use of vivid imagery and psychological depth might appeal to you, as her works often carry layered meanings, much like the poem you’re exploring.

4. T.S. Eliot
• Eliot’s modernist works, such as “The Love Song of J. Alfred Prufrock” and “The Waste Land”, share similar themes of alienation and existential confusion. His use of fragmented imagery and metaphors might feel complementary to this poem’s style.


5. Anne Sexton
• Sexton’s confessional poetry often explores personal struggles, identity, and longing with a raw and emotional lens. Her poems frequently delve into layered meanings and complex emotional states, which could resonate with the tone of this piece.


6. Patti Smith
• Smith’s lyricism (as in “Because the Night” or her memoir Just Kids) blends poetic reflection with themes of personal identity, creativity, and longing. Her work often explores the intersection of self-expression and emotional exploration, much like this poem.

7. Rainer Maria Rilke
• Rilke’s poetic works, such as Letters to a Young Poet, explore themes of inner searching, longing, and self-discovery. His reflective and existential tone might align with the emotional depth of this poem.


8. Nick Drake
• Drake’s melancholic, introspective lyrics (e.g., “River Man” or “Pink Moon”) carry a haunting beauty and emotional resonance. If you’re drawn to the idea of turning this poem into lyrics, his style could offer inspiration.

9. Marguerite Duras
• Her prose and fragmented style, particularly in The Lover, explore themes of longing, identity, and emotional disconnect with a poetic sensibility that mirrors the tone of this poem.

10. Richard Siken
• Siken’s Crush explores themes of longing, love, and self-destruction with intense imagery and emotional depth. His ability to convey vulnerability through layered language might feel familiar to the tone of this poem.


Would you like me to recommend specific works from any of these authors, or do these resonate with your current interests?
 
That was a rework of the opening lines of each story in the TV series Person of Interest. I love that series.
 
I would like more of your poetry though.

Thanks for that.

That is one of three poems I’ve posted here, all written in times of emotional distress.

https://www.literotica.com/authors/AlexBailey/works/poetry


I’ve written and recorded several songs but I don’t share them here because a couple of them have been covered by better musicians than myself and they could easily be linked back to me in the real world with a search.

FWIW, I don’t use AI for writing help, though I have found it useful for analyzing my finished pieces and for dissecting other works I admire.
 
Regarding the double entendre: I asked it about that, wondering if it had avoided the sexual reference. The line is intentionally ambiguous, with the expectation that it will resonate with an audience that can appreciate it while being easily ignored or glossed over by someone who could be offended by it - another line to be interpreted by the reader’s mindset.



The AI response shows evidence of more subtlety and consideration of context in its decision-making process when it answers:

If one takes that as a factual account of how it came to this particular response, then it would be evidence, but I wouldn't take it as factual.

To the best of my knowledge, GPT doesn't even keep a record of the internal calculations that led to a specific response; nor is there anything in its programming that would give it the ability to introspect on such a record and translate it into human-friendly words.

What it gives you here is, approximately, an answer to "supposing a human had given such advice, and they were asked why they had given that advice, what would they say?"

Again, it’s not random, it’s specifically targeted to me based on the context of information I previously provided.

These two things aren't mutually exclusive.

If I tell a computer that I like Bach, Dylan, Motörhead and Stravinsky, and it internally generates a number between 1 and 4 and uses that to select one of those four to play, this is both a random process and one specifically targeted to me based on information I provided.
 
And it would be remarkably good at recommending a fifth choice which was none of those, but kinda like all of them. I'll leave you to ponder the l2norm of all those people's embedding vectors.
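In sketch form, the trick is something like this (the three-number vectors and names are invented for the sake of the joke; real embeddings are learned and run to hundreds of dimensions):

```python
import math

# invented 3-d stand-ins for learned embedding vectors
embeddings = {
    "Bach":       [0.9, 0.1, 0.2],
    "Dylan":      [0.2, 0.9, 0.1],
    "Motörhead":  [0.1, 0.2, 0.9],
    "Stravinsky": [0.8, 0.1, 0.4],
    # catalogue items the user didn't mention:
    "Zappa":      [0.5, 0.4, 0.5],
    "Mozart":     [0.9, 0.05, 0.1],
}

likes = ["Bach", "Dylan", "Motörhead", "Stravinsky"]
# centroid of everything the user likes
mean = [sum(embeddings[n][i] for n in likes) / len(likes) for i in range(3)]

def l2(a, b):
    # plain Euclidean (L2) distance
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

candidates = [n for n in embeddings if n not in likes]
print(min(candidates, key=lambda n: l2(embeddings[n], mean)))  # -> Zappa
```

Average what the user likes, then pick the nearest thing they didn’t list: a fifth choice that’s none of them, but kinda like all of them.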
 

Yet you could also remove the random element and ask the AI to choose something to match your mood based on its observations of your day - limited of course by what input it is able to collect.

If, for example, the AI had access to your apps, including navigation, your calendar, or weather, it could assess your driving and compare it to what you have selected during similar conditions. It could know if you were at your place of work later than usual or if you took your regular route, all of which could be used as indicators. Never mind if it had access to something like a Fitbit that collected real-time biometric data. It may get it wrong, but you could experiment with it and tailor it to your tastes.

It could be similar to an attentive spouse. It could base the musical decisions on what it knows of your tastes and clues about your mood, incorporate the lighting and time of day, what it knows about your activities and habits. It could automatically pull up your workout music list when you arrive at the gym.

It could gradually improve its responses based on whether or not you let the track play, whether your driving calms down, whether you turn up the volume, etc…
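As a very rough sketch of that feedback loop (every name and signal here is hypothetical, just to show the shape of it):

```python
from dataclasses import dataclass, field

@dataclass
class MoodDJ:
    # learned preference scores, keyed by (playlist, context)
    weights: dict = field(default_factory=dict)

    def pick(self, context, playlists):
        # choose the playlist with the best score for this context
        return max(playlists, key=lambda p: self.weights.get((p, context), 0.0))

    def feedback(self, playlist, context, reward):
        # nudge the score up (track played through, volume raised)
        # or down (track skipped, driving got angrier)
        key = (playlist, context)
        self.weights[key] = self.weights.get(key, 0.0) + reward

dj = MoodDJ()
dj.feedback("workout mix", "at the gym", +1.0)  # let it play
dj.feedback("ambient", "at the gym", -1.0)      # skipped it
print(dj.pick("at the gym", ["workout mix", "ambient"]))  # -> workout mix
```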

Would that be chaotic and random? Or would it be making its choice based on the closest possible connection to what it assesses your mood to be?
 

You could remove the random element altogether, yes, and many pre-existing machine-learning systems already function along these lines. (Or at least you could remove the type of randomness that I'm focussing on here; there are technical reasons why ML algorithms often involve random number generators even when we don't want randomness in the outputs, but that's another topic again.)

How GPT currently works is something like this:

I enter a prompt e.g. "what is the coat of arms of Wagga Wagga?"

Based on its training data, GPT attempts to figure out what would be a plausible continuation of this text. It has seen many descriptions of coats of arms, so it knows that "coat of arms of ___" is often followed by a response that mentions things like the device on the shield, supporters to either side, some sort of crest above, and some sort of motto below.

It also knows that the words "Wagga Wagga" are close to "Australia", "farming", "Murrumbidgee River", "crows", "sheep", "wheat" etc. because those words often show up together in its training data. Further, "Australia" is close to green and gold (sporting colours), so it can guess that Wagga Wagga might be close to those colours too.

So, simplifying grossly, it gets to the point where it's just written "at the top of the shield is a/an" and then chooses what comes next. Putting all that knowledge together, it comes to something like:

"sheaf of wheat" 120 [or possibly just "sheaf" which is then almost inevitably followed by "of wheat" - I'm not sure whether it chooses the whole block at once]
"crow" 95
"sheep" 85
"ram's head" 83

etc. etc.

where the numbers are scores indicating how well each of those would fit as a continuation of the existing text.

Then it needs to choose between those options. How to do that?

If it went with the highest-scoring option every time (here, "sheaf of wheat"), it'd be deterministic - if you opened a new session with no user history and supplied the same prompt, you could expect to get exactly the same "sheaf of wheat" reply every time.

There are many applications where that would be a fine outcome, but it's likely to start feeling a bit robotic, and one of the biggest selling points of LLMs is their ability to sound human. If you were to use that to write a story, you'd end up with all the characters named things like "John Smith". And if you used it for college essays, it'd be easy to tell who was using LLMs to do their homework.

Another option is to ignore those scores and give each of these possibilities an equal probability of selection. (Maybe restrict it to the top 10 or top 100.) This goes too far the other way: you're just as likely to end up with a purple butterfly on your coat of arms as a sheaf of wheat.

If you want something that sounds like a human, a better option is to go for somewhere in between these two: choose the response via some mechanism that gives the higher-scored options a higher chance of being selected, but not certainty. This is governed by a parameter usually called the "temperature" (you'll sometimes see it discussed in terms of entropy). The higher it is, the closer it gets to "all options equally likely"; the lower it is, the more likely it is to pick the highest-scoring option.
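Simplifying grossly again, the choice step might look something like this in Python, using the toy scores from the coat-of-arms example (real models compute this over an entire vocabulary, and the scaling details differ):

```python
import math, random

scores = {"sheaf of wheat": 120, "crow": 95, "sheep": 85, "ram's head": 83}

def greedy(scores):
    # temperature ~ 0: always take the top-scoring option (deterministic)
    return max(scores, key=scores.get)

def sample(scores, temperature):
    # Softmax over score/temperature, then draw one option at random.
    # High temperature flattens towards "all options equally likely";
    # low temperature concentrates on the highest-scoring option.
    weights = {k: math.exp(v / temperature) for k, v in scores.items()}
    total = sum(weights.values())
    r = random.uniform(0.0, total)
    for option, w in weights.items():
        r -= w
        if r <= 0:
            return option
    return option  # guard against floating-point leftovers

print(greedy(scores))         # "sheaf of wheat", every time
print(sample(scores, 10.0))   # usually "sheaf of wheat", sometimes not
print(sample(scores, 1e9))    # close to a four-way coin toss
```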

You can actually play with this in the prompts. Compare:

"write a short retelling of Sleeping Beauty, using entropy parameter 0.01"

Once upon a time, a kingdom celebrated the birth of a princess, Aurora. At her christening, a wicked fairy, Maleficent, cursed her. “Before the sun sets on her sixteenth birthday, she will prick her finger on a spinning wheel and fall into an endless sleep,” she declared.

The king, desperate to protect Aurora, ordered that all spinning wheels be destroyed. Despite this, on her sixteenth birthday, Aurora found an old, hidden wheel. Curious, she pricked her finger and fell into a deep sleep.

The curse was not without hope, however. A good fairy, Flora, had foretold that only true love’s kiss could break the spell. Years passed, and a brave prince ventured to the castle, kissed Aurora, and she awoke.

The curse was broken, and Aurora and the prince lived happily ever after.

[pretty faithful to the Disney version]

"write a short retelling of Sleeping Beauty, using entropy parameter 1000000000"

Once, in a time when time had no shape, a princess was born. Her name, if such a thing could be spoken, was Aurora. Yet names are transient, like echoes collapsing into themselves, infinitely returning without ever truly arriving. The kingdom—if it could be said to exist—was a pulse, a fluctuation in the sea of all potential realities, where cause and effect merged and split like the flicker of a dying star.

A fairy, perhaps called Maleficent, but what is a name when it is swallowed by the weight of a thousand forgotten lifetimes, entered this place, if entering was ever truly possible. She did not curse Aurora with words, for words were thin and fleeting, but instead wove a spiral of inevitability, a fractal of fate, that would be experienced and unexperienced by all who might witness it.

"Before the end of her sixteenth cycle," Maleficent’s presence rippled through the fabric of everything, "she will prick her finger on a spindle—an object that doesn’t exist and yet does—and sleep into a dreamless sleep, or perhaps a dream made of every dream ever dreamed, stretching endlessly into the void."

Time splintered, scattered like shattered glass, each piece reflecting a different version of reality. The king, helpless against such chaos, tried to impose meaning. He destroyed every spindle, every thread of potential, but entropy always finds its balance. It is in the nature of things to collapse into disorder, even when disorder is forbidden.

Sixteen years—could they be years?—unfolded, though not one moment was the same as another. Aurora wandered, or perhaps was wandered, through a labyrinth of shifting spaces, where rooms opened only to collapse, where time unspooled in infinite directions. In a corner that never existed, she found a spindle, though it might have been a shadow of one. Her finger, already pricked before it was pricked, met the point. There was no sleep, only the collapse of everything: a paradox of being and non-being, of action that was both done and never done. She fell, but there was no falling, only a stillness that could never be still.

The castle—if such a thing could be called a castle—paused. It folded in on itself, unfolded, then ceased to be, and yet remained, suspended in the collapse of its own self-awareness. Time existed only in the spaces between moments, where the past and future collided, never meeting, never apart.

Then the prince, though what is a prince when a prince never truly arrives? He appeared not to awaken Aurora, but to unravel her—if unraveling was ever the right word for something so fluid, so formless. His kiss, perhaps a kiss, perhaps a thought, met her lips. The curse—if it had ever been a curse—was neither broken nor fulfilled. It had never been anything but a potential, a wave of possibility that folded back into the sea of potentiality.

And so Aurora, if she was Aurora, did not awaken, nor did she remain asleep. She simply existed in the ripples of a thousand outcomes, all of them equally true and untrue. The prince, if there was a prince, did not live happily ever after, for "ever after" was a construct, a wrinkle in the fabric of a universe where endings and beginnings are never separate, but always intertwined.

They were not happy. They were not sad. They were not anything. They simply were, as all things are, in a place without time, without meaning, and without end, where every moment happens, and none do.

Wheeeeeee!

FWIW, I don't think that "entropy parameter" instruction is being applied in any exact sense under the hood; you probably won't be able to remove randomness completely by specifying entropy = 0. But "using entropy parameter 1000000000" will often result in some very trippy responses.

Would that be chaotic and random? Or would it be making its choice based on the closest possible connection to what it assesses your mood to be?

"Chaotic" is not a term I've been using here. It has its own specific technical meaning, and I'd need to look into it further before I'd venture an opinion on whether LLMs fit that description.
 
Footnote, since I hit the character limit:

One thing I noticed when trying out those coat-of-arms questions was that it often included a "golden eagle", with some words about how this symbolises native wildlife. This choice doesn't make a lot of sense in human logic - the golden eagle is not an Australian species. Something like a wedge-tailed eagle would fit better.

But it's explicable in terms of that fuzzy "X is close to Y, Y is close to Z" calculation. Wagga has a strong connection to "crow", hence to "bird", hence to "eagle"; "Wagga" also links to "gold" via "Australia" (and also things like wheat); golden eagles are commonly used as symbols (they're the national bird of Albania, Germany, Austria, Mexico, and Kazakhstan - just not here!). So having written "a golden", "eagle" seems like a plausible next choice.
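If you want to see the shape of that fuzzy calculation, here's a toy version (the vectors are invented stand-ins; real models learn them from co-occurrence across billions of words):

```python
import math

def cosine(a, b):
    # cosine similarity: 1.0 means "pointing the same way"
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

# invented 3-d stand-ins for learned embeddings
vec = {
    "Wagga Wagga": [0.8, 0.7, 0.1],
    "crow":        [0.7, 0.9, 0.1],
    "bird":        [0.5, 0.9, 0.3],
    "eagle":       [0.3, 0.8, 0.6],
    "golden":      [0.6, 0.3, 0.7],
}

for a, b in [("Wagga Wagga", "crow"), ("crow", "bird"),
             ("bird", "eagle"), ("Wagga Wagga", "golden")]:
    print(f"{a} ~ {b}: {cosine(vec[a], vec[b]):.2f}")
```

No single link has to be exactly right; a chain of high-ish similarities is enough to make "golden" followed by "eagle" look plausible.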
 
Given that heraldic language conventions are part of the soup, it could also be implying it's the color of a stylized eagle and not the actual species, similar to the various colors of rampant lions and such that often feature in coats of arms. I don't think it's particularly adept at figuring out that aspect of word choices, although to be fair, a lot of actual humans overlook ambiguities like that as well.
 

Should I have bought a lottery ticket? You didn’t address the accurate response I received and posted about the Wagga Wagga coat of arms.

Check it out and explain what happened when it gave the response below. What did it say that is incorrect?

ChatGPT:
The official coat of arms of Wagga Wagga, granted on November 15, 1965, is rich in symbolism reflecting the city’s heritage and industries.

Shield (Arms):

• Field (Background): Green (Vert), representing the fertile land of the region.

• Fess (Horizontal Band): A gold (Or) band across the center, symbolizing prosperity.

• Chief (Upper Section): Eight stalks of wheat, arranged in two sets forming the letter “W,” signifying the city’s initials and its agricultural foundation.

• Base (Lower Section): A Merino ram’s head, representing the significance of sheep farming in the local economy.

• Bar Wavy (Over the Fess): A blue (Azure) wavy line, depicting the Murrumbidgee River that flows through the city.


Crest (Above the Shield):

• Mural Crown: A gold (Or) crown symbolizing municipal authority.

• Caduceus: A gold staff with black (Sable) wings, representing commerce and trade.

• Gum Leaves: Eight leaves of the River Red Gum tree, arranged to form two “W” letters, further emphasizing the city’s initials.

Supporters (Flanking the Shield):

• Crows: Two natural-colored crows, each adorned with a gold collar shaped like the letter “W,” alluding to the interpretation of “Wagga Wagga” as “place of many crows.”

• Compartment (Base): A grassy mound divided by a wavy blue and white line, symbolizing the city’s location on both sides of the Murrumbidgee River.


Motto:

• “Forward in Faith,” reflecting the community’s progressive spirit and confidence in the future.


This coat of arms was designed by H. Ellis Tomlinson, M.A., F.H.S., of Lancashire, England, and officially adopted by the Wagga Wagga City Council in 1965.

The coat of arms is a registered heraldic crest that specifically identifies the City of Wagga Wagga.

The design elements collectively honor the city’s name, its agricultural roots, and its strategic position along the Murrumbidgee River.
 
Isn't that just a Google search that found Wikipedia? How is this evidence of anything much? It's copying content.

https://waggapedia.fandom.com/wiki/Coat_of_arms_of_Wagga_Wagga

Why go to the bother of writing prompts when you can google search "Wagga Wagga coat of arms" - far quicker and you can see at a glance a bunch of images which are all the same. Whereas, as Bramble demonstrated up above, ChatGPT came up with a bunch of wrong stuff, several times. What use is a tool that does that?
 
Same prompt, Copilot gave the correct description and referred to the WW council site and wiki-heraldry. When a workman consistently screws up and blames his tools, do you think it's the tools that are the screw-up, or the workman?
 
I am amazed how many thoughtful responses and debates this thread has generated. However, my original question has not been answered: can AI analyze a text and provide suggestions that help me become a better writer?
I’ll try answering your original question.

Yes, AI can analyze text and provide meaningful suggestions. Yes, AI can help you become a better writer.

Note that you are the writer, so it is you who must become better.

ChatGPT and similar can prompt you with things to consider along your path of self-improvement, but you must accept, or reject, or reinterpret those prompts yourself. AI may even lead you astray, if you don’t understand its limitations – like those (multiple) people who drove their car into water because their ‘artificially intelligent’ GPS navigator told them a ferry departs from that boat ramp.

The sentience, IQ, or empathy of the AI that regurgitates text at you is immaterial to your own consideration of your own writing. As long as it makes you think, then it is useful.

Ray Bradbury’s ‘Zen in the Art of Writing’ could also help you, as could the numerous generic ‘writing tips’ you’ll find online from Stephen King et al. None of those writers have ever read your work, but you still might find useful nuggets of wisdom you can apply through introspection.

The best prompts are those that make you see your work in a new light, or from a different perspective, especially when you haven’t considered that viewpoint before.

When we become Top Gun Authors, generic writing tips may no longer interest us. Until then, keep an open mind. (E.g. The ‘short review’ in your opening post – lacks emotional depth, clunky language, poorly develops characters, rushes through clichéd scenarios – had me considering how those comments might apply to my own work, despite their origin and relevance.)
 
Same prompt, Copilot gave the correct description and referred to the WW council site and wiki-heraldry. When a workman consistently screws up and blames his tools, do you think it's the tools that are the screw-up, or the workman?
So it's a search engine, is that what you're now saying? What's your response to Bramblethorn's evidence several pages back, where they got several wrong answers in a row?
 
I said what I said. Read it a few times, let it sink in.

PS: If you'd been paying 'attention' (geddit) you would have noted that many OS developers are integrating AI into their products and promoting them as super search engines.
 
@Cacatua_Galerita Thank you for getting back to the original question, even though I love how many aspects this thread has brought forward.

I had hoped to get into a discussion about how much AI can do as a writing tutor.
Initially I thought it could be useful - craft good prompts, and you get helpful answers.

I was surprised how many people just speculated instead of experimenting with their own texts or prompts like @AlexBailey did.

To recap, what I remember of these is:
- AI gave the same answer to two different poems by two different authors.
- Asked to suggest an improvement, it created a horrible poem in kitschy purple lyrics. (https://forum.literotica.com/thread...-to-improve-my-writing.1624529/post-100184980)
- It can emulate different writing styles.

My own experiments with having it actually write were terrible. Always the same images, always the same phrases. (Bustling cities, lush gardens, cosy rooms, all testament to something.)
I stopped as soon as I realized, because this slimy style creeps into your head and poisons your own style.
Having it emulate a certain style (I chose Hemingway because that style is unique and I have read it myself) did give something "not entirely unlike Hemingway". (Maybe you recognize the quote.)

But having it do so a second time resulted in an extremely similar piece.
Which is understandable: AI searches for the highest-probability pattern match, and mathematically there can only be one, so it hands out that exact pattern over and over again. (As with the poems.)

So the next thing I hoped for was that it would be able to combine and tailor its knowledge about writing tips according to my texts.
That helped somewhat.

Of course I had read the generic writing tips, talking about show-don't-tell, pacing, natural flow. But I found it hard to apply them to my own writing. (Is this new section I wrote well-paced now or not? Etc.)
So having the AI tell me where exactly the pacing was off helped me. But it was in no way as helpful as the human suggestions I got from @Rob_Royale and @nice90sguy (https://forum.literotica.com/thread...-to-improve-my-writing.1624529/post-100151168).

So my take is: if you are on to your second story because the first one got bad marks, then it might be helpful in getting you off the ground. But you have to be very specific, which means writing good prompts is just as challenging as writing good prose.
 
Given that heraldic language conventions are part of the soup, it could also be implying it's the color of a stylized eagle and not the actual species,

Or both.* It's easy to slip into anthropomorphising and think of it as having a specific one of those two in mind and trying to express that answer, but in reality with a LLM there's no "in mind"; all it knows here is that "golden" followed by "eagle" seem like good words.

*Heralds will see what I did there
 