Using AI as a reviewer to improve my writing

Or both.*
*Heralds will see what I did there

It's easy to slip into anthropomorphising and think of it as having a specific one of those two in mind and trying to express that answer, but in reality with a LLM there's no "in mind"; all it knows here is that "golden" followed by "eagle" seem like good words.
To be clear, I wasn't trying to imply it understands the distinction. I just wouldn't be surprised if the 'route' it took to get to golden eagle was via 'heraldic imagery conventions' as opposed to an error on some 'animals associated with Australia' list. But sure, it could well be attracted to the phrase 'golden eagle' because it has additional meanings that make the algorithms think the words are extra-humanlike to use because they appear in multiple contexts.
 
Should I have bought a lottery ticket? You didn’t address the accurate response I received and posted about the Wagga Wagga coat of arms.

Excuse me, but I did address it:
If you are able to reliably get accurate answers from GPT for such questions, that is interesting and unusual; if you can provide reproducible instructions that I or others can follow for getting accurate answers, I'd love to see them. Until then I remain skeptical.

Also here:
If I repeated the experiment enough times, perhaps it would eventually produce a link to the Wiki article and/or quote directly from that article. But if it takes this much work to get a response that isn't complete fabrication, on a topic where verifiable sources exist, when we know what the answer should be...why would we put stock in its response on a story that's never been reviewed before? How can we tell whether we've hit upon just the right prompt to get a factual answer, without already knowing the answer?

Check it out and explain what happened when it gave the response below.

Until you can provide instructions that allow for replicating that response, as I previously requested, that's hard to answer.

But my best guess, as indicated above and as EB has suggested, is that you have it configured with some plugin/etc. that allows for it to retrieve content from the relevant wiki. Such things do exist.

If that is indeed the explanation, and GPT's ability to produce an accurate answer is dependent on having access to a well-organised collection of accurate human-written answers to relevant questions, then presumably that ability is non-functional when dealing with questions like "critique this story" to which no such pre-written answers are available.
 
To be clear, I wasn't trying to imply it understands the distinction. I just wouldn't be surprised if the 'route' it took to get to golden eagle was via 'heraldic imagery conventions' as opposed to an error on some 'animals associated with Australia' list. But sure, it could well be attracted to the phrase 'golden eagle' because it has additional meanings that make the algorithms think the words are extra-humanlike to use because they appear in multiple contexts.

Yeah, words with multiple meanings are tricky for LLMs.

Embedding depends on assigning each word a vector (a sequence of numbers) that is used to gauge similarity and relationships between concepts, e.g. I might have something like:

"USA" = [1,0,1,0]
"Australia" = [1,0,0,1]
"Washington DC" = [0,1,1,0]
"Canberra" = [0,1,0,1]
"country" = [1,0,0,0]
"capital city" = [0,1,0,0]

If I give a LLM a question like "Washington DC is to USA as Canberra is to ____?" it can then do something like this:

"USA" minus "Washington DC" = [1,0,1,0] - [0,1,1,0] = [1,-1,0,0]
"Canberra" plus ("USA" minus "Washington DC") = [0,1,0,1] + [1,-1,0,0] = [1,0,0,1]

Look for a word near [1,0,0,1] and we find "Australia". In fact, if these embeddings have been chosen well, we could take any capital city and add [1,-1,0,0] to it, and expect to end up somewhere close to the relevant country. Make those sequences long enough, and throw enough computing power at figuring out an appropriate vector for each word, and we can do some useful things with it.
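
(If it helps to see that spelled out, here's a toy sketch of the same arithmetic in Python. The four-number vectors are invented for illustration; real embeddings are learned from data and have hundreds or thousands of dimensions.)

import numpy as np

# Toy, hand-assigned embeddings from the example above.
embeddings = {
    "USA":           np.array([1, 0, 1, 0]),
    "Australia":     np.array([1, 0, 0, 1]),
    "Washington DC": np.array([0, 1, 1, 0]),
    "Canberra":      np.array([0, 1, 0, 1]),
    "country":       np.array([1, 0, 0, 0]),
    "capital city":  np.array([0, 1, 0, 0]),
}

def nearest(vector, exclude=()):
    # Smallest L2 (Euclidean) distance wins.
    candidates = {w: v for w, v in embeddings.items() if w not in exclude}
    return min(candidates, key=lambda w: np.linalg.norm(candidates[w] - vector))

# "Washington DC is to USA as Canberra is to ____?"
target = embeddings["Canberra"] + (embeddings["USA"] - embeddings["Washington DC"])
print(nearest(target, exclude={"Canberra", "USA", "Washington DC"}))  # prints: Australia

The "nearest" step is just smallest L2 (Euclidean) distance, which is the kind of proximity people mean when they talk about how "close" two embeddings are.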

But when "Paris" is both the capital of France and the lover of Helen of Troy, when "Washington" is a state and a capital city and a founder, things get messier. I know a lot of work has been put into trying to deal with those sorts of cases - I think there might be some kind of "recognise words with multiple meanings, then learn how to distinguish between them and give each meaning a separate embedding" approach? - but I'm not down on the details, and I suspect they'd still cause hiccups in cases like this, where both meanings are contextually plausible.
 

Apologies. I somehow glossed over your previous response, largely because I don’t seem to have a problem getting an accurate response from ChatGPT about the Wagga Wagga coat of arms.

ME: Provide the official description of the coat of arms for Wagga Wagga, Australia.

ChatGPT:
The official coat of arms of Wagga Wagga, granted on November 15, 1965, is described as follows:

Shield (Arms): A green field featuring a gold horizontal band (fess) between eight gold wheat stalks arranged in two ‘W’ shapes at the top, and a gold Merino ram’s head at the bottom. A blue wavy stripe (bar wavy) runs across the gold band, symbolizing the Murrumbidgee River.

Crest: Above the shield, a gold mural crown is adorned with a gold caduceus (staff of Mercury) with black wings, and eight River Red Gum leaves arranged to form two ‘W’ shapes, representing the city’s initials.

Supporters: Flanking the shield are two crows with wings raised, each wearing a gold collar shaped like the letter ‘W’. They stand on forked twigs above a grassy compartment divided by wavy blue and white lines, indicating the city’s location on both sides of the river.

Motto: “Forward in Faith”



The elements of the coat of arms symbolize Wagga Wagga’s agricultural heritage, with the wheat and ram’s head representing local industries, and the crows alluding to the city’s name, which is derived from a Wiradjuri term meaning “the place with the many crows.”

For a visual representation, you can view the coat of arms here:

The response included a link to Wagga Wagga in Google Maps and three separate links to the same heraldry wiki page for Wagga Wagga:

https://www.heraldry-wiki.com/wiki/Wagga_Wagga?utm_source=chatgpt.com


So yeah, an accurate description and a link to a source of the requested information and an image.

Don’t miss this:
ME:
What information or preferences have I previously provided that influence your response to my prompt in this thread.

ChatGPT:
In this thread, you haven’t provided any specific preferences or prior context that directly influence my response to your current question about the coat of arms for Wagga Wagga. My approach to your query was guided by the following:

1. Your noted preference for clear, factually reliable communication when providing responses. This led me to verify the information by searching online for an accurate and official description.

If you’d like to clarify how I should tailor my responses or prioritize certain aspects, feel free to let me know!


“My approach to your query was guided by the following…”

How many “bells and whistles” has it added to the Markov chain? How many more can it consider? The notes that came with the most recent update specifically mentioned how it was now able to consider more user background and previously shared information - more context.



Note that above are the only two prompts I gave in this new thread.

In spite of all of your mentions of Markov chains and similar explanations, it has retained my instructions and preferences and ‘considered’ them when responding to me.

You have not been establishing the same preferences with your version of the app. It obviously has access to my previous conversations across many threads and has retained my preferences and does return the accurate requested data - three times now in separate, non-consecutive threads.

Several of the prompts you’ve shared include red herrings and intentional challenges to provide objective answers to subjective questions. You might as well be asking “who was a better composer, Bach or Beethoven?” and demanding a factual answer.

How many “bells and whistles” has it added to the Markov chain? What all does it consider when establishing l2 norm proximity?
 
Let’s try a different coat of arms search:
ME: Provide the official description of the coat of arms for the town of Fucking, Austria.

An inaccurate response on the coat of arms with the first attempt: (complicated by the fact that the name of the town has changed and it doesn’t have its own coat of arms)
ChatGPT:
The village formerly known as Fucking, Austria, changed its name to Fugging on January 1, 2021. As a small village, Fugging does not have its own official coat of arms. Instead, it falls under the jurisdiction of the municipality of Tarsdorf, which bears the coat of arms for the entire municipality. The coat of arms of Tarsdorf is described as follows:

“In a shield divided by silver and red, a green linden branch with two leaves and a linden blossom in reversed colors.”

This description indicates that the shield is split into two halves: the left half is silver (white), and the right half is red. A green linden branch, featuring two leaves and a blossom, is depicted across the shield, with the colors of the branch’s elements reversed on each side to contrast with the background.

This coat of arms represents the entire municipality of Tarsdorf, including the village of Fugging.

(It also included ten source links.)

My second prompt produced the correct description:

ME:
You described the coat of arms: “In a shield divided by silver and red, a green linden branch with two leaves and a linden blossom in reversed colors.” Is this accurate? It doesn’t match the coat of arms on the Wikipedia article


ChatGPT:
Apologies for the earlier inaccuracies in my description of the coat of arms for Tarsdorf, the municipality that includes the village of Fugging (formerly known as Fucking). The correct blazon of Tarsdorf’s coat of arms is:

“Gules, a sword rayonny Or between a woodruff flower Argent, stalked and leafed of the same, and a bog-rosemary flower Argent, stalked and leafed Vert.”

In heraldic terms, this means:

• Field (Background): Red (Gules).

• Primary Charge: A golden (Or) sword with rays emanating from it (rayonny).

• Secondary Charges: On one side of the sword, a white (Argent) woodruff flower with its stalk and leaves also in white; on the other side, a white bog-rosemary flower with green (Vert) stalk and leaves.

This coat of arms symbolizes the municipality of Tarsdorf, encompassing all its constituent villages, including Fugging.

For a visual representation, you can refer to the image of the coat of arms of Tarsdorf.

The only inaccuracy I see compared to the image below is where it mentions “with green (vert) stalk and leaves.”
[Attached image: the Tarsdorf coat of arms]


Seeking more reliable responses:
ME:
How could I have better crafted my first prompt that would have produced a correct description with your first response?

ChatGPT:
Your first prompt was clear and straightforward. The issue wasn’t with your wording—it was precise enough to warrant a correct response. Instead, the inaccuracy came from the process of retrieving or interpreting the relevant information. Here’s how you might refine a similar prompt for future queries to help ensure accuracy:


1. Specify the current or historic name:

• Mention both “Fucking (former name) and Fugging (current name)” to avoid ambiguity.

• Example: “Provide the official description of the coat of arms for the village formerly known as Fucking, now Fugging, in Austria.”

2. Mention the broader context:

• If the entity doesn’t have its own coat of arms, it often falls under a larger administrative body (like a municipality). Specifying this can help clarify:

• Example: “If the village doesn’t have its own coat of arms, please describe the one for its municipality (Tarsdorf).”


3. Request a source check:

• Asking for corroboration directly might nudge toward verification:

• Example: “Provide the official description of the coat of arms for the town of Fucking (now Fugging), Austria, and confirm its accuracy against reputable sources like Wikipedia or official Austrian heraldry records.”


These tweaks wouldn’t guarantee a correct answer every time but could help refine the process. If there’s ambiguity or layers to your question, emphasizing details like historical context or administrative structures is particularly useful.

When I asked about the green stalks and leaves:

ChatGPT:
You are absolutely correct to scrutinize how I interpreted the stalks and leaves as green in the second description. This was an error in my response, stemming from a misinterpretation of the original heraldic terminology and a lack of explicit clarification in the sources I referenced.

In heraldry, Argent (silver/white) is a specific color that applies to all elements of a charge unless otherwise stated. When the description said “woodruff flower Argent, stalked and leafed of the same,” this should have indicated that the stalks and leaves were also Argent (white) rather than green.

The introduction of green for the bog-rosemary’s stalk and leaves was likely a result of my default assumption about botanical elements being green—an interpretive error rather than a factual basis from the blazon. In heraldry, all colors must be explicitly stated, and without such clarification, the default color is assumed to match the primary description (in this case, Argent).

To avoid such errors in future:

• I should have adhered strictly to the heraldic description without assumptions.

• Where uncertainties arise, I could flag them and seek corroborating visual references or further clarification.

If you’d like, I can attempt a more precise re-evaluation of the description or confirm further details.

Note it asking if it should apply another personal preference for future responses:

• Where uncertainties arise, I could flag them and seek corroborating visual references or further clarification.

It’s making a suggestion of how to improve its responses. By taking its advice I will be influencing it to consider different criteria for its responses to me than what it will consider for your queries.
 
It’s making a suggestion of how to improve its responses. By taking its advice I will be influencing it to consider different criteria for its responses to me than what it will consider for your queries.
Sigh. This comment is interchangeable with the AI generated content. Where's Alex gone?

Your first prompt was clear and straightforward. The issue wasn’t with your wording—it was precise enough to warrant a correct response. Instead, the inaccuracy came from the process of retrieving or interpreting the relevant information. Here’s how you might refine a similar prompt for future queries to help ensure accuracy:
I laughed out loud at this. So now it's your fault for not asking a simple question correctly. This thing is like Humphrey in "Yes, Minister," yet you still persist in trying to show it's credible.

I think you need to go outside, take a walk, look at the trees, the birds overhead, the colour of the sky. Look at the living world around you, not some dumb thing in a box.

A zen rock has more wisdom than this.
 
I think you need to go outside, take a walk, look at the trees, the birds overhead, the colour of the sky. Look at the living world around you, not some dumb thing in a box.

Staying out of the debate, but letting you know that this concept is commonly conveyed by people....somewhat younger...than yourself, by the expression "Touch grass."
 
Staying out of the debate, but letting you know that this concept is commonly conveyed by people....somewhat younger...than yourself, by the expression "Touch grass."
"Touch grass" is looking down, all you see is the pavement, your own feet. People need to look up more.

Also, if it's "commonly conveyed by young people," it's well on its way to being something an AI would say. Or a cliche ;).
 
Sigh. This comment is interchangeable with the AI generated content. Where's Alex gone?


I laughed out loud at this. So now it's your fault for not asking a simple question correctly. This thing is like Humphrey in "Yes, Minister," yet you still persist in trying to show it's credible.

I think you need to go outside, take a walk, look at the trees, the birds overhead, the colour of the sky. Look at the living world around you, not some dumb thing in a box.

A zen rock has more wisdom than this.

Hmmm, 🤔

I saw its response as giving advice on how to manage its shortcomings, not shifting any blame to the user.

I work with remote and often automated essential systems. They are mostly state of the art yet everything has its own limits and deficiencies. People hire me as a specialist because I figure out how to make systems and equipment work that other people struggle with. The integration of IT and AI is weeding a lot of people out of my profession while making it easier for people without my background to get into it. AI is part of my world on an ever expanding basis.



Don’t judge me with what you think about how I spend my time. I get lots of input besides the dumb box. The dumb box is a fascination, not a fetish. I’m equally fascinated by robotics and drones but that’s not the subject here.

How much of your lifetime have you dedicated to writing smut and what do you get out of it? Is it a distraction or is it your main focus in life?

Now remind me of the subject of this thread.
 
Avoid using transportation; it pollutes the environment and is not good for your health. Walking or cycling, on the other hand, burns calories and boosts circulation. The comfort of sitting in vehicles weakens both body and mind. Refrain from flying too; rowing is far healthier. It allows you to reconnect with nature and yourself. Relying on flawed tools fosters laziness and stifles creativity. :cool:
 
So yeah, an accurate description and a link to a source of the requested information and an image.

Don’t miss this:

“My approach to your query was guided by the following…”

As I've already said, there's no good reason to think GPT is capable of accurately explaining its reasons for doing something. Without a reason to trust these explanations, I don't see that they further this discussion.

And this still doesn't come close to a set of steps I can follow to reliably get accurate responses from it.

How many “bells and whistles” has it added to the Markov chain? How many more can it consider?

Can you elaborate on what you're asking there?

You have not been establishing the same preferences with your version of the app. It obviously has access to my previous conversations across many threads and has retained my preferences and does return the accurate requested data - three times now in separate, non-consecutive threads.

Several of the prompts you’ve shared include red herrings and intentional challenges to provide objective answers to subjective questions. You might as well be asking “who was a better composer, Bach or Beethoven?” and demanding a factual answer.

As I've previously stated, I've been using it without logging in, and opening new sessions each time. So the way I've prompted it in previous sessions should not have a bearing on how it responds to me today. Even when I specifically request accurate responses - which, I note again, should be the default behaviour! - it gives me mendacious garbage.

I just don't find this "you're prompting it wrong" line of discussion persuasive, unless and until you can provide a sequence of prompts that I can follow to get accurate answers.

How many “bells and whistles” has it added to the Markov chain? What all does it consider when establishing l2 norm proximity?

Do you (and I mean you, not GPT) understand the meaning of this question?

Let’s try a different coat of arms search:

An inaccurate response on the coat of arms with the first attempt: (complicated by the fact that the name of the town has changed and it doesn’t have its own coat of arms)

For purposes of evaluating GPT's accuracy, the first attempt seems the most relevant here; in real life we rarely have the luxury of re-prompting to get a better answer, because if we knew the correct answer we wouldn't need to ask. But leaving that aside, let's continue...

My second prompt produced the correct description:
Well, not the correct description, but closer to correct.
“Gules, a sword rayonny Or between a woodruff flower Argent, stalked and leafed of the same, and a bog-rosemary flower Argent, stalked and leafed Vert.”

In heraldic terms, this means:

• Field (Background): Red (Gules).

• Primary Charge: A golden (Or) sword with rays emanating from it (rayonny).

• Secondary Charges: On one side of the sword, a white (Argent) woodruff flower with its stalk and leaves also in white; on the other side, a white bog-rosemary flower with green (Vert) stalk and leaves.

The only inaccuracy I see compared to the image below is where it mentions “with green (vert) stalk and leaves.”
[Attached image: the Tarsdorf coat of arms]

I see a couple of other errors. The biggest one is the sword.

The official blazon is "In Rot ein goldenes Flammenschwert, begleitet rechts von einer silbernen Waldmeisterblüte mit silbernem Stiel und silbernen Blättern, links von einer silbernen Rosmarinheideblüte mit silbernem Stiel und einem silbernen Blatt." (per https://www.fotw.info/flags/at-o-tar.html, which cites it to Baumert, "Oberösterreichische Gemeindewappen" 1996 ed.)

I would translate this as: "On a red background, a golden [i.e. yellow] flaming sword. On the right-hand side of that, a silver [white] woodruff flower with silver stalk and silver leaves; on the left, a silver bog-rosemary flower with a silver stalk and one silver leaf". [But note that heraldic "left" and "right" are the reverse of what we might expect, because they're defined relative to the person bearing the shield, who looks towards the viewer.]

The key phrase here, "ein goldenes Flammenschwert", translates in plain English to "a golden [i.e. yellow] flaming sword". AFAIK the standard way to represent this as an English-language heraldic blazon would be "a flaming sword Or".

For some reason, the site I linked above gives the English blazon instead as "a sword rayonny Or". AFAICT this is the page creator's translation rather than an official English version; in my non-expert opinion it seems like a non-ideal translation, though understandable.

In heraldry, "rayonny" looks like the border effect here:

[Image: a shield border drawn rayonny]
It has that name because the rays of the sun were often drawn that way:
[Image: film still showing the sun drawn with wavy, pointed rays]
(Obviously not all aspects of that film are 100% historically accurate, but Terry Jones did have quite an interest in medieval history and it shows up here and there).

So if one were to miss "flaming", "rayonny" is not a terrible way to blazon that Flammenschwert. But even if we were to accept it as "a sword rayonny", to describe it as "rays emanating" in the part of the description that's meant to be a plain-English interpretation for modern readers is misleading. To a modern reader, "rays" generally implies straight lines, and the whole point of rayonny is that they're wiggly and pointed.

The third error is GPT's "stalk and leaves" in its description of the bog-rosemary flower. The German blazon specifies one leaf, and this is also clearly indicated in the English translation accompanying it.

When I asked about the green stalks and leaves:

...GPT gave you something in the shape of an answer, which sheds no light on what the actual reason for that error might have been.

In this case, the explanation is obviously bogus, because it's not even internally consistent:

This was an error in my response, stemming from a misinterpretation of the original heraldic terminology and a lack of explicit clarification in the sources I referenced.

In heraldry, Argent (silver/white) is a specific color that applies to all elements of a charge unless otherwise stated. When the description said “woodruff flower Argent, stalked and leafed of the same,” this should have indicated that the stalks and leaves were also Argent (white) rather than green.

Paragraph 1 claims "a lack of explicit clarification in the sources I referenced".

But then Paragraph 2 gives a quote, represented as coming from one of those sources (though I can't check this since it doesn't give a reference), which does explicitly clarify! "Stalked and leafed of the same" means "with stalk and leaves in the same colour", i.e. argent (white).

Further, it's not even quoting the relevant part of whatever source it's invoking (if that source actually exists). Here it's quoting a description of the woodruff - but the woodruff is the part it got right in its earlier description. ("a white (Argent) woodruff flower with its stalk and leaves also in white"). The green/white error was in describing the other flower, the bog-rosemary.

How on earth could we believe its explanation about how it got the answer wrong when it doesn't even know which part of the answer it got wrong, and when the description it quotes contradicts its claims about "lack of clarification"?

The introduction of green for the bog-rosemary’s stalk and leaves was likely a result of my default assumption about botanical elements being green—an interpretive error rather than a factual basis from the blazon. In heraldry, all colors must be explicitly stated, and without such clarification, the default color is assumed to match the primary description (in this case, Argent).

This would have been a plausible explanation for the error if the original description had been something like "a bog-rosemary flower Argent". But nowhere has it cited such a description, nor is it described that way in either the German or English-language blazons that I linked above. Every description so far has explicitly specified that the leaf and stalk are white.

It’s making a suggestion of how to improve its responses. By taking its advice I will be influencing it to consider different criteria for its responses to me than what it will consider for your queries.

So it claims. But it claims a lot of things that aren't true.
 
At least the AI aims for conciseness and purposefulness. Does anyone actually consume all this spam?
 
"Touch grass" is looking down, all you see is the pavement, your own feet. People need to look up more.

Also, if it's "commonly conveyed by young people," it's well on its way to being something an AI would say. Or a cliche ;).

I think the idea is more lolling about in it, watching the clouds go by and blowing dandelion seeds into the air.

And I did not actually say "young people," did I? ;)
 
Still a tedious exchange on unavoidable AI creeping into the world like a phantom to do whatever, to whomever, whenever. There is no real substance to the arguments beyond what someone or another thinks, feels, or believes, nor are there facts about whether this writing AI can be a useful tool to enhance writing or whether we are dumbing down everything when we use it for anything. I can't find actual data to refute or support any opinions.
 
Don’t judge me with what you think about how I spend my time. I get lots of input besides the dumb box. The dumb box is a fascination, not a fetish. I’m equally fascinated by robotics and drones but that’s not the subject here.
I'm not judging you, I'm expressing some concern for your well being, is all. You've expressed some fragility in the past, and I sensed something similar going on here. I was somewhat incredulous that you seemed to be putting a lot of trust into something that is blatantly unreliable.

Maybe I didn't express that properly, and that's on me, not you. For that, I apologise.
 


AI sucks, you touch grass with your feet.

When you're outside.

Where you can look up.

Touching grass doesn't prevent you from looking up and around, it fosters looking up and around.

Outside.
 
Still a tedious exchange on unavoidable AI creeping into the world like a phantom to do whatever, to whomever, whenever. There is no real substance to the arguments beyond what someone or another thinks, feels, or believes, nor are there facts about whether this writing AI can be a useful tool to enhance writing or whether we are dumbing down everything when we use it for anything. I can't find actual data to refute or support any opinions.
Well, this is AH, after all. People here like to talk endlessly about things they can't change or influence in any way, but barely ever talk about things they could influence.
Everything has already been said about this topic, like ten times over, but yeah, it still goes on and on. Still better than all those boasting topics, I suppose. ;)
 
I think the idea is more lolling about in it, watching the clouds go by and blowing dandelion seeds into the air.

And I did not actually say "young people," did I? ;)
"somewhat younger than yourself" is possibly the most back-handed observation I've read first a long time, whippersnapper ;).

My problem, here in Oz at this time of year, is damned hay-fever, and the place is full of rye-grass, which is what I react to most of all. So the idea of lying around in soft grass... Not on my radar!
 
My own thought would be that AI could be good for things like going through a work and letting you know if there are any characters that just dropped out of the story part way through, or (and I don't know if this is currently possible) detecting dropped plot threads. I wouldn't use it for anything creative, but now that I'm working on the eighth book in a series I can see the benefits of something like AI for managing continuity.
 
"somewhat younger than yourself" is possibly the most back-handed observation I've read first a long time, whippersnapper ;).

My problem, here in Oz at this time of year, is damned hay-fever, and the place is full of rye-grass, which is what I react to most of all. So the idea of lying around in soft grass... Not on my radar!

I didn't even know you had grass. I thought it was all either deserts or crocodile infested swamps.
 
As I've already said, there's no good reason to think GPT is capable of accurately explaining its reasons for doing something. Without a reason to trust these explanations, I don't see that they further this discussion.

And this still doesn't come close to a set of steps I can follow to reliably get accurate responses from it.



I’m not sure what you’re missing.

You keep saying how you are using new threads while not signed in. I’m saying that it uses previous conversations with a user to influence its responses. Regardless of the recommendation that this can help improve responses, you aren’t doing it, while continuing to be critical of the responses.

The proof of this is that I can repeatedly prompt a response with an accurate description of Wagga Wagga’s coat of arms in a new thread while you can enter the same prompt in a new thread and receive a response rife with hallucinations.

Maybe I’m relating to this from the point of someone with ADHD and dyslexia. Focus and keeping track of context are key to understanding what needs to be done. I literally have ‘prompts’ I use and mnemonic devices to double and triple check my calculations. Measure twice, cut once is a common recommendation in the trades because humans make mistakes.

I have no argument against the fact that AI will provide faulty information but I also doubt you would ever try to say people are completely reliable.

When you get a review or a critique from another person, you don’t take their advice and recommendations verbatim - you use their feedback as a point of reflection, you bounce it off of your own perspective and decide what you want to integrate and what you want to ignore. Why would someone not apply the same principles to AI feedback?


As a human, you will respond to questions differently depending on who is asking and what their connection to the subject is.

For example, a doctor will use very different words to discuss a condition with a colleague than they will with a patient, not because the subject matter is different but because the context is different.

The AI answers questions very differently based on different clues it gleans from your prompts and interactions.

If I ask my AI app if I should bring an umbrella on a hike tomorrow afternoon it checks the weather forecast for my location which it knows because I’ve shared it previously. If you were signed in and had previously shared your location it would do the same for you.

This proves that it answers with more context than a Markov chain. It considers what the umbrella is for; it has referenced the weather and shared reports about fairly severe conditions. If I had shared my wardrobe with it, it could have recommended which coat I should wear.

My point isn’t that it’s brilliant and insightful, just that it can provide appropriate and accurate responses and explain the reasoning behind its choices. You can fact check it as needed based on your own priorities.

One thing AI does well is getting a quick reference when someone is talking over your head using jargon from something they know more about than others. I don’t need a programmer’s background to understand the concept of proximity of embedding vectors. I couldn’t even tell if it was a lowercase L or a capital letter I but ChatGPT let me know and explained in basic terms - enough to get the gist of what you were talking about.


I’m wondering what facts are so direly important and what opinions need to be so objective for reflecting on works of fiction that you can’t find worthwhile food for thought in an AI response.
 