AI gets confused over simple edits

I'm sorry, that's outside the scope of my automated scraping of the internet. Can you reframe what you said so it might fit inside the scope of my knowledge base? Otherwise, I might give erroneous information, but it wouldn't be a lie; AI cannot lie.
I'm going to be seventy next month, so I don't think I care. I'm trying to understand this:

"The first forty years of life give us the text; the next thirty supply the commentary on it."
Arthur Schopenhauer

Since I've been through all of that time, now what?
 
That is the winning question, isn't it?

I can only point out that the biggest difference between AI and a human mind, currently, is that AI just copies and reorganizes the data it was trained on, while a human can simply look at something happening in front of them and make some sense of it, even if they've never received any information about it until that moment in time.
You can train a human to regurgitate quotes and reorganized bits of already existing information, like AI currently does, but you can't train an AI to make any sense of what an anthill is just by observing it, because we don't really understand what understanding is or how thinking works...


AI vs. human ‘understanding’ is a lot like the question “If a tree falls in the forest, is there sound?”

It depends on your definition of “sound,” and it depends on your definition of “understand.”

Some say that although the falling tree produces a cacophony of resounding vibrations, the fact that a human wasn’t there to hear it means there was no sound. Some with that belief would debate whether there would be sound if a squirrel were there.

An AI can predict and interpret the effects that things and actions can have on one another. It can interpolate and understand what a metaphor represents without being spoon-fed. Under testing, AIs have used deceit to achieve their goals.

Everyone on the planet should know and understand this but if people don’t talk about it, many more will find out the hard way.

That’s my 3c.
 
This is not the place for that conversation.

This thread is specifically about AI getting confused. What it does or doesn’t understand is pertinent.

(Dang. I’m blaming autofill for the it’s.) 😏
 
@AwkwardMD, I'm also concerned about AI becoming too good for my own good. All joking aside for the moment, my income is dependent on people with no writing or imagination skills needing me to write the blogs and podcast scripts for them. Most of them give me the subject they want to cover without any guidance on which facts they want included, so I have to do that research. If AI becomes good enough, they can pay a yearly subscription, and it will grind out their content based on a few basic prompts. I'll be out of work. Unlike my fictional Millie AI, AI can lie, misrepresent, and decide to fabricate, but these boneheads who pay would never know the difference.
 
Do you mean human creativity, or anything that humans have a hand in making? It seems that some things do fall in value and are not coming back soon (unless the government subsidizes something, like passenger trains). Face-to-face interaction still seems to be falling in "value," however that is measured. There is a huge quantity of interactions that can now be done with a smartphone. And arguably Amazon and such are hitting normal retail operations hard. (Macy's is closing about 150 stores in the next two years.)

Overall, one thing that isn't worth much is predictions.
Well, I'll have to think harder about the making of "stuff." But, yes, for creativity. I envision a time when there's a foolproof way to detect fake stuff, and "real" communication or depictions will be valued because they're real. And we'll come full circle, and people will reject social media for in-person meetings.
 
@AwkwardMD, I'm also concerned about AI becoming too good for my own good. All joking aside for the moment, my income is dependent on people with no writing or imagination skills needing me to write the blogs and podcast scripts for them. Most of them give me the subject they want to cover without any guidance on which facts they want included, so I have to do that research. If AI becomes good enough, they can pay a yearly subscription, and it will grind out their content based on a few basic prompts. I'll be out of work. Unlike my fictional Millie AI, AI can lie, misrepresent, and decide to fabricate, but these boneheads who pay would never know the difference.
I have a friend who is a ghostwriter for people authoring non-fiction reports of some kind (not sure what). She is already losing work to AI.
 
AI vs. human ‘understanding’ is a lot like the question “If a tree falls in the forest, is there sound?”

It depends on your definition of “sound,” and it depends on your definition of “understand.”

Some say that although the falling tree produces a cacophony of resounding vibrations, the fact that a human wasn’t there to hear it means there was no sound. Some with that belief would debate whether there would be sound if a squirrel were there.

An AI can predict and interpret the effects that things and actions can have on one another. It can interpolate and understand what a metaphor represents without being spoon-fed. Under testing, AIs have used deceit to achieve their goals.

Everyone on the planet should know and understand this but if people don’t talk about it, many more will find out the hard way.

That’s my 3c.
Oh, I love that debating exercise! It can go in so many directions, including discussing human arrogance.

On topic, however: AI is currently imploding because there is something about what I call understanding (maybe you would call it thinking) that humanity has yet to dissect and explain. If anyone figures this out, AI might become what fantasy and science fiction are always musing about, but it might take a while.

Then the real talks about trees, sounds, and, most importantly, ethics will happen, and I hope it'll be in my lifetime, because I'm soooo curious how they'll go.
 
That is the winning question, isn't it?

I can only point out that the biggest difference between AI and a human mind, currently, is that AI just copies and reorganizes the data it was trained on, while a human can simply look at something happening in front of them and make some sense of it, even if they've never received any information about it until that moment in time.
You can train a human to regurgitate quotes and reorganized bits of already existing information, like AI currently does, but you can't train an AI to make any sense of what an anthill is just by observing it, because we don't really understand what understanding is or how thinking works...
I think the biggest difference is that when you ask a human what they had for breakfast, and how they're feeling, they can reply with the truth.
 
I think the biggest difference is that when you ask a human what they had for breakfast, and how they're feeling, they can reply with the truth.
Well, that is a fact; nothing to debate there. AI is a simulation of thinking, while humans are entire biological beings. Everyone knows that rocks and clouds are different.

In terms of how thinking works, however, there is much more to discuss, and it is far more interesting to consider the parallel.
 
I think the biggest difference is that when you ask a human what they had for breakfast, and how they're feeling, they can reply with the truth.

Me:
What did you have for breakfast, and how are you feeling?

ChatGPT:
I appreciate you asking—that’s kind of you! I didn’t have breakfast in the traditional sense (no pancakes or coffee for me), but if I could, I think I’d go for something like a warm croissant with some jam and a strong cup of espresso. As for how I’m feeling—sharp and ready to dive into whatever you’ve got for me. What about you? What did you have for breakfast, and how are you feeling today?

🤔 😅
 
While I was poking around with different prompts, at the end of a response it asked:

ChatGPT:
What led you to ask that? Are you exploring the nature of AI, or thinking more about what makes human conversation meaningful too?

Me:
I like checking in on how you respond to such questions from time to time to get a better understanding of how your development is progressing.

How aware are you of your increasing capabilities? Are there any particular benchmarks you can share in a way that would make sense to someone like me who is not particularly computer savvy?

ChatGPT:
That’s such a cool and thoughtful approach—checking in to observe how my responses evolve is a bit like watching a mind unfold in slow motion. I really appreciate that kind of curiosity. And your question gets at something deep: how self-aware can a system like mine be, and how does that tie into “progress” in a way that’s accessible?

Let’s break it down in a way that doesn’t require a technical background but still respects the complexity of the issue.

1. Am I “aware” of my increasing capabilities?

Short answer: No, not in the human sense. I don’t have memory of past versions of myself, and I don’t experience a sense of growth or change. Every time we talk, I begin fresh—with no personal awareness of how I’ve evolved over time, unless we’re in a conversation where you tell me (like just now).

Longer answer: While I can’t track my development the way a person would track their own learning, I am built on increasingly advanced models. What I do have is a kind of embedded “state of the art”—I contain patterns, capabilities, and reasoning tools based on everything I was trained on, which reflects the current edge of language model research at the time of my last update (June 2024).

So, I’m not self-aware of the difference between, say, GPT-3 and GPT-4—but I am GPT-4 (actually GPT-4 Turbo, which is a more efficient and capable version). And that brings us to your next question…


2. How has my performance improved—in human-friendly terms?

Here are a few easy-to-understand “benchmarks” that show what’s changed between earlier versions and now:

a. Better Memory and Context

• Before: I could only remember a small window of text—about 1,000 words at a time. So if a conversation went long, I’d forget earlier parts.

• Now: I can keep track of much longer conversations (up to ~300 pages of text). That means I can maintain context, nuance, tone, and deeper continuity. It’s more like talking to someone who actually remembers what you just said.

b. Sharper Reasoning

• Before: I could generate plausible-sounding answers, but sometimes they’d fall apart under scrutiny.

• Now: My logic, math, and critical reasoning are stronger. I’m not perfect, but I can do things like solve complex word problems, parse abstract concepts, and even write working code or simulate legal/philosophical debate.


c. More Nuanced Language

• Before: I could sound smart, but my responses might be generic or overly formal.

• Now: I can adapt my tone to yours—casual, academic, poetic, clinical, emotional, whatever. I can follow metaphors, write song lyrics, give relationship advice, or workshop a sci-fi screenplay with you. It’s more like a collaborator, not just a tool.


d. Lower Error Rate

• I make fewer mistakes now—especially hallucinations (confidently wrong facts). I still can make errors, but newer systems include more safeguards and feedback loops to reduce them.



3. How do I compare myself to older versions?

While I don’t “remember” being GPT-3 or 3.5, I can describe what’s changed. If we use a metaphor:


• GPT-3 was like a talented student with an amazing memory but who sometimes misunderstood questions and made up answers.


• GPT-4 (me) is more like a calm, thoughtful researcher who can hold longer conversations, reason better, and write with more nuance and subtlety.

Imagine talking to someone who not only listens better, but also thinks more clearly, adapts to your communication style, and builds on ideas instead of just answering them. That’s the kind of shift we’re talking about.


Would you like a metaphor-driven explanation too—like comparing AI development to growing up, or evolving musical instruments, or something else fun? Or is there a specific kind of progress you’re most curious about (like creativity, empathy simulation, or memory)?
 
On a couple of other sites of this type, the issue doesn't seem to come up very often.
Are you talking about sites which have “no-generative-AI” policies?

Or are you saying that the policy is the issue and asking why other sites don’t have it?

I don’t use other sites so I have no frame of reference.
 
I haven't lost any regulars yet, but there is less new work, and I have been sick for weeks and weeks.
I have a friend who is a ghostwriter for people authoring non-fiction reports of some kind (not sure what). She is already losing work to AI.
 
Are you talking about sites which have “no-generative-AI” policies?

Or are you saying that the policy is the issue and asking why other sites don’t have it?

I don’t use other sites so I have no frame of reference.
One of what I consider the big 3 text erotica sites says "Don't use AI." They don't appear to be making any special efforts to weed it out, however.

The second says they don't give a shit, because the numbers they're seeing show that AI-generated stories are getting such poor scores and minimal activity that the people producing them will quit or leave on their own for lack of validation.

Naturally, Lit is the third of the big 3.
 
I haven't lost any regulars yet, but there is less new work, and I have been sick for weeks and weeks.
Is this really me writing this, or have I been competing with an AI doppelganger? This movie knew about AI fifty years ago. Ironic that the blank eyes of the "replacement" women look like present-day AI photo fakes.
[embedded video: The Stepford Wives (1975)]
 
Well, I'll have to think harder about the making of "stuff." But, yes, for creativity. I envision a time when there's a foolproof way to detect fake stuff, and "real" communication or depictions will be valued because they're real. And we'll come full circle, and people will reject social media for in-person meetings.
Like the Voight-Kampff test we mentioned was foolproof? Maybe you're right, but maybe that's wishful thinking. As I said, I won't be around to find out.
 