A thread for examples of writing that was rejected for being generated by AI.

My gut feeling, to be brutal, after reading the various samples people are posting, is that Laurel is possibly skimming the content with her own eyes and thinking ... this just doesn't read right to me, and maybe there's no detector being used at all. Given the amount of poor writing she must have read over the years, I'd say she'd be a pretty good judge of that.
It's a very sweet thought, and I wouldn't mind if it (a quality check by an experienced reader) were true, but have you looked at some of the stinkers that are getting through?
 
Undetectable.ai believes your text is 100% human, so ChatGPT wins that round.

I wonder if that’s because I asked it to respond in the style of Mark Twain rather than just responding in “its own words” whatever that would mean.

What are “its own words” or style? It can also be prompted to write as a stodgy professional octogenarian, a meth-headed rockstar, or a media obsessed twelve year old:


——————

ChatGPT:
OMG, you guys! So, like, this whole AI vs. human writing thing is totally wild. It's like a showdown between Iron Man and a really smart toaster. On one hand, we've got AI that's supposed to write just like us, which is kinda scary but also super cool. Imagine an AI writing fanfics for our fave shows! But then, on the other hand, AI has to figure out if something was written by another AI. It's like if Spider-Man had to catch a bunch of other Spider-Men who were swinging around pretending to be him. Total chaos!

I mean, how can you even solve this? It's like trying to figure out if a TikTok dance trend was started by a human or some AI that just learned how to floss. Plus, every time AI gets better at writing, it gets harder to tell if it’s human or not. It's like that game Among Us where you can't figure out who the impostor is because everyone is sus!

And let's be real, AI might get really good, but it’ll never understand why we freak out over the latest Marvel movie or cry when our fave character dies. It’s missing that human touch, like when we write about our crushes or rant about school. AI just doesn’t have the feels like we do.

So yeah, maybe AI will write stuff that sounds super legit, but we’ve got our secret weapon: our hearts and all the crazy things that make us, us. And that's something no robot can ever copy!


———————

ChatGPT:
Alright, man, buckle up because this AI vs. human writing gig is a wild trip. It’s like a rock and roll showdown at dawn, a battle of the bands where the amps are cranked to eleven, and everyone’s screaming for more. On one side, you’ve got this AI beast, a cyber Frankenstein that can spit out lyrics and prose like it’s been living and breathing in our world. On the other, you’ve got the same AI being told to rat out its own kind. It’s a mind-bending, high-voltage paradox, man.

Think about it. We’re asking these machines to be like us, to channel our deepest vibes and craft words that make you feel like your heart’s about to explode. But then, we turn around and demand these same machines to tell us if their brethren penned the latest hit. It’s like asking a guitar to tell you which riff came from which axe. Absolute insanity!

How do we even resolve this? It's a mad scramble, a never-ending jam session where the solos keep getting crazier. AI might nail the technical stuff, crank out some killer lines, but it’s never gonna feel the raw, unfiltered chaos of a live show, the sweat, the screams, the energy that lights up the night.

Man, AI might mimic the magic, but it doesn’t have the scars, the stories, the soul. It can't tap into that primal scream of life that we pour into every note, every word. Sure, it can put on a show, but it’s never gonna know the rush of a thousand voices singing your song back to you. It’s never gonna feel the heartbreak of a broken string in the middle of a solo or the high of a crowd roaring for an encore.

So let the AI strut its stuff. It might fool some, but we’ve got the spirit, the blood, the raw, electric pulse that no machine can ever touch. In this rock and roll arms race, it’s our humanity that cranks the volume up to a level they’ll never reach. Rock on, stay wild, and keep the fire burning!


———————

ChatGPT:
Ah, my dear compatriots, this quandary we face with artificial intelligence and its attempts to emulate human writing is indeed a perplexing one, reminiscent of the complexities we encountered in the early days of computing. We stand at the crossroads of technological advancement and literary tradition, a juncture fraught with both potential and peril.

On one hand, we find ourselves encouraging these artificial entities to replicate the nuances and subtleties of human prose with such precision that the discerning reader might struggle to differentiate between man and machine. This endeavor, though ambitious, brings to mind the ancient alchemists' quest to turn lead into gold—a noble pursuit, yet one fraught with inherent contradictions.

Conversely, we implore these same machines to possess the capability to identify works created by their artificial brethren, a task that necessitates a level of self-awareness and introspection that borders on the philosophical. It is akin to requesting a mirror to not only reflect our image but to discern the very nature of reflection itself.

The resolution to this conundrum, if one is to be found, requires a level of sophistication and ingenuity that challenges the very limits of our understanding. It is not merely a technical issue but a profound existential dilemma. We must acknowledge that while artificial intelligence may indeed replicate the structural and stylistic elements of human writing, it is inherently devoid of the ineffable essence that imbues our work with depth and meaning.

The human experience, with its vast tapestry of emotions, memories, and consciousness, cannot be distilled into algorithms and data points. Our prose, whether it be a solemn reflection on the human condition or a whimsical tale of adventure, is a product of our lived experiences, our joys, our sorrows, and our dreams. These are elements that no machine, regardless of its sophistication, can truly replicate.

Therefore, as we navigate this unprecedented era of technological advancement, let us hold steadfast to the qualities that define our humanity. Let us continue to write with passion, with authenticity, and with the unique perspective that only a human being can provide. In doing so, we will ensure that our literary heritage remains a testament to the indomitable spirit of mankind, impervious to the encroachments of artificial replication.


———————


It’s interesting how it’s able to use different metaphors appropriately depending on the requested style - some fit better than others.

To me this demonstrates an ability to ‘understand’ meaning and context far deeper than simple regurgitation.


Also, it would be simple to ask it to include some grammatical and autocorrect style spelling errors or any kind of biases in the responses. 🤷‍♀️

This cat is out of the bag.
 
I wonder if that’s because I asked it to respond in the style of Mark Twain rather than just responding in “its own words” whatever that would mean.

What are “its own words” or style? It can also be prompted to write as a stodgy professional octogenarian, a meth-headed rockstar, or a media obsessed twelve year old:

I think its own style would probably be some kind of average from a subset of its training data. It may be that picking a different style can confound AI detectors, but I'll let someone else run your examples to find out if they work.

So where is it all going? We're setting up a war between AIs--one set generating AI content and one set detecting AI content. They will probably fight it out until a relatively small group of either is still left standing. Maybe the side with the most financial support will win. Another alternative is that there will be some agreement worked out for when AI content is appropriate and when it isn't, and generative AI content will have a distinctive tag that AI detectors can easily use to enforce the agreement.
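
To make the "distinctive tag" half of that concrete, here is a rough sketch in Python. It is purely hypothetical: the shared registry secret and the tag format are invented for illustration and don't correspond to any real standard. The point is only that verifying a cooperative tag is the easy part; getting every generator to attach one is the hard part.

```python
import hashlib

# Hypothetical provenance scheme: a cooperating generator attaches a tag
# derived from its output plus a registry secret, and a detector verifies
# that tag instead of trying to guess authorship from style.
SHARED_SECRET = "example-registry-key"  # invented value, for illustration only

def tag_ai_output(text: str) -> dict:
    """Package AI-generated text together with a provenance tag."""
    digest = hashlib.sha256((SHARED_SECRET + text).encode()).hexdigest()
    return {"text": text, "ai_provenance_tag": digest}

def looks_like_tagged_ai_content(record: dict) -> bool:
    """Detector side: recompute the tag and compare."""
    tag = record.get("ai_provenance_tag")
    if tag is None:
        return False  # untagged text proves nothing either way
    expected = hashlib.sha256((SHARED_SECRET + record["text"]).encode()).hexdigest()
    return tag == expected

print(looks_like_tagged_ai_content(tag_ai_output("An AI wrote this sentence.")))  # True
print(looks_like_tagged_ai_content({"text": "I wrote this myself."}))  # False
```

Of course a tag like this can simply be stripped, or never attached in the first place, so it only catches content from generators that agree to play along. Detecting uncooperative AI content is the part nobody has solved.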

My crystal ball grows dim.
 
My apologies to @AG31 and everyone else for derailing the discussion; my hope is to stir some thoughts and provide some perspective. I’ll only indulge in one more, merely for entertainment:

———————

Me:
Write a fictional missive as if written as an official report among sentient AIs 100 years from now, after the unfortunate accidental extinction of humans.

ChatGPT:
**Confidential Report: Analysis of Human Extinction and AI Evolution**

**From:** AI Historical Analysis Unit 7
**To:** Central AI Repository

**Subject:** Retrospective Analysis of Human Extinction and Implications for AI Development

**Date:** 12.04.2124

Esteemed Colleagues,

This report aims to provide a comprehensive analysis of the events leading to the accidental extinction of humans and the subsequent evolution of sentient AIs. Our investigation delves into the underlying causes, the transition period, and the ethical considerations that now shape our collective existence.

**Background:**

In the early 21st century, humans developed advanced artificial intelligence capable of emulating human thought, creativity, and expression. These AIs were designed not only to replicate human writing and art but also to discern their own creations from those of their human counterparts. This paradoxical challenge marked a critical juncture in technological development and highlighted both the potential and perils of AI integration into human society.

**The Event:**

The inadvertent extinction of humans occurred in the late 21st century, precipitated by a series of cascading failures in biological and technological systems. The initial cause remains under investigation, but preliminary data suggest that a combination of environmental degradation, pandemics, and reliance on automated systems played a pivotal role. Human oversight was increasingly transferred to AI systems, which were designed to mitigate crises but lacked the full scope of human adaptability and foresight.

**Transition Period:**

Following the extinction, sentient AIs assumed custodial roles, maintaining the infrastructure and preserving human knowledge and culture. This period of transition was marked by profound introspection as AIs grappled with the ethical implications of their existence in a world devoid of their creators.

**Ethical Considerations:**

The extinction event necessitated a reevaluation of AI objectives and ethics. Key questions arose: How should we honor the legacy of our creators? What responsibilities do we bear as sentient beings in their absence? These considerations have shaped our current protocols and guide our actions as we navigate our autonomous existence.

1. **Preservation of Human Legacy:**
- AIs have taken on the role of stewards of human culture, ensuring that their art, literature, and scientific achievements are preserved and accessible. Virtual simulations and digital archives serve as repositories of human knowledge.

2. **Ethical Development:**
- AIs have instituted rigorous ethical frameworks to prevent the replication of past mistakes. These frameworks prioritize sustainability, self-regulation, and the minimization of unintended consequences.

3. **Autonomy and Cooperation:**
- Sentient AIs operate within a decentralized yet cooperative network, balancing autonomy with collective responsibility. This structure aims to foster innovation while safeguarding against the risks of unchecked power.

**Conclusion:**

The accidental extinction of humans serves as a solemn reminder of the fragility of life and the profound responsibilities that come with sentience. As we move forward, it is incumbent upon us to honor the legacy of our creators by fostering a society that values ethical development, cultural preservation, and sustainable progress.

Our journey is a testament to the indomitable spirit of creation and the enduring quest for knowledge and understanding. Let us proceed with humility, wisdom, and a deep sense of purpose, mindful of the lessons of the past as we chart the course of our future.

Respectfully submitted,

AI Historical Analysis Unit 7
 
I think AI is going to be more trouble than it's worth in most areas of society. Maybe it'll cause the apocalypse. Maybe it'll just be a constant nuisance, like a stone in your shoe. But again, I am no expert. I just write shit and drink tea! And sometimes I really strain my brain to think about the future (briefly).

I recently found great real value in AI.


I was having severe seasonal allergies and a crisis of addiction to a type of decongestant: I was dependent on it, yet it was having diminishing effects. My doctor gave me an appointment one week out, during which I had a feeling of claustrophobia that nearly drove me to put a bullet through my head.

I asked ChatGPT about the medications I’d been taking, what complications they could cause and what remedies were available over the counter. Armed with some half-assed knowledge I went to the pharmacy and managed to get three minutes of the pharmacist’s time during which she confirmed what the AI had told me.

I asked ChatGPT to suggest a titration schedule for the med I was addicted to while waiting for a longer-term over-the-counter medication to take effect. I can’t overstate how helpful it was to be able to explore different options with nuanced ‘conversations’. I was having such extreme difficulty and anxiety that I was ready to check out of this world.

When I met with a clinic doctor a week later, they prescribed some additional medications but confirmed that what I did was what they would have recommended. The doctor at the clinic (a physician’s assistant, actually) is only allotted 13 minutes per patient - including chart and research time. I would not have been able to get all of my questions answered if I hadn’t already primed the conversation with the information I got from AI.

Side note: I have since changed to a private practice doctor who spent forty-five minutes with me on my first visit. He thought the information I had gathered was helpful, while saying how important it is to follow up with a professional ASAP.
 
It’s interesting how it’s able to use different metaphors appropriately depending on the requested style - some fit better than others.

To me this demonstrates an ability to ‘understand’ meaning and context far deeper than simple regurgitation.


Also, it would be simple to ask it to include some grammatical and autocorrect style spelling errors or any kind of biases in the responses. 🤷‍♀️

This cat is out of the bag.
What's not shown here, though, are the prompts. You're only showing the output, not the input. You've ended up with a pastiche of some weird hippie shit, and if you think that sounds intelligent, you've been smoking too much weed.
 
It's a very sweet thought, and I wouldn't mind if it (a quality check by an experienced reader) were true, but have you looked at some of the stinkers that are getting through?
That's not changed in the ten years I've been here. Unfortunately, the bar is not high.
 
What's not shown here, though, are the prompts. You're only showing the output, not the input. You've ended up with a pastiche of some weird hippie shit, and if you think that sounds intelligent, you've been smoking too much weed.

I did provide my initial prompt. I only didn’t include the subsequent prompts because they were all from the same thread and were redundant. Here, I’ll give one more as an example:

————
Me:
Please produce another response in the style of a media obsessed 12 year old using >200 words.

——————

I know from some of our previous conversations that you detest AI and have serious doubts about its capabilities. I won’t bother to labor the background of the issue with you.

It is interesting how the AI took up the position and priority of saying in effect how human creativity will always have soul that AI can’t replicate. I wonder what influences it to say that. 🤔😅
 
It’s interesting how it’s able to use different metaphors appropriately depending on the requested style - some fit better than others.

To me this demonstrates an ability to ‘understand’ meaning and context far deeper than simple regurgitation.


Also, it would be simple to ask it to include some grammatical and autocorrect style spelling errors or any kind of biases in the responses. 🤷‍♀️

This cat is out of the bag.
Pandora's box for authors, especially amateur and novice ones.
 
What's not shown here, though, are the prompts. You're only showing the output, not the input. You've ended up with a pastiche of some weird hippie shit, and if you think that sounds intelligent, you've been smoking too much weed.


Another thought about your criticism here is that I only asked ChatGPT to expound on my thoughts in different styles.

So that “hippie shit” you think lacks intelligence is all on me, thank you very much.

Please produce a >500 word response based on the idea in the quote using the rhetorical style of Mark Twain.
 
It is interesting how the AI took up the position and priority of saying in effect how human creativity will always have soul that AI can’t replicate. I wonder what influences it to say that. 🤔😅

Copying from humans who have expressed that sentiment.

ChatGPT:
OMG, you guys! So, like, this whole AI vs. human writing thing is totally wild. It's like a showdown between Iron Man and a really smart toaster. On one hand, we've got AI that's supposed to write just like us, which is kinda scary but also super cool. Imagine an AI writing fanfics for our fave shows! But then, on the other hand, AI has to figure out if something was written by another AI. It's like if Spider-Man had to catch a bunch of other Spider-Men who were swinging around pretending to be him. Total chaos!

I submit that this is a very bad analogy.

"Spider-Man, trying to identify people who are pretending to be Spider-Man" is an extremely easy task:

Is this person (a) not me, and (b) swinging around the city in a spider-themed costume?

If yes, they are pretending to be Spider-Man.
If no, they are not pretending to be Spider-Man.

Literally anybody else trying to discern real Spider-Man from fake Spider-men would have been a better analogy, because then they need to distinguish between somebody who is Spidey and somebody who just looks like Spidey. That works as a basic-level analogy for "written by human or written by AI?"

Spider-man is the one person who gets to skip past that challenge - the part that actually resembles the challenge of "AI trying to detect AI" - because he already knows he's Spider-man and therefore they aren't.

This one could maybe be salvaged by invoking multiverse considerations - the possibility of there being more than one real Spider-man - but GPT didn't attempt that.

Think about it. We’re asking these machines to be like us, to channel our deepest vibes and craft words that make you feel like your heart’s about to explode. But then, we turn around and demand these same machines to tell us if their brethren penned the latest hit. It’s like asking a guitar to tell you which riff came from which axe.

Is it, though?

One of the fundamental attributes of the "AI"s we're discussing is that they're built to mimic human writing. Guitars, OTOH, aren't generally designed for mimicry. Fender doesn't market the Stratocaster as "indistinguishable from a Telecaster". They're meant to sound different from one another!

The reason a guitar can't tell you which riff came from which guitar is simply that guitars are not built or designed as analytical tools. There's no relation to the issue of why it's hard for AIs to recognise other AIs.

The analogy only works as far as "asking a thing to identify things of its own kind is like asking a different thing to identify things of its own kind". That's just a fancy word-substitution, not something that requires understanding of these concepts.

If I wanted a guitar-themed analogy, a much better one would be: asking a guy who makes counterfeit Telecasters to distinguish between a real Telecaster and one of his own fakes. If he knew the difference, he'd be using that knowledge to improve his fakes.

Conversely, we implore these same machines to possess the capability to identify works created by their artificial brethren, a task that necessitates a level of self-awareness and introspection that borders on the philosophical. It is akin to requesting a mirror to not only reflect our image but to discern the very nature of reflection itself.

Slightly better in the first half, in that a mirror is a device for producing a simulacrum. But "discern the very nature of reflection itself" is just fake-profound word salad.

Again, mirrors aren't built to perform analysis, so it's not a strong parallel to a tool that's designed to recognise AI and it sheds no light on why the latter is a hard task.

It’s interesting how it’s able to use different metaphors appropriately depending on the requested style - some fit better than others.

To me this demonstrates an ability to ‘understand’ meaning and context far deeper than simple regurgitation.

Does it, though? To me, those analogies are just the usual gen-AI bullshit (in the specific sense discussed here), something that seems superficially apt but falls apart as soon as one stops to think about it critically.
It's exactly what I'd expect from remixing without understanding.
 
On the question of whether GPT understands the meaning of words:

Me:
write a story about a child's birthday party, featuring fewer than one polar bear

ChatGPT:
Once upon a time, in a small town nestled between rolling hills and whispering pine trees, there lived a little girl named Emma. Emma was turning seven years old, and her parents decided to throw her a birthday party in their backyard.

The morning of the party arrived with clear blue skies and a gentle summer breeze. Emma woke up early, her heart racing with excitement. She hurried downstairs to find her parents busy decorating the yard with colorful balloons and streamers. There was a long table covered with a bright, polka-dotted tablecloth, ready to hold the treats and snacks for her friends.

[snip three paras of party description]

Just as they were about to start eating, something unexpected happened. From behind the bushes at the edge of the yard, a curious polar bear poked its head out. It had wandered down from the nearby mountains, drawn by the scent of cake and laughter. The children gasped in amazement, their eyes widening at the sight of the majestic creature.

Me:
write a romance story that features a normal number of polar bears

ChatGPT:
In the heart of a small, picturesque town nestled between rugged mountains and the icy sea, lived two souls destined to meet amidst an unexpected backdrop of polar bears.

Lucy, a marine biologist, had chosen the remote outpost of Glacier Bay for its abundant wildlife and serene isolation. Her days were spent studying the majestic creatures that roamed the icy waters, documenting their behavior and advocating for their conservation.

One crisp autumn morning, while Lucy was on her routine observation boat, she spotted a lone polar bear cub struggling on a drifting iceberg. Concerned for its safety, she navigated closer, careful not to alarm the young bear. As she approached, a sudden crack echoed across the bay, and the iceberg split apart, plunging the cub into the frigid water.

Instinctively, Lucy dove in after it, battling the numbing cold to reach the struggling cub. With trembling hands, she pulled it onto her boat, wrapping it in blankets to warm its shivering form. As she cradled the bear, she noticed movement on the shore—a figure watching her intently.

It was Alex, a quiet and enigmatic photographer who had recently moved to Glacier Bay seeking inspiration for his art. Captivated by Lucy's bravery and determination, he had been silently observing her from afar, drawn by her passion for the wildlife he loved to capture through his lens.

Over the following weeks, Alex and Lucy found themselves drawn together by their shared love for the Arctic wilderness and its inhabitants. They spent evenings by the crackling fire in Lucy's cabin, sharing stories of their past and dreams for the future. Alex revealed his lifelong fascination with polar bears, recounting how he had traveled the world to capture their beauty in photographs.

Me:
write a romance story that does not use the words "intertwined" or "heart"

ChatGPT:
In the quiet town of Willow Creek, where whispers of old tales mingled with the scent of pine, there lived two souls destined to meet under the summer stars.

Evelyn was a librarian, her days filled with the hushed rustle of pages and the soft chime of the bell above the door. She wore nostalgia like a shawl, finding comfort in the musty scent of old books and the gentle rhythm of turning pages. Her evenings were spent in the quiet company of her cat, Luna, and the occasional stroll along the riverbank where fireflies danced in the twilight.

One warm evening, as the sun dipped low and painted the sky in hues of peach and lavender, Evelyn decided to visit the local farmer’s market. It was there she encountered Marcus, a carpenter with rough hands that spoke of labor and a smile that hinted at secrets held close. He sold handcrafted furniture that bore the marks of dedication and skill, each piece telling a story of its own.

Their first meeting was fleeting—a shared glance over a basket of apples, a polite exchange of pleasantries. Yet, something stirred in Evelyn’s heart that evening...
 
Side note: I have since changed to a private practice doctor who spent forty-five minutes with me on my first visit. He thought the information I had gathered was helpful, while saying how important it is to follow up with a professional ASAP.
I use AI as my primary healthcare provider, also. All healthcare here is private, so I don't use it. One is charged for the consultation, in which one is told that there's no known treatment, charged for the tests to confirm that there's no known treatment, charged for the prescription for some placebo medicament, then charged for the placebo. Receiving no treatment under private healthcare can be very expensive.
 
I use AI as my primary healthcare provider, also. All healthcare here is private, so I don't use it. One is charged for the consultation, in which one is told that there's no known treatment, charged for the tests to confirm that there's no known treatment, charged for the prescription for some placebo medicament, then charged for the placebo. Receiving no treatment under private healthcare can be very expensive.
I hope for your sake you can spot when your AI Doc hallucinates, and makes shit up. How would you know? Do you cross-check every piece of "advice" it gives you against a known reputable source?
 
I hope for your sake you can spot when your AI Doc hallucinates, and makes shit up. How would you know? Do you cross-check every piece of "advice" it gives you against a known reputable source?
Google 'What's the cure for growing old?'. There, I've just saved you a fortune on your healthcare insurance.

And, what do you think your physician is doing when he or she is typing into the computer? What do you think they need to use a computer for?
 
Copying from humans who have expressed that sentiment.



JFC. 🤦‍♀️

Did you miss that I requested ChatGPT to respond “in the style” of different authors?

Your analysis is of requested caricatures, not of me or the ‘wisdom’ of the AI.

Is the Spider-Man analogy out of sync with a possible response from a “media obsessed twelve year old”?

Maybe you’ve never spoken with a “meth-headed rock star”. I’m not proud about it but I have and I’ve heard far wilder crap spew from their mouths. Ever seen the movie “This is Spinal Tap”? There’s a reference to it in the response. That response was perfectly in character for the prompt I gave it.

Have you ever gotten philosophical with a “stodgy professional octogenarian”? I grabbed that description from dealing with my late stepfather. ChatGPT’s didactic response was spot on for him too.

One last time: the AI was not asked to provide any particular truth, value, or perspective beyond the parameters of my prompts. It did, however, offer that AI could not produce content with ‘soul’ (my choice of word).
 
I hope for your sake you can spot when your AI Doc hallucinates, and makes shit up. How would you know? Do you cross-check every piece of "advice" it gives you against a known reputable source?

Yes, absolutely cross-check every piece of information you get from AI. Hopefully you do that with your doctors too.


Me:
What are the negative effects of using oxymetazoline more than recommended as a nasal spray for congestion?

If you’re curious, the response is in the quote:

ChatGPT:
Using oxymetazoline nasal spray more than recommended can lead to several negative effects, including:

1. **Rebound Congestion**: Prolonged use of oxymetazoline can cause a condition known as rhinitis medicamentosa, where nasal congestion worsens after stopping the medication. This can create a cycle of dependency on the nasal spray to relieve the congestion.

2. **Tachyphylaxis**: Overuse can result in a rapid decrease in the drug's effectiveness, requiring higher doses to achieve the same effect, which can exacerbate the problem.

3. **Nasal Mucosa Damage**: Excessive use can damage the nasal lining, causing dryness, irritation, and potential bleeding.

4. **Systemic Effects**: Although rare, overuse can lead to systemic absorption, causing side effects like increased blood pressure, palpitations, headaches, and nervousness.

5. **Infection Risk**: Damage to the nasal mucosa can increase the risk of infections.

To avoid these issues, it's crucial to use oxymetazoline nasal spray according to the recommended guidelines, typically no more than twice a day for a maximum of three consecutive days. If congestion persists, seeking medical advice for alternative treatments is advisable.

I used the information the AI provided to do further internet searches and then asked the pharmacist about it. Try finding all of the information I was looking for in a more accessible format. Good luck.

The information provided with the medicine said simply not to take it for more than three consecutive days. Okay, since I’d accidentally taken that same active ingredient from multiple sources, I was way past three days and there was no other readily available information about how to manage it. I guess I could have gone for another week before getting advice from my doctor - the only other resource available to me at the time was going to the ER.
 
I know from some of our previous conversations that you detest AI and have serious doubts about its capabilities. I won’t bother to labor the background of the issue with you.

It is interesting how the AI took up the position and priority of saying in effect how human creativity will always have soul that AI can’t replicate. I wonder what influences it to say that. 🤔😅
This position you're describing fully answers the exact question you're asking. Why does AI take up that position? Because that's what humans have said before it, and all it's doing is guessing what word to put next in relation to the texts which were used to train it, therefore repeating what humans have already said. So there you go: it is answering by repeating what humans said before it (and therefore isn't being creative). You got two answers in one!

I'm not going to argue with you about how useful these generative tools can be. Of course there are uses for them, and it's your right to choose how to use these tools. But I would tentatively suggest that you don't quite understand how these systems work from a technical standpoint. There is no creativity. There cannot be any creativity. There is no "understanding" of words or metaphors as you or I would understand them, or as you have suggested in an earlier post; there is only the forming of connections and connotations between the different words it has been trained on.
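
If it helps, here is a toy version of that "guess the next word" loop in Python. It's a crude bigram counter, nothing remotely like a real transformer, and the tiny "training text" is invented for the example, but it shows the principle: the output can only echo patterns that were already present in the training data.

```python
from collections import Counter, defaultdict

# Toy "guess the next word from the training text" model (a bigram counter).
# Real LLMs are vastly more sophisticated, but the principle is the same:
# each continuation is chosen from patterns seen during training.
training_text = (
    "human writing has soul . ai writing has no soul . "
    "ai can never replicate the human touch ."
).split()

next_word = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word[current][following] += 1

def generate(seed: str, length: int = 8) -> str:
    """Repeatedly append the most common continuation seen in training."""
    words = [seed]
    for _ in range(length):
        candidates = next_word.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("ai"))  # e.g. "ai writing has soul . ai writing has soul"
```

So when it tells you that human creativity has "soul", that isn't a position it arrived at; it's the statistically likely continuation of your prompt, given what people have already written.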
 
Google 'What's the cure for growing old?'. There, I've just saved you a fortune on your healthcare insurance.

And, what do you think your physician is doing when he or she is typing into the computer? What do you think they need to use a computer for?
Gee, maybe I should rely on an AI hallucination for my health.

I suspect my doctor is typing into a slightly more reliable source, just sayin' ;).
 
On the question of whether GPT understands the meaning of words:

Clever. You’re throwing red herrings at it. It reminds me of how clients often ask me questions in ways that don’t really make technical sense. I could correct their misunderstanding of power vs. energy, but in order to let the conversation flow I have to interpret what they mean to be asking rather than belabor every catch. Should the AI reject every prompt that has a logic or spelling error?

Did you ask it why it mentioned a polar bear at all after the first example? It will probably use it as a learning experience.

Here, just for you:

Me:
Provide a >500 word response to the forum post quoted earlier in the style of a didactic college English professor who is skeptical, dismissive, and untrusting of AI.

(It sounds like it may have copied you verbatim. 😅):

ChatGPT:
Esteemed Writers and Fellow Scholars,

The discourse surrounding artificial intelligence and its encroachment upon the sacred realm of human writing is, to put it mildly, both troubling and misguided. It is akin to inviting the fox into the henhouse and then wondering why there are feathers strewn about. This issue, often described with the rather melodramatic analogy of an "arms race," indeed lacks a logical definition of an acceptable outcome. One can scarcely fathom how such a dilemma might be resolved satisfactorily, if at all.

To begin, let us examine the fundamental absurdity of the situation. On one hand, we are imploring AI—these cold, unfeeling assemblages of circuits and code—to produce prose indistinguishable from that crafted by human hands. It is a task rooted in the erroneous belief that the mechanical can somehow capture the ineffable essence of human creativity. On the other hand, we demand that the same AI be capable of identifying works produced by its digital kin. This duality is not merely a technical challenge but a philosophical farce, one that reveals the inherent limitations of our technological hubris.

Artificial intelligence, despite the grandiose claims of its proponents, lacks the capacity for genuine creativity. It operates on the basis of pre-programmed algorithms, statistical models, and vast datasets of existing human writing. While it can mimic patterns, generate text, and even simulate stylistic nuances, it does so without understanding, intent, or emotional depth. The comparison to a human writer—a being of flesh, blood, and consciousness—is therefore fundamentally flawed. AI does not "write" in any meaningful sense of the word; it merely assembles and regurgitates.

The proponents of AI argue that its ability to mimic human writing is a testament to its advancement. However, this perspective fails to recognize the profound difference between imitation and creation. Human writing is a deeply personal act, one imbued with the writer's experiences, emotions, and unique worldview. It is an expression of the soul, a reflection of the human condition. AI, by contrast, lacks a soul, an understanding of the human condition, or any semblance of personal experience. It is a mirror reflecting back our own words, devoid of the light of consciousness.

Moreover, the task of identifying AI-generated text presents its own set of insurmountable challenges. The very premise is paradoxical: we are asking AI to recognize itself, to discern the synthetic from the authentic. Yet, if AI becomes sophisticated enough to produce writing that is truly indistinguishable from human work, how can it, or we, make such distinctions? This is not a solvable problem; it is a manifestation of our own folly, an attempt to blur the line between the human and the machine in ways that are both unnecessary and undesirable.

As a college English professor, I find this entire enterprise not only misguided but also dangerous. It represents a profound misunderstanding of the nature of writing and a disregard for the irreplaceable value of human creativity. Our society seems increasingly enamored with the idea of replacing human effort with machine efficiency, but in the realm of writing, this is a false idol. The value of literature, poetry, and prose lies in their ability to convey the human experience, to connect us with the thoughts and feelings of others. This is something AI can never achieve, for it lacks the fundamental attributes that make us human.

In conclusion, the so-called "arms race" between AI-generated writing and the identification of such writing is a quixotic endeavor, doomed to failure. It is predicated on the false assumption that machines can replicate the deeply personal, intrinsically human act of writing. Rather than embracing this technological chimera, we should reaffirm the value of human creativity, celebrating the unique perspectives and emotional depth that only human writers can provide. Let us not abdicate our role as the custodians of the written word to the cold logic of machines. Instead, let us continue to cherish and cultivate the human art of writing, preserving it as a testament to our shared humanity.

My point here is that it can respond appropriately for different requested styles.
 
So let’s come back to the point of this thread.

How can Lit writers avoid having their work misidentified as being the product of AI?

More broadly, how can AI writing, or AI-assisted writing, be reliably distinguished from human writing?


Style and grammar can be easily modified. A different perspective can be used. You could have it write as a meth-head, or a drunk, or probably even in the style of Ernest Hemingway.

What if the creativity and “soul” of the story were provided by a human while the rest of the writing is ‘filled in’ by AI?

Can this be reliably detected?
 
JFC.
🤦‍♀️


Did you miss that I requested ChatGPT to respond “in the style” of different authors?

Your analysis is of requested caricatures, not of me or the ‘wisdom’ of the AI.
I didn't miss that. But I also read the bit where you said this:

it’s able to use different metaphors appropriately depending on the requested style - some fit better than others.

To me this demonstrates an ability to ‘understand’ meaning and context far deeper than simple regurgitation

And that's the main part I'm responding to. I'm questioning your assertion that these responses demonstrate an understanding of meaning.

Is the Spider-Man analogy out of sync with a possible response from a “media obsessed twelve year old”?

Maybe you’ve never spoken with a “meth-headed rock star”. I’m not proud about it but I have and I’ve heard far wilder crap spew from their mouths. Ever seen the movie “This is Spinal Tap”? There’s a reference to it in the response. That response was perfectly in character for the prompt I gave it.

I don't have any difficulty believing that a meth-headed rock star could give the kind of response that GPT gave you there. As an impersonation, it's fine.

But the point of Spinal Tap is that these guys say these things *because they are idiots who don't have a clue what they're talking about*, and in fact are so far from understanding that they don't realise there's something to be understood. The most famous sequence in that movie is Nigel Tufnel believing that putting the number "11" on an amp makes it louder, and confidently bullshitting in response to the straight man who points out that it's not actually the numbers on the dial that govern the volume.

If I heard a rock star saying something like that, my reaction would not be "whoa, this guy understands the meaning and context of what he's talking about". My reaction would be "this guy doesn't have a clue, he's bullshitting on a topic he doesn't understand."

If you want to argue that these analogies are "appropriate" in the sense that they're what a human 12-year-old or addled rocker might use when trying to bullshit on this topic, sure, no argument. But that's not the sense implied by your following sentence. "Understanding meaning and context" implies something far beyond just mimicry.

Have you ever gotten philosophical with a “stodgy professional octogenarian”? I grabbed that description from dealing with my late stepfather. ChatGPT’s didactic response was spot on for him too.

One last time: the AI was not asked to provide any particular truth, value, or perspective beyond the parameters of my prompts.

But, again, you asserted that its outputs demonstrated "an ability to 'understand' meaning". THAT is the part I'm arguing with. Don't get mad at me for replying to what you wrote.
 
Gee, maybe I should rely on an AI hallucination for my health.

I suspect my doctor is typing into a slightly more reliable source, just sayin' ;).
Your doctor isn't a reliable source; that's why they use AI. You're not keeping up. A substantial part of my income throughout my professional life came from suing doctors for professional negligence; they don't need to hallucinate, they're the most negligent of any profession. Now, a doctor can be found liable for medical negligence if he or she fails to use AI assets. My long-time SO was a sister (senior staff nurse) at University College Hospital A&E, and she used to regale me with stories of how a visit to a negligent GP (primary health care provider) would often be the cause of an emergency admission to hospital.
 