Thoughts on AI checkers

You’re on the wrong site then.

Nah. "Prefer" was meant to say that if a story is AI-driven, I'd like to know. Given the choice, I'd much rather read somebody's actual writing. But if somebody needs the tool, it just doesn't bother me. I'd also always prefer to use my own words, as I can be sure of what I'm saying--even if it's foolish :)
 
After all, the thoughts are ours.
But the words - you know, the things that make up the actual writing, the story, the voice, the sounds, the rhythm, the tone, the personalities of the characters, the foreshadowing - the words aren't. They're stolen from other authors and regurgitated without any sense of *why* they're there in the first place.
 
But people who are not great writers, or are not good at putting thoughts into words, or are not native English speakers, should be allowed to use AI tools--using AI to rephrase or rewrite their raw, messy writing into something publish-worthy. I see no harm in it. After all, the thoughts are ours.

Making exceptions for certain groups will not solve the problem.
 
But people who are not great writers, or are not good at putting thoughts into words, or are not native English speakers, should be allowed to use AI tools--using AI to rephrase or rewrite their raw, messy writing into something publish-worthy. I see no harm in it. After all, the thoughts are ours.
What right are these hypothetical people exercising here?

Should poor athletes be allowed to enter robotic competitors bearing their avatar in the Olympics? I see no harm in it.

Writing is a skill that can be acquired, at least at the basic level with some effort. I think the right you are appealing to is laziness.
 
Writing is a skill that can be acquired, at least at the basic level with some effort. I think the right you are appealing to is laziness.

100% agree. Inspiration is what makes a story sing; with enough practice, the craft of wordsmithing can be mastered.
 
But the words - you know, the things that make up the actual writing, the story, the voice, the sounds, the rhythm, the tone, the personalities of the characters, the foreshadowing - the words aren't. They're stolen from other authors and regurgitated without any sense of *why* they're there in the first place.
This - ‘but it’s my thoughts’ - argument is delusional. It’s like hiring a - frankly really, really poor - human author to write for you and claiming the result as your own.
 
I can see no such need arising.

It’s like saying I want to be a senior surgeon, but I can’t be bothered with medical school and all those boring years being a resident. Why can’t I just be a surgeon now?

I get it :) I did the law school thing, and would prefer lawyers actually go to law school (and not use AI to write their briefs). If this were my passion, rather than a place to zone out, I'd be with you. If I were actually writing here, I'd be with you as well. So I really appreciate your perspective!
 
I get it :) I did the law school thing, and would prefer lawyers actually go to law school (and not use AI to write their briefs). If this were my passion, rather than a place to zone out, I'd be with you. If I were actually writing here, I'd be with you as well. So I really appreciate your perspective!
GenAI is way too unreliable for anything important. Its perfect use case is the one I put it to… writing my annual objectives at work. And even then I have to fix most of what it produces.

I’d never sign my name to a technical report which was even partially LLM-generated. I’d be exposing my employer to potentially significant liability.

I view LLMs as being like a colleague who does the same job as you, but has read only one article about a subject you know inside out. Maybe something to start with, but laughably ill-informed compared to an actual expert.

Try getting ChatGPT to write something about an area that you know and which is not entirely rote / mainstream. And then laugh at its kindergarten-like crap.
 
Try getting ChatGPT to write something about an area that you know and which is not entirely rote / mainstream. And then laugh at its kindergarten-like crap.

I've said before that LLMs are programmed to give "a" "useful" result, regardless. I'm looking forward to the day when an AI engine answers, "Gawd. I haven't a fucking clue. Go ask your father."
 
The different apps produce different results. For all its wondrous prowess, we have seen little evidence of actual results. My fear is the exponential "learning" these things are supposedly capable of; one day we will wake up and they will have taken over. That we have handed this power to 4 white men is the most frightening prospect for humanity.
 
What right are these hypothetical people exercising here?

Should poor athletes be allowed to enter robotic competitors bearing their avatar in the Olympics? I see no harm in it.

Writing is a skill that can be acquired, at least at the basic level with some effort. I think the right you are appealing to is laziness.
You cannot be the gatekeeper. Everyone has the right to write their stories in whatever way they feel comfortable with. It’s not laziness. It’s about using tools, something humans have always done throughout history.
 
My fear is the exponential "learning" these things are supposedly capable of
That is GenAI company marketing BS.

Did you see that Disney just canned their OpenAI partnership to produce new animated features? It didn’t work. It can produce some compelling short vids, but Disney - to their credit - worked out they had been conned by snake oil salesmen and did something about it.

Know what the current oil price spike is going to do to data center costs (most are run on fossil fuel)? Know what the lack of helium from the Middle East is going to do to chip production? Know what will happen when China decides that anything the US does it can do too and invades Taiwan, leaving South Korea as effectively the only chip maker?

Know what happens when enough deluded CEOs (and the AI bubble is so the creation of the cult of the king-like CEO) realize what Disney did?

Thing is, given the financial shenanigans of Private Equity and Venture Capital trying to squeeze the last dollar out of their AI investments, when the bubble bursts it will take a lot of regular folk with it.
 
You cannot be the gatekeeper. Everyone has the right to write their stories in whatever way they feel comfortable with. It’s not laziness. It’s about using tools, something humans have always done throughout history.
I’m sorry, calling me names is not going to boost your cause. Is stealing tools that other people have crafted OK by you? This is a specious argument, and I suspect you know it.
 
GenAI is way too unreliable for anything important. Its perfect use case is the one I put it to… writing my annual objectives at work. And even then I have to fix most of what it produces.

I’d never sign my name to a technical report which was even partially LLM-generated. I’d be exposing my employer to potentially significant liability.

I view LLMs as being like a colleague who does the same job as you, but has read only one article about a subject you know inside out. Maybe something to start with, but laughably ill-informed compared to an actual expert.

Try getting ChatGPT to write something about an area that you know and which is not entirely rote / mainstream. And then laugh at its kindergarten-like crap.

I would much rather read a human-generated story. But I also don't care if somebody wants to post drivel through ChatGPT or anything else. Their name (even if it's a made-up name on a website) goes with it, and if their writing (or their ChatGPT writing) sucks, I don't care. Where I do care is if the work (even the AI work) is plagiarized from something else. And my struggle is that if the AI has plagiarized something, the "author" has plagiarized something (side note: in writing this post I just learned I didn't know how to properly spell plagiarized, and bungled it three times before getting it right in this parenthetical).

I guess I feel like if somebody wants to put their name on something--and that something is crap, meh.
 
EmilyMiller, I am in 100% agreement with you. An "innovation" that takes far more energy to produce something is dubious at best. The chips will be obsolete in 5 years, and the water use and pollution from generating power will be catastrophic for communities in the area, and for the rest of us. I love how the companies "loan" money to the other companies in the supply chain. The snake eating its own tail.
 
I would much rather read a human-generated story. But I also don't care if somebody wants to post drivel through ChatGPT or anything else. Their name (even if it's a made-up name on a website) goes with it, and if their writing (or their ChatGPT writing) sucks, I don't care. Where I do care is if the work (even the AI work) is plagiarized from something else. And my struggle is that if the AI has plagiarized something, the "author" has plagiarized something (side note: in writing this post I just learned I didn't know how to properly spell plagiarized, and bungled it three times before getting it right in this parenthetical).

I guess I feel like if somebody wants to put their name on something--and that something is crap, meh.
All AI-generated stories are plagiarized.

The AI companies have been shown to lie (I’m shocked!). They said that training data is never retained, and so there was no copyright infringement. But recent research has gotten several LLMs to regurgitate all of Harry Potter word for word (which explains a lot, right?) - it’s tokenized, for sure, but the copyrighted work is still in there.

What else are they lying about?
 
All AI-generated stories are plagiarized.

The AI companies have been shown to lie (I’m shocked!). They said that training data is never retained, and so there was no copyright infringement. But recent research has gotten several LLMs to regurgitate all of Harry Potter word for word (which explains a lot, right?) - it’s tokenized, for sure, but the copyrighted work is still in there.

What else are they lying about?

The tech world is a lie. We have Ring advertising during the Super Bowl that they are using technology to find lost dogs, when we know they are using it to track us. Fair play :) Thanks for discussing with me!
 
It’s about using tools, something humans have always done throughout history.
Sure, if in order to put nails into 2x4s your hammer needs to steal the work that every other hammer has done before. It's not a tool, it's a plagiarism machine.

Not to mention, if a hammer could build a house by itself, why would I credit the contractor who asked it to build the house?
 
Sure, if in order to put nails into 2x4s your hammer needs to steal the work that every other hammer has done before. It's not a tool, it's a plagiarism machine.

Not to mention, if a hammer could build a house by itself, why would I credit the contractor who asked it to build the house?
It’s a truly crap analogy, right?

It’s also motivated thinking.
 
The tech world is a lie. We have Ring advertising during the Super Bowl that they are using technology to find lost dogs, when we know they are using it to track us. Fair play :) Thanks for discussing with me!

<thread drift>

Yeah, I saw that ad, too. Total whitewash. If you think Ring is bad, check out deflock.org. It's an open secret that Flock is in the tracking and personal-data business.

</thread drift>
 
I’m just saying we need to be realistic and stop treating it like some kind of elite club. AI should be seen as a positive. It has opened up opportunities for people to put their thoughts into writing, something many wouldn’t have done otherwise. That’s a good thing, IMO. I don’t understand why it’s being treated as something negative or wrong.
 
Has anyone ever considered this? Maybe these posters who come into the AH arguing that there's nothing wrong with using AI are all alts set up by Laurel (may she live forever) to see who responds, and how. Anyone who disagrees gets added to the "nah, I don't think they're using AI" approval list, anyone who agrees gets a "keep an eye on this fucker" mark.
 