A Genuine Use Case for AI in Writing

When I was in school, many moons ago, electronic calculators were just coming out. There was a huge furor over the issue of students using them to "cheat" on their homework. Nowadays, we realize that they are just tools that let us work more efficiently.

AI generators are pretty much the same thing. They help you get the job done in less time by handling some of the grunt work. But you still need to understand the process yourself to get good results.
 
A lot of the discussion in this thread is people responding to the idea that "AI" is good for "research" and "authenticity", whereas in fact it's terrible at both.

What surprises me, frankly, is the number of fairly discerning writers who appear not to see the failures in sequential logic, the pile-it-on repetition, and the non-sequiturs in AI fictional content (and that's setting aside the problem that it makes shit up when used as a search engine).

It's the same with the folk who rave over AI-generated imagery and appear not to see the miscounted fingers, the confusion between flesh and clothing, the distorted hands. It really makes me wonder how people read and see.

I'm not saying the "tools" won't be significantly better in several years, but right now these gadgets are like cutting wood with a hammer - not fit for purpose.
 
If you're young, get on top of AI and how to use it in as many ways as you can; it will be with you for the rest of your life in an important way. Think of it like a teenage girlfriend or boyfriend: you can't quite figure out how it works, but you know it'll be a lot of fun finding out.

If you're elderly, think of it as your granddaughter, sent by a vengeful God to annoy you.
 
People still complain that Wikipedia is inaccurate, but it's widely used anyway. The problem is not with the platform or technology. It's with people treating a single source of information as oracular.
 
I'm not saying the "tools" won't be significantly better in several years, but right now these gadgets are like cutting wood with a hammer - not fit for purpose.

It's a tool. Like any tool, it has its uses.
I've only recently discovered generative AI. I knew about it, but hadn't tried it until about a month ago.
I haven't got much out of it. Then again, I haven't put much effort into learning how to use it, or much time playing with it.
 
Treating a single source of information as oracular certainly is a problem, but it's not the whole problem.

On Wiki, editors are expected to provide sources so that readers don't need to take the article on trust. If they don't, the material can be taken down. If somebody else reads the article and thinks "huh, that seems dodgy", they can go check the sources, and if the sources don't check out, they can raise a content dispute. If an editor has a history of adding bogus information they can be banned, and the edit history of an article is transparent. It's by no means perfect; I could go on for hours about the cultural problems and blind spots of Wikipedia, but it's vastly better than generative AI.
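That transparency is easy to check for yourself. As a quick illustration (just a sketch against the public MediaWiki API - the article title here is an arbitrary example), you can pull an article's recent edit history in a few lines of Python:

```python
import requests

# Query the public MediaWiki API for the ten most recent edits to an article.
API = "https://en.wikipedia.org/w/api.php"
params = {
    "action": "query",
    "prop": "revisions",
    "titles": "Artificial intelligence",  # arbitrary example article
    "rvprop": "timestamp|user|comment",
    "rvlimit": 10,
    "format": "json",
    "formatversion": 2,
}
resp = requests.get(API, params=params, timeout=10)
resp.raise_for_status()

page = resp.json()["query"]["pages"][0]
for rev in page["revisions"]:
    # Who changed what, when - all public, all attributable.
    print(rev["timestamp"], rev["user"], "-", rev.get("comment", ""))
```

Every edit's author, timestamp, and edit summary is public record, which is what makes the accountability described above possible.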

And if Wikipedia doesn't have an article about something, then it doesn't have an article. Generative AI is almost incapable of recognising when it doesn't know enough to be answering the question:

[Screenshot of a GPT conversation]
 
That reads pretty much like the way graduates with two degrees and their first policy job, but not one iota of practical experience, write government policy. Sigh.
 
The last few months I've had people suggest to do what you're doing, feed everything in and see if I get something back that can show me a path out of the problems I'm facing
This isn't what I've been doing.

Maybe I wasn't clear enough; I seem to remember being quite tired. In any case, all I've been doing is teasing words of affirmation out of the AI. I ask it a question about my own characters that I already know the answer to, and when it agrees with me, it lets me externalise my thoughts, because it tricks my brain into thinking I'm holding a conversation with someone.

I would never recommend using it for ideas or guidance.
 
The AI cannot associate
Well put, and that's a more accurate description than "hallucinate." In a person, hallucinations are the mind playing tricks on you, confusing tangible reality and internal fantasy, largely due to a neurological defect. In a program like ChatGPT, the problem isn't that the program can't tell real from fake; it's that ultimately the program doesn't know real and fake exist independently of one another. It's just a slightly better algorithm than what existed 5 years ago - and I do mean slightly - with a bigger database of information to search.
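To make that concrete, here's a toy sketch in Python (a word-level Markov chain over a made-up corpus - nowhere near what ChatGPT actually is, but it shows the basic property): the generator picks each next word purely from counts of what followed it before. Nothing anywhere in it represents whether a sentence is true.

```python
import random
from collections import defaultdict

# Tiny made-up corpus; the "model" is just successor counts per word.
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug"
words = corpus.split()

successors = defaultdict(list)
for prev, nxt in zip(words, words[1:]):
    successors[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        options = successors.get(out[-1])
        if not options:
            break  # no observed successor: the model simply stops
        out.append(random.choice(options))  # plausible, not checked for truth
    return " ".join(out)

print(generate("the"))  # e.g. "the cat ate the fish the dog sat on"
```

An LLM is enormously more sophisticated, but its output is likewise a matter of statistical plausibility rather than checked fact.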

Which is one of the many reasons I really wish people would STOP using the term AI at all. It's continuing this false equivalency between these programs and actual artificial intelligence. And there is no universe in which large language model (LLM) software, or anything else which exists today, qualifies as artificial intelligence. And no, you can't add the word "generative" to the phrase "artificial intelligence" and make it okay. It's a lie, every bit as much as someone selling a knock-off Segway and calling it a "hoverboard." If it's not hovering above the ground, it's not a fucking hoverboard. We should exclusively be calling these programs LLMs or another (accurate) term.

Real artificial intelligence is a program that can think, can reason, can genuinely evaluate a response from a person or a huge database of text, and tell you if it's accurate, if it's funny, if it's insightful, etc. An AI is a computer that can think with at least close to the complexity and nuance of a human brain. And we are still decades, if not centuries, away from anything approximating legitimate AI. Thus, calling this crap today AI is an absurd misunderstanding of the term and what it actually means, or it's people who do understand but take no care to stop perpetuating this misuse of the term - which is arguably worse. So much worse than people who use "coincidence" and "irony" interchangeably, or who use the phrase "witch hunt" in situations where it's the puritanical pricks being accused of wrongdoing.

I agree with those who say the OP's method isn't about generating the story, thus it's not the same thing as using these programs to actually steal the work of others. That said, as long as these LLM programs continue to bastardize the name of AI, while also scraping the internet for text with no real regard for whether or not the author of any text is okay with that, I will not use them for any reason. I've even stopped doing beta-test projects for any software calling itself AI. And given my financial situation right now (waiting on SSD) that's tricky, because it seems wannabe-AI is about 50% of testing gigs right now, not to mention the writing jobs that involve training such software to be more accurate. I'm not saying that to judge people who are using these tools; the ethics of use-cases like the OP's are not as clear-cut as that. It's simply a choice that makes me feel crappy, in ways I am not prepared to overlook.
 
Pretty much. Like those constant conversations you have in the kitchen with your cat to keep away the voices in your head. :sneaky:
Dude, you should just let the voices in. That's the way I've written dialogue for years.

One of the greatest things about finally living alone, after 45 years of living in a multi-generational household, is that I can 'talk to myself' whenever the hell I want. Also, that any food I leave in the kitchen is still there when I want it. :D
 
That reads pretty much like the way graduates with two degrees and their first policy job, but not one iota of practical experience, write government policy. Sigh.
Considering the breadth of experience of Wikipedia editors (some of whom are experienced public policy writers) and the research studies demonstrating how effective this methodology is at not just removing crap from Wiki articles but also dissuading disruptive editors from continuing to target Wikipedia for vandalism, your response sounds more than a little pompous and condescending.
 
@ElectricBlue can clarify if I've misunderstood, but I think his derision there was directed at the GPT conversation I'd screenshotted, not at Wiki?
 
So much worse than people who use "coincidence" and "irony" interchangeably, or who use the phrase "witch hunt" in situations where it's the puritanical pricks being accused of wrongdoing.

Worse than people who confidently proclaim things that are completely wrong because they have no clue what they're talking about? (Tip: Look up the definition and history of Artificial Intelligence.)
 
I'm very familiar with both the definitions and the history, and McCarthy's own statements about AI are the basis for the wider understanding that it is not merely the synthesis of human intelligence but the creation of a functionally equivalent (or greater) version thereof - making everything calling itself AI today a laughable farce of actual artificial intelligence.

I do not come to such conclusions without doing the research, regardless of the topic. Thus, if you think I have no clue what I'm talking about, I suggest that it is you that needs to do more research.
 
Nonsense. "The synthesis of human intelligence but creating a functionally equivalent (or greater) version thereof" have always been among the goals of Artificial Intelligence research, often under the label "Artificial General Intelligence" (AGI), but the field includes all sorts of much more limited work, and has done so since before McCarthy's Dartmouth workshop (e.g. early AI programs to play games, logic solvers, expert systems, etc.). There is now an 80-year history of people working in AI, and they've been producing AI systems for almost as long.

The tendency for people to deny that anything AI accomplishes is actually AI is such a long-standing frustration in the field that it has its own name and Wikipedia page: "The AI Effect"

As McCarthy once put it: "As soon as it works, no one calls it AI anymore."
 