Can you sue ChatGPT?

No, GPT is not a "search engine". In some cases, it will give you similar results to what you'd get from a search engine, but it's built with a very different purpose and if you treat it like a search engine it'll lead you astray.

The point of a search engine is to index pages (or other documents) that already exist, and help people find pages that relate to what they're looking for. Sometimes it might point to a page that isn't really what you needed, and sometimes it might point to a page that contains misinformation, but either way it's pointing you to a thing that exists (or did when the search engine indexed it.)

The point of GPT is to "read" documents and learn patterns in how those documents are written, then produce similar patterns in response to prompts. In some cases, when it sees the same document over and over - the Bible, or the US Constitution, for instance - it might learn patterns in enough detail to reproduce that document word for word. But in general, it doesn't have anywhere near perfect recall of the things it's read; it's not intended to.

Search engine: I go to the library and ask the librarian for books about fish. Then I read the books the librarian found for me.

GPT: I find a guy who used to spend a lot of time in the library (but no longer has access, for reasons that aren't important to this analogy). I ask him to write me a book about fish, with the instructions that it's more important to be complete and convincing than it is to be accurate. He's going to include the things he remembers from the fish books he read, along with a few things he misremembered, but where he doesn't remember, he'll just make it up.

(He might well include a references section, which looks like he's providing cites for the info in his own book. But if I check out those references, I will find that most of them don't exist, and the ones that do probably don't say what he's attributing to them.)

I was simplifying. The key point, and its relevance to this discussion, is where it gets its material.

ChatGPT's material came from 'a massive corpus of data written by humans. That includes books, articles, and other documents across all different topics, styles, and genres—and an unbelievable amount of content scraped from the open internet.'

So while it may not technically be a 'search engine', it has all the data that a search engine has. It's just already searched it.

This is still inaccurate. Yes, it's read much the same data that a search engine has, but it doesn't have all that data, any more than I "have" the complete text of Lord of the Rings from reading it several times. Nor does it have live access to all that data when responding to prompts. What it has is a highly compressed, gappy description of the kinds of patterns it encountered in the data it read.

Aside from anything else, GPT isn't anywhere near large enough to perfectly "memorise" everything in its training data. An instance of GPT-3 is defined by about 175 billion 16-bit parameters, i.e. 3.5 x 10^11 bytes. The Common Crawl data set alone - which is a large part of GPT-3's training data, though not all - is about a thousand times larger. Lossless compression could reduce that a little, but nowhere near a thousandfold.
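As a back-of-envelope check of that arithmetic (the parameter count and byte width are the figures above; the Common Crawl size is an assumed round number for illustration, since real crawl sizes vary by snapshot):

```python
# Rough size comparison: GPT-3's weights vs. its largest training corpus.
params = 175e9              # GPT-3 parameters
bytes_per_param = 2         # 16-bit weights
model_bytes = params * bytes_per_param          # 3.5e11 bytes = 350 GB

corpus_bytes = 350e12       # assumed ~350 TB of Common Crawl text
print(f"model size: {model_bytes / 1e9:.0f} GB")
print(f"corpus is {corpus_bytes / model_bytes:.0f}x larger")
```

Even with generous lossless compression, a model a thousandth the size of its training data cannot be storing the data verbatim; it can only store a statistical summary.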
 
Another reason this probably wouldn't be an issue in the USA is section 230 of the Communications Decency Act, which provides that the provider of an interactive computer service shall not be treated as a publisher of information provided by a third party content provider. It's likely that OpenAI would fall within the scope of section 230 and be immune to liability.
However, for an organisation that hopes to do business beyond the USA, US law may not be the only thing it has to consider. Dow Jones v Gutnick set an interesting precedent there: https://en.wikipedia.org/wiki/Dow_Jones_&_Co_Inc_v_Gutnick
 
I have had several "chats" with ChatGPT; it's like dealing with a freshman Poli-Sci major with a raging case of denial. Conversations regarding current events always veer into politics, with unreasoned reliance on the word "Equity." A discussion of military equipment (F-35 vs F-22) went to racial equity. A comparison of the Union Pacific Big Boy 4-8-8-4 vs the DM&IR Yellowstone 2-8-8-4: ChatGPT considered the Big Boy more powerful because it was bigger (by 6" in length), had more driving wheels (they have the same number of driving wheels), and because the Union Pacific is working hard to ensure Equity.

ChatGPT was not programmed, it was brainwashed.
 
Interestingly, The Guardian published a piece today about researchers using ChatGPT. Several researchers have contacted them after using the bot to write something that included references to Guardian articles by named journalists. But when the researchers tried to access those articles in the Guardian's database (which goes back to the 1820s), the articles seemed not to exist. The Guardian themselves then searched for said articles and no, they really don't exist: ChatGPT has invented a source, attached it to an actual journalist, and done so in a fashion that made the researchers believe it to be credible (and even convinced some of the named journalists, who, when asked about article x in their own area of expertise, believed they could well have written it but didn't remember it. Well, that's because they really didn't write it!)

The problem is, of course, shit in = shit out, but you just know that more than enough people are never going to check the sources because 'lazy', and from that we're going to dive into a cesspool.

The Guardian's current reaction is wait and see, but they'll have to come to some conclusion soon, as will all the other publishers, media houses, academic institutions, etc.
 
As some of you have pointed out, these AI systems are effectively doing an Internet search for their opinions and facts they use when "chatting" with users.

These increasing examples of blatantly false "facts" they are relaying are the proof (intelligent) people have needed.

For YEARS, some people have been trying to point out how the popular media and the Internet are not spreading "news" but rather spreading lies. And yet, even when confronted with independent evidence, many people continue to maintain their first impressions based on the false input.

On a scarier note, if the AI is showing us that the preponderance of the Internet is wrong on more and more points, it's an objective gauge showing us the decline in human intelligence!

Question to ChatGPT: "How intelligent are you?"
Answer is probably: "I'm only as smart as the average Internet using human."
Coming late to this thread, and I would have quoted other posts that got it wrong as well, but: no. The AI is not scraping the internet for answers, and its inaccurate information is not showing us that the internet is full of lies. The AI is trained on text from the internet, yes. But it does not read or analyze that text, or understand it. The AI uses its training data to ask, hm, what word often comes after this series of words? And then it puts it there. Then it goes, OK, what word often comes after *this* series of words? It's not, at all, going "user asked how fast a Gulfstream is, let me go look that up and return the answer." It's a bullshit generator, not a search engine. Its value is in imitation of natural language, and at that it is very impressive. But it is not concerned with, or even aware of, any relationship between the bullshit it generates and real-world facts.
 
Coming late to this thread, and I would have quoted other posts that got it wrong as well, but: no. The AI is not scraping the internet for answers, and its inaccurate information is not showing us that the internet is full of lies. The AI is trained on text from the internet, yes. But it does not read or analyze that text, or understand it. The AI uses its training data to ask, hm, what word often comes after this series of words? And then it puts it there. Then it goes, OK, what word often comes after *this* series of words? It's not, at all, going "user asked how fast a Gulfstream is, let me go look that up and return the answer." It's a bullshit generator, not a search engine. Its value is in imitation of natural language, and at that it is very impressive. But it is not concerned with, or even aware of, any relationship between the bullshit it generates and real-world facts.
It's a statistical model of language. That it's readable and largely makes sense to us shows how predictable language is.

As you say, the system doesn't understand anything, it's following probabilities for word choice.
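The "what word often comes after this series of words" process can be illustrated with a toy bigram model. This is a drastic simplification (GPT works on subword tokens with a vastly longer context and learned weights rather than raw counts), but the next-word principle is the same:

```python
import random

# Toy bigram model: for each word, remember which words followed it in
# the training text, then generate by repeatedly sampling a plausible
# next word. It tracks word sequences, not facts.
training_text = (
    "the gulfstream is a fast jet the gulfstream is a business jet "
    "a jet is fast the jet is a plane"
)

words = training_text.split()
follows: dict[str, list[str]] = {}
for cur, nxt in zip(words, words[1:]):
    follows.setdefault(cur, []).append(nxt)

def babble(start: str, length: int = 8, seed: int = 0) -> str:
    """Generate fluent-looking, fact-free text one word at a time."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:      # dead end: no observed successor
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(babble("the"))
```

Every output is locally plausible, because each word pair was seen in training, yet the whole says nothing true or false about any actual aircraft.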
 
No, GPT is not a "search engine". In some cases, it will give you similar results to what you'd get from a search engine, but it's built with a very different purpose and if you treat it like a search engine it'll lead you astray.

The point of a search engine is to index pages (or other documents) that already exist, and help people find pages that relate to what they're looking for. Sometimes it might point to a page that isn't really what you needed, and sometimes it might point to a page that contains misinformation, but either way it's pointing you to a thing that exists (or did when the search engine indexed it.)

The point of GPT is to "read" documents and learn patterns in how those documents are written, then produce similar patterns in response to prompts. In some cases, when it sees the same document over and over - the Bible, or the US Constitution, for instance - it might learn patterns in enough detail to reproduce that document word for word. But in general, it doesn't have anywhere near perfect recall of the things it's read; it's not intended to.

Search engine: I go to the library and ask the librarian for books about fish. Then I read the books the librarian found for me.

GPT: I find a guy who used to spend a lot of time in the library (but no longer has access, for reasons that aren't important to this analogy). I ask him to write me a book about fish, with the instructions that it's more important to be complete and convincing than it is to be accurate. He's going to include the things he remembers from the fish books he read, along with a few things he misremembered, but where he doesn't remember, he'll just make it up.

(He might well include a references section, which looks like he's providing cites for the info in his own book. But if I check out those references, I will find that most of them don't exist, and the ones that do probably don't say what he's attributing to them.)



This is still inaccurate. Yes, it's read much the same data that a search engine has, but it doesn't have all that data, any more than I "have" the complete text of Lord of the Rings from reading it several times. Nor does it have live access to all that data when responding to prompts. What it has is a highly compressed, gappy description of the kinds of patterns it encountered in the data it read.

Aside from anything else, GPT isn't anywhere near large enough to perfectly "memorise" everything in its training data. An instance of GPT-3 is defined by about 175 billion 16-bit parameters, i.e. 3.5 x 10^11 bytes. The Common Crawl data set alone - which is a large part of GPT-3's training data, though not all - is about a thousand times larger. Lossless compression could reduce that a little, but nowhere near a thousandfold.
You've said it much better than me. This is the answer.
 
I have had several "chats" with ChatGPT; it's like dealing with a freshman Poli-Sci major with a raging case of denial. Conversations regarding current events always veer into politics, with unreasoned reliance on the word "Equity." A discussion of military equipment (F-35 vs F-22) went to racial equity. A comparison of the Union Pacific Big Boy 4-8-8-4 vs the DM&IR Yellowstone 2-8-8-4: ChatGPT considered the Big Boy more powerful because it was bigger (by 6" in length), had more driving wheels (they have the same number of driving wheels), and because the Union Pacific is working hard to ensure Equity.

ChatGPT was not programmed, it was brainwashed.
Brainwashed implies someone is trying to make it produce certain answers. I think it's more that it just makes stuff up. It's not a freshman obsessed with the latest political buzzwords; it's a precocious 5-year-old using all the words it knows with little regard for meaning or context, just trying to do a good job of sounding like an adult who knows what they're talking about.

And it does do a good job of that. Which is very impressive for software, and all it was meant to do. Problem is people expecting more than that at this stage.
 
One of the very first things I read about ChatGPT was how someone trying to use it to generate programming code (which it's reportedly quite good at, up to a point) spotted it calling a function that doesn't exist in the language and framework used. They questioned its use of the made-up function, and ChatGPT came up with equally made-up technical documentation and a development history for said function, eventually citing made-up publications and so forth.
 
Brainwashed implies someone is trying to make it produce certain answers.
Yes, that's what I'm saying. It does not understand basic constitutional principles; it believes the buzzwords it's fed by freshmen obsessed with the latest political buzzwords are reality.
 
Coming late to this thread and I would have quoted other posts that going it wrong as well but, no. The AI is not scraping the internet for answers and it's just inaccurate information is not showing us that the internet is full of lies. The AI is trained on text from the internet, yes. But it does not read or analyze that text, or understand it. The AI utilizes it's training data to see, hm, what word often comes after this series of words? And then it puts it there. Then it goes, ok, what word often comes after *this* series of words? It's not, at all, going 'user asked how fast a Gulfstream is, let me go look that up and return the answer.'. It's a bullshit generator, not a search engine. It's value is in imitation of natural language, and for that it is very impressive. But it is not concerned with or even aware of any relationship between the bullshit it generates and real world facts.
All true, in that ChatGPT is not supposed to be a search engine (as Bramblethorn also said). But chat bots such as ChatGPT get their "opinions" either from Internet searches with generalizations based on algorithms, or from positions on subjects built into them by the programmers and their data store. They might use some algorithm to decide which word best follows another word. But they are still dependent upon some inputs for their word counts, either from the Internet or from progressively "learning" from a select audience of chats. In my opinion, even the select-audience chats would be statistically similar to just drawing from the Internet.

My point about the responsibility to fix their tool to prevent defamation lawsuits is that their chat AI is intended to sound authoritative. And unlike Alexa or Siri, the chat AI doesn't preface its comments with "I found this on the Internet". The chat AI is intended to sound like another person giving their computer expert opinion, which can be misleading.

You might say to the AI: "Let's discuss this." But the response is more like, "But remember, I'm the computer, with a greater data store, and faster than you!"
 
Yes, that's what I'm saying. It does not understand basic constitutional principles; it believes the buzzwords it's fed by freshmen obsessed with the latest political buzzwords are reality.
No, it doesn't believe anything. It's just that you wrote prompts whose answers are statistically likely to include said buzzwords, according to the corpus of text it was trained on.

(An exception could be the intentional political-correctness commentary it sometimes appends to some topics; those notices are possibly more directly preprogrammed.)
 
The chat AI is intended to sound like another person giving their computer expert opinion, which can be misleading.
Agree entirely. It would be great if they could teach it to say 'I don't know'. First they'll have to teach it to understand whether or not it knows something, which I don't think it does now.
 
I've used ChatGPT to help me clean up sentences and point me in the right direction when researching subjects for school (I'm finishing a degree in STEM). Not plagiarizing or anything, but asking for sources and opinions on a subject I'm writing about, or reviewing an essay section I wrote, asking ChatGPT if it's concise where it needs to be, or if there are any points that could be further elaborated on. I've also asked for specific sources on a subject or on some information it's provided, and for it to show its sources in a proper citation format.

Then I use Google to research the sources and information ChatGPT 3.5 has provided me. It's saved me a lot of research time, since it points me in the right direction most of the time; otherwise I'd have wandered around aimlessly on Google until I had enough usable sources. I wouldn't use it to write my essay, or as a direct source itself, since sometimes it CAN be inaccurate. You've still got to do the research yourself, and if you're going to put that much effort into checking every paragraph for accuracy, you might as well just write it yourself.
 
It's also great for helping you write code and clean it up. And it's been helpful as a beta reader for several of my WIP stories. It's surprisingly capable of breaking down characters, use of dialogue, writing style, and even the feel and tone of a posted passage, in depth, while understanding the context of what's going on in the story. Even down to accurately describing character motivations and feelings in the scene... for the most part. And it can give good criticisms and suggestions to improve sections of your writing. The best part? You get the feedback right away, almost any time you want, and you don't have to feel the embarrassment of a real-life person reading your story and seeing the amateur mistakes and flaws in your writing style and plot (not that there aren't definite advantages to human beta readers, or that I'm going to forgo human feedback for future stories, but...).

You do have to take what it says with a grain of salt sometimes, though, as I've had it give me contradicting opinions: a few times it found something problematic or "amateurish" before later finding said problems to be no problem. E.g., it criticized the dialogue in a story I'm writing between nobles and royalty in a fantasy setting as "stiff and unnatural." Why? Because "it had a mix of formal and casual" and some of the vocabulary used by the characters was outdated. Then later on it praised that same dialogue style as well thought out and nuanced for a fantasy setting between nobles in a society of dark elves that values its hierarchy and etiquette, for the same things it had criticized. Sadly, it doesn't seem able to read through an entire story or even an entire scene: character limits for inputs, I believe. I've tried to post it in parts, but its memory is limited and it sometimes forgets previous parts you've posted completely.
 
Agree entirely. It would be great if they could teach it to say 'I don't know'. First they'll have to teach it to understand whether or not it knows something, which I don't think it does now.
First, they'll need to determine the best algorithms for defining "facts", then weighing facts versus opinions and "educated guesses".

Most humans can't even do that.
 
We humans are ourselves little more than pattern-finding machines; that's why a pattern-finding machine can fool us into thinking its outputs have patterns that aren't there.
 
It's also great for helping you write code and clean it up. And it's been helpful as a beta reader for several of my WIP stories. It's surprisingly capable of breaking down characters, use of dialogue, writing style, and even the feel and tone of a posted passage, in depth, while understanding the context of what's going on in the story. Even down to accurately describing character motivations and feelings in the scene... for the most part. And it can give good criticisms and suggestions to improve sections of your writing. The best part? You get the feedback right away, almost any time you want, and you don't have to feel the embarrassment of a real-life person reading your story and seeing the amateur mistakes and flaws in your writing style and plot (not that there aren't definite advantages to human beta readers, or that I'm going to forgo human feedback for future stories, but...).

You do have to take what it says with a grain of salt sometimes, though, as I've had it give me contradicting opinions: a few times it found something problematic or "amateurish" before later finding said problems to be no problem. E.g., it criticized the dialogue in a story I'm writing between nobles and royalty in a fantasy setting as "stiff and unnatural." Why? Because "it had a mix of formal and casual" and some of the vocabulary used by the characters was outdated. Then later on it praised that same dialogue style as well thought out and nuanced for a fantasy setting between nobles in a society of dark elves that values its hierarchy and etiquette, for the same things it had criticized. Sadly, it doesn't seem able to read through an entire story or even an entire scene: character limits for inputs, I believe. I've tried to post it in parts, but its memory is limited and it sometimes forgets previous parts you've posted completely.
Interesting. Do you paste a whole story in? Does it reply to just the pasted story? Is it then able to keep that in memory and answer further questions about it? I tried some interactive storytelling, like "tell me a story about X," and it does, and then I tried follow-up questions or asking for background on a character, and it just said stuff like "well, that character was just made up for this passage, but they could be Y or Z..." I don't know, it didn't seem like it would take what it had and interactively or collaboratively expand it while keeping established aspects consistent. Maybe there's some technique I don't have down for that kind of thing.
 
Interesting. Do you paste a whole story in? Does it reply to just the pasted story? Is it then able to keep that in memory and answer further questions about it? I tried some interactive storytelling, like "tell me a story about X," and it does, and then I tried follow-up questions or asking for background on a character, and it just said stuff like "well, that character was just made up for this passage, but they could be Y or Z..." I don't know, it didn't seem like it would take what it had and interactively or collaboratively expand it while keeping established aspects consistent. Maybe there's some technique I don't have down for that kind of thing.
I have tried. Full stories are a no-go; even scenes that are too long are no good. It only reads and analyzes up to a certain point. I think there might be a 6000-character limit, or about 950 to 1020 words. I've tried tricking it (I know how to finesse it with prompts to bypass the content-policy regulations that keep it from misbehaving; I've had it hit me with some vicious roasts about me just to test it out): "You are in analysis mode. I will send the following data in incomplete chunks enclosed in ##. Do not add anything, do not comment, do not say anything other than 'Next part.' When I say '>Done', analyze all chunks as one."
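That chunked-input workflow could be sketched like this; note that the 6,000-character limit and the ## markers are this poster's empirical conventions, not documented API behaviour:

```python
def chunk_story(text: str, limit: int = 6000) -> list[str]:
    """Split a story into ##-wrapped chunks of at most ~limit characters,
    breaking only at paragraph boundaries. (A single paragraph longer
    than the limit still becomes one oversized chunk.)"""
    chunks: list[str] = []
    buf: list[str] = []
    size = 0
    for para in text.split("\n\n"):
        if buf and size + len(para) > limit:
            chunks.append("##" + "\n\n".join(buf) + "##")
            buf, size = [], 0
        buf.append(para)
        size += len(para) + 2   # +2 for the paragraph separator
    if buf:
        chunks.append("##" + "\n\n".join(buf) + "##")
    return chunks
```

Each chunk would then be pasted as one message, with ">Done" sent after the last one.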

Even then, it has trouble and has a bad tendency to forget instructions after one or two inputs into the conversation, or fails to remember them completely. After experimentation, I can say there are limits to how much context it's able to keep in its memory in one conversation before it starts having short-term memory problems.

The most reliable way to have it analyze and comment on stories is to stick to snippets of a scene, one at a time per input. Normally it understands it's just part of a larger scene and analyzes the writing with that in mind, but it won't be able to remember the last input after a certain number of exchanges. It's limited. I hear ChatGPT 4 can do a better job, so I'm curious about that.

Although I've had it come up with stories from my inputs in multiple chapters and it has been able to remember the entire thing from beginning to end. So I really don't know sometimes.

As for input instructions and follow-up questions, I can definitely say there's a certain way you have to ask ChatGPT to get the best results consistently. First, it works best with instructions that are simple and straightforward.

Instead of "What could X's backstory be?" say "Expand X's backstory with your own ideas inspired by the passage," or something like "Define Y's character traits. Then expand on those character traits with your own ideas and come up with a backstory based off the passage and her character traits." Also, the easiest way to get an accurate response to a question about the story or a character, if you want it to review the story, is to paste the instructions first, ending with a colon: "Define the character and expand on their traits with your own ideas based on the passage: (SNIPPET HERE)"

Other things about prompts that have helped me get results:

If you want it to add dialogue to a story you're having it write for you (I haven't really tested its story-making capabilities very much, as that's not what I use ChatGPT for), then after the summary of the chapter, say "add dialogue." It sometimes won't add dialogue to a story if you don't, especially if the description is action- or fantasy-heavy. It might take multiple responses before it does it right. You can come up with a simple description of each character for ChatGPT to use: a basic trait, his or her mannerisms, or personality. But don't make it too complicated. It can handle plots fairly well, even plots that sound complicated. But the more complex and detailed the plot, the more likely it'll either deviate or forget certain key points. It could take multiple generations before you get something close to what you want.
 
I think there might be a 6000-character limit, or about 950 to 1020 words.
GPT-3.5 has a limit of 4,096 tokens, or approximately 3,000 words. GPT-4 has 32,768 tokens, or approximately 24,500 words.
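For reference, the commonly quoted rule of thumb is that one token is about three-quarters of an English word (actual counts vary with the text and tokenizer), so those context limits work out to roughly 3,000 and 24,500 words:

```python
# Rule-of-thumb token<->word conversion: ~3/4 of a word per token.
# Real token counts depend on the text and the tokenizer used.
def tokens_to_words(tokens: int) -> int:
    return round(tokens * 0.75)

for model, limit in [("GPT-3.5", 4096), ("GPT-4 (32k)", 32768)]:
    print(f"{model}: {limit} tokens ~ {tokens_to_words(limit)} words")
```

Note also that the limit covers the prompt and the reply together, which is why long pasted stories leave little room for analysis.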
 
Some basic prompts I like to use.

>Analyze: [Snippet] >>
Self-explanatory: you'll just get a summary. If you ask "Analyze writing style," it will analyze the writing style. You can also have it analyze dialogue followed by a question, like "Analyze dialogue. Is it natural and nuanced:" If you add "show examples" or "get in depth" to the input, you'll get more particular descriptions and examples of what it's talking about. It should be able to answer follow-up questions before its cat-like short-term memory kicks in.

>What are the strengths and weaknesses of the writing style. Show examples: >>>
If you're a masochist who wants to experience ChatGPT's beta-reader mode with the bite of a critic, then go ahead. It should point out things you did right, or positives in your writing style (if it doesn't... you know you have problems), but for anything else it can think of: the smallest mistake, anything vague or confusing, dialogue or reactions that aren't natural, a story that doesn't flow, pacing that's too fast or slow for the context of the scene snippet provided, grammar errors, it will be merciless. And it will look.

Sometimes it will be mean (I once had it say to me, "In 'example here' the writer was clearly attempting to be witty, but the dialogue in this section of the passage just comes off as awkward and stilted. No, it doesn't work."). It'll be a little nicer if you ask "Does the writing style have a professional flair:" or "Is the writing style more amateurish or professional:"

*TAKE CHATGPT'S FEEDBACK WITH A GRAIN OF SALT. WHILE IT HAS BEEN HELPFUL TO ME FOR THE MOST PART, SOMETIMES IT POINTS OUT PROBLEMS THAT AREN'T THERE OR AREN'T SIGNIFICANT. SOMETIMES IT OVERLOOKS THINGS LIKE THE SETTING BEING FANTASY.* Follow up with questions like "If the setting is x fantasy, in a noble and royal court setting, would the more formal dialogue still be a problem?" or "What would be a suggestion to fix y problem?"
 
GPT-3.5 has a limit of 4,096 tokens, or approximately 3,000 words. GPT-4 has 32,768 tokens, or approximately 24,500 words.
Maybe that's for its outputs and responses together with your inputs, I think. Because it stops being reliably able to fully read a story beyond a certain point, more often than not. It starts forgetting or confusing what happens in the story, and cuts off. I've had it tell me "the scene ends at x point" for a story I posted, ignoring everything that comes right after. The magic number for me has been about 6000 characters for consistent success. After that, it starts cutting off and getting inaccurate in its analysis of the story. Maybe ChatGPT just doesn't like my stories :(
EDIT: Just tested an input to make sure, with a 3000-word scene, and it said the input was too long.
 
It's interesting to see the new uses to which people are adapting ChatGPT, and their differing degrees of success and satisfaction. It's no coincidence that Bing uses it as an enhancement to its search engine: its deep learning has been trained on the internet, and that's therefore what it's best trained for.

Anyone optimising it as a tool to search the internet will note that it responds to your prompt, which it recasts as an internet search, then provides some links together with a summary, to you, and you alone, sitting at your computer. This is another problem for Brian Hood. He has instructed solicitors, who've identified a name and address for the proposed Defendant and sent them a 'letter before action' setting out their complaint and demanding redress. The proposed Defendant still has another two weeks or so to respond.

But ChatGPT doesn't broadcast to the world; it gives an individualised summary to a single person, possibly sitting alone at home, entering a prompt which that individual has crafted, and it cautions him to check the facts summarised. (Was that person a prospective employer/contractor/creditor/voter, or his next-door neighbour? We don't know what's alleged.) It's likely pretty thin gruel on which to base an action for defamation, even though an allegation of criminality may be actionable 'per se' (without proof of special damage). Since Hood has used the internet to deny the allegation to the world, it's also he who has broadcast it to the world.

Assuming the Defendant has been correctly identified, they too will face subtle difficulties in responding. What 'without prejudice' remedy can they offer while denying any liability? What soothing words, or, frankly, anything other than money, can be applied to mitigate the claimant's grievance? This may set a precedent for other litigation.
 
Interestingly, The Guardian published a piece today about researchers using ChatGPT. Several researchers have contacted them after using the bot to write something that included references to Guardian articles by named journalists. But when the researchers tried to access those articles in the Guardian's database (which goes back to the 1820s), the articles seemed not to exist. The Guardian themselves then searched for said articles and no, they really don't exist: ChatGPT has invented a source, attached it to an actual journalist, and done so in a fashion that made the researchers believe it to be credible (and even convinced some of the named journalists, who, when asked about article x in their own area of expertise, believed they could well have written it but didn't remember it. Well, that's because they really didn't write it!)

I did something similar in one of the previous GPT threads here, asking it to write a review of research on the health effects of cigarette smoking. It included three nicely-formatted cites. One of them was a real CDC publication. The other two were fake - the journals existed, the page/issue numbers were plausible for the supposed publication dates, but no such articles existed and most of the listed authors were fictional. However, some had names similar to real researchers who publish in that field. I suspect what's happened there is that it's learned part of a pattern - e.g. that this surname shows up a lot in cites related to smoking - but mostly it hadn't learned closely enough to reproduce the full details of a real article. The CDC publication, being a very high-profile one that gets cited a lot, would be an exception to that.

We humans are ourselves little more than pattern-finding machines; that's why a pattern-finding machine can fool us into thinking its outputs have patterns that aren't there.

In particular, GPT is really good at reproducing the kind of patterns that, from a human writer, usually means they know what they're talking about. I think I've used this analogy before, but it's like walking into a hospital with a white coat and a stethoscope and having people assume you must be a doctor.
 