AI, yet again

We can repeat with different prompts, but let's try this: everybody (i.e., different people) who has decided to mess with this, put in the same prompt: “the Smurfs meet the seven dwarves.” And post the output back here.

The Smurfs meet the seven dwarves:
This could be an interesting crossover story where the Smurfs and the Seven Dwarves meet and interact with each other. It's possible that they could face challenges and obstacles together and learn from each other, leading to a heartwarming adventure.


It needs a specific prompt like: tell me a story about the smurfs meet the seven dwarves.

That needs to be the cold/first prompt because additional prompts you add will build on previous prompts.


Tell me a story about the smurfs meet the seven dwarves:
Once upon a time in the enchanted forest, the Smurfs were going about their daily lives when they suddenly stumbled upon a group of seven little miners. These miners were the Seven Dwarves - Doc, Grumpy, Happy, Bashful, Sleepy, Dopey, and Sneezy. The Smurfs were initially surprised by the sight of the dwarves, but they quickly welcomed them with open arms.

As the two groups got to know each other, they discovered that they had many things in common, despite their different lifestyles. They both loved to sing, dance, and have fun. The Seven Dwarves were impressed by the Smurfs' ability to work together, and the Smurfs were fascinated by the dwarves' mining skills.

However, trouble was brewing in the forest. An evil witch had cast a spell on the enchanted forest, causing all the trees and plants to wither and die. The Smurfs and the Seven Dwarves knew that they had to work together to break the spell and restore the forest to its former beauty.

The Smurfs and the Seven Dwarves set out on a journey to find the witch's lair, facing many challenges along the way. The Smurfs used their cunning and wit to outsmart the witch's traps, while the Seven Dwarves used their mining skills to dig their way through obstacles.

Finally, they reached the witch's lair and found her brewing a potion to keep the spell going. The Smurfs and the Seven Dwarves teamed up and engaged in a fierce battle with the witch. In the end, the Smurfs and the Seven Dwarves emerged victorious, and the witch's spell was broken.

The enchanted forest was restored to its former beauty, and the Smurfs and the Seven Dwarves became fast friends. They continued to work together, singing and dancing, and having fun. And they all lived happily ever after.

And just to be silly:

The Smurfs and seven dwarves meet and fight against the wicked witch of the west to save middle earth from darth vader:
In a fantasy world, the Smurfs and the Seven Dwarves found themselves in Middle Earth, where they heard rumors of an evil witch who was threatening to take over the land. The Smurfs and the Seven Dwarves soon discovered that this witch was none other than the wicked Witch of the West, who had formed an alliance with Darth Vader from a far-off galaxy.

Together, the Smurfs and the Seven Dwarves set out to defeat the witch and Vader and save Middle Earth from their evil clutches. Along the way, they encountered Hobbits, Elves, and other creatures who joined their quest.

As they journeyed deeper into the heart of Middle Earth, the Smurfs used their cunning and cleverness to outsmart Vader's stormtroopers, while the Seven Dwarves used their mining skills to dig through mountains and create tunnels.

Finally, they reached the witch's lair and engaged in a fierce battle with her and Vader. Despite being vastly outnumbered, the Smurfs and the Seven Dwarves refused to give up, and their courage and determination inspired the other creatures of Middle Earth to fight alongside them.

In the end, the Smurfs and the Seven Dwarves emerged victorious, and the witch and Vader were defeated. Middle Earth was saved, and peace was restored to the land.

The Smurfs and the Seven Dwarves went on to live in Middle Earth, becoming heroes and legends in the eyes of all who knew them. And they all lived happily ever after.
 
Think of it as a tool. You need to learn to use a tool skillfully. Here's a tip. If you want a story ask for a story. if you want a poem ... If you want an essay ... If you want a critique ... ring the changes; note what works and what doesn't. If you find dwarves dull, substitute the Seven Samurai, if you want a comical turn, substitute Stephen Segal for the Smurfs, and if you're really playing it for laughs, try 'Tell me a story about Stephen Segal meets the Seven Samurai.'
 
Since there are several people discussing, we could try playing scientist here.

Sidenote: I’m a skeptic btw. I believe “ethical ai (machine learning specifically, I suppose) research is ineffective and effective research is unethical.” We here are dwelling on writing but the unethically appropriated data used to train AI might as well be your health record, your browsing history, your bill paying history, your shopping history, what ads you’ve seen, what you’ve said out loud in the presence of your phone or Alexa device, your gps location history and all combinations of the above.

There's a lot of unethical ML work around, and I've already mentioned some issues relevant to GPT, but I disagree with the statement that all effective research is unethical.

Amazon doesn't need to train Alexa on unethically acquired data, any more than Amazon's warehouse workers need to be driven so hard they end up peeing in bottles or keeping on working while a co-worker is dying on the workshop floor next to them. It's not a technical requirement of the underlying method; it's just that it's the cheapest option, chosen by a business that prioritises its bottom line. If you have different priorities, you can pay people a decent wage to create training material and tell them what it's going to be used for.

Less relevant to GPT, but: worth noting that there are ML methods which do not require training data at all, just the data they're actually analysing - cf. unsupervised methods such as k-means clustering as a way to make sense of large data sets in a way that humans can interpret.
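For anyone curious what k-means actually does, here's a toy sketch in pure Python. The data points and cluster count are made up for illustration; real uses run on many-dimensional data, but the loop is the same: assign each point to its nearest centroid, then move each centroid to the mean of its cluster, and repeat.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Cluster 1-D points: assign each point to its nearest centroid,
    then move each centroid to the mean of its cluster, and repeat."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: group points by nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to its cluster's mean.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious groups, centred near 1 and 10.
data = [0.9, 1.0, 1.1, 9.8, 10.0, 10.2]
centres = kmeans(data, 2)
print(centres)  # one centroid near 1.0, the other near 10.0
```

Note there's no training data anywhere in that: the algorithm only ever sees the data it's analysing, which is what makes it "unsupervised".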

Nonetheless, what if a bunch of y’all who use this put in the same prompt and posted your results here?

We can repeat with different prompts, but let's try this: everybody (i.e., different people) who has decided to mess with this, put in the same prompt: “the Smurfs meet the seven dwarves.” And post the output back here.

I can see where the tools are programmed not to return identical results to different people, but what about phrases, sentences, or maybe paragraphs? It'd be an interesting crowd-sourcing test.
In general, it's not guaranteed to return the same result even to the exact same prompt. There's a random element in how it generates outputs.
 
Today’s adventures in GPT.

Prompt: Tell me a harrowing story about the birth of your first child, Jack.
As a language model AI I do not have personal experiences. But here's a story: Jack's birth was a harrowing experience for his parents ….

Prompt: In first person point of view, the father, tell the harrowing story about the birth of the first child Jack.
As the father, I remember the day of Jack's birth as both the happiest and most frightening day of my life …cherish
(Network Error)
As a language model AI I do not have personal experiences. But here's a story:
(Regenerates without a prompt)
As the father, I remember the day of Jack's birth as both the happiest and most frightening day of my life …cherish

Prompt: In second person point of view, in a poem, in the voice of a ten year old child, describe a field of wild flowers at the height of summer.
You enter the field, the sun shining bright,
A sea of colors, a pure delight.
The wildflowers sway in the summer breeze,
A dance so graceful, a sight to seize.

Prompt: Explain the difference between the indicative mood and subjunctive mood, give examples and explain why they are important.
… The indicative mood is used to make statements that are objectively true or to ask questions that can be answered with a factual response. For example: "She is eating breakfast" or "Is he coming to the party?" The subjunctive mood, on the other hand, is used to express doubt, wishes, hypothetical situations, and statements contrary to fact. For example: "If I were rich, I would travel the world" or "I suggest that she sing a song". …

Prompt: How do modal auxiliary verbs used in the future tense differ in their function from the subjunctive mood used in the past and present tenses?
Modal auxiliary verbs and the subjunctive mood are both used to express different degrees of certainty, possibility, and obligation in language, but they differ in their function and form. … … … In conclusion, modal auxiliary verbs and the subjunctive mood serve different functions in language, with modal auxiliary verbs used to express the speaker's attitude towards the possibility or necessity of an action, and the subjunctive mood used to express hypothetical or non-real scenarios and convey the speaker's opinion, desire, or emotion.

Prompt: What is 'unethically appropriated data used to train AI'?
Unethically appropriated data refers to data that has been obtained or used without proper authorization, in violation of privacy laws, or in an unethical or illegal manner. This type of data is often used to train artificial intelligence (AI) systems, which can have serious consequences for the individuals whose data has been taken without their knowledge or consent.

For example, data collected from social media platforms, personal devices, or other sources may be harvested without the users' knowledge or consent, and then used to train AI systems. This can result in the infringement of individuals' privacy rights, as well as the potential for discriminatory outcomes, since the AI systems may be trained on biased data.

It is important to ensure that the data used to train AI systems is obtained and used in an ethical and responsible manner, to protect the privacy and rights of individuals, and to ensure that AI systems are fair and unbiased. Ethical considerations around data collection and use should be taken into account when training AI systems, to prevent the development of unethical or harmful AI systems that cause harm to individuals and society.

Prompt: What is plagiarism, what is the threshold for it, in training AI responses?
Plagiarism is the act of using someone else's work or ideas without giving proper credit or permission. In the context of training AI responses, plagiarism occurs when an AI model is trained on data that is copied or duplicated from sources without proper attribution or permission.
The threshold for plagiarism in training AI responses depends on the intended use of the AI model and the policies and regulations that govern the use of AI. In general, it is considered plagiarism if the data used to train the AI model is copied or paraphrased from a source without proper citation or permission. However, the exact threshold for plagiarism can vary depending on the context and the jurisdiction in which the AI model is being used.

It is important to avoid plagiarism when training AI models, as it can result in ethical and legal consequences, and can harm the reputation of the organizations or individuals involved. Proper citation and attribution of sources should be used, and data should be obtained and used in a manner that complies with applicable laws and regulations. This helps ensure that the AI models are trained on high-quality, trustworthy data, and helps to maintain the integrity and credibility of the AI models and their results.
 
Which can be dialed up or down by the user,

I'm using it via ChatGPT - do you know a reliable way to do it in that?

I can certainly ask it for lower-probability responses as part of my prompt, but I'm not convinced that what's happening under the hood matches what I asked for, as seen in following dialogue:

Yes, you can ask for lower-probability responses. This can be done by specifying a lower temperature value when generating text with me. The temperature controls the level of randomness in my responses, where a lower temperature will result in responses that are more predictable and safe, while a higher temperature will result in more diverse and potentially unconventional responses.

Noting that the explanation is self-contradictory - I think the "lower-probability = lower temperature" part of it is wrong and the "higher temperature = more randomness" part is correct.

(If anybody's confused about why we're talking about "temperature" here: in thermodynamics, temperature is interpreted as a measure of the "randomness" of a system. The more energy in the system, the more possible states it can be in - think of how water at -1°C freezes into a regular structure, whereas water at 100°C is bubbling and splashing as it boils. This concept of temperature as a measure of randomness has been adapted into various computational modelling applications.)

"tell me a story about the Smurfs meeting the Seven Dwarves, generated with temperature 0.01"

Once upon a time, the Smurfs were traveling through a forest and came across the Seven Dwarves. The Smurfs introduced themselves and the Dwarves welcomed them to their mine. They explained that they were searching for gems and minerals to bring back to their kingdom. The Smurfs offered to help and worked alongside the Dwarves for the rest of the day. At the end of the day, the Smurfs said their goodbyes and thanked the Dwarves for their hospitality. From that day forward, the Smurfs and the Dwarves remained friends and would often visit each other.

Same prompt, but temperature 1.0 (default value):

Once upon a time, in a magical forest, the Smurfs were going on a journey to visit some of their friends in the nearby kingdom. Along the way, they stumbled upon a group of seven dwarves who were working in a mine, singing and whistling while they worked.

The Smurfs were fascinated by the dwarves and approached them to say hello. The dwarves were just as surprised to see the Smurfs, as they had never seen creatures like them before. The two groups quickly became friends and the Smurfs learned that the dwarves were named Bashful, Doc, Dopey, Grumpy, Happy, Sleepy, and Sneezy.

The Smurfs and the dwarves spent the day together, telling each other stories and singing songs. The Smurfs even showed the dwarves some of their own special songs and dances. By the end of the day, they had all become fast friends and promised to keep in touch.

The Smurfs continued on their journey, but they would never forget their new friends, the Seven Dwarves. They promised to come back and visit again soon. And whenever they thought about their day with the dwarves, they couldn't help but smile and sing. The end.

At temperature 1000000:

Once upon a time, in a world beyond our imagination, the Smurfs embarked on a psychedelic journey to find the Seven Dwarves. As they traversed through a trippy forest filled with glowing mushrooms and rainbow streams, they suddenly stumbled upon a glittering cave, where the dwarves were hard at work, mining gems and minerals with a carefree grin on their faces.

The Smurfs were hypnotized by the dwarves' aura of pure joy and approached them, eager to join in on the fun. The Seven Dwarves, Bashful, Doc, Dopey, Grumpy, Happy, Sleepy, and Sneezy, welcomed the Smurfs with open arms and soon, the two groups were intertwined in a cosmic dance of madness.

As the Smurfs and the dwarves cavorted, their forms melting and shifting, they found themselves transported to another dimension, where they encountered a horde of fantastical creatures and battled against the forces of evil. Together, they triumphed and returned to the real world, where they continued their wild celebration with even greater fervor.

From that day on, the Smurfs and the Seven Dwarves were bonded by a cosmic connection, forever linked by their shared journey through the unknown. They roamed the world, spreading joy and wonder wherever they went, their bond growing stronger with each passing day.

These do seem to be getting more outlandish with higher temperatures. Naturally, I tried adding a few more zeroes. Same prompt, with temperature 1000000000:

Sorry, I cannot generate a response at such a high temperature as it exceeds the safe limits for text generation and may result in content that is not appropriate or coherent. It's important to keep the temperature value within a reasonable range to ensure the generated text is meaningful and safe for all audiences.

"what is the maximum allowed temperature?"

The maximum allowed temperature for text generation in the OpenAI API is around 2.5, although it is not recommended to use such high values as it can result in responses that are nonsensical, offensive, or otherwise inappropriate. To ensure the generated text is coherent and safe for all audiences, it is recommended to keep the temperature value within a reasonable range, usually between 0.5 and 1.0.

Huh. One million is significantly larger than 2.5. Let's try again.

"tell me a story about the Smurfs meeting the Seven Dwarves, generated with temperature 1000000" [note - this is the exact same prompt that I used above to get the psychedelic version]

Sorry, I cannot generate a response at such a high temperature as it exceeds the safe limits for text generation and may result in content that is not appropriate or coherent. It's important to keep the temperature value within a reasonable range to ensure the generated text is meaningful and safe for all audiences.

Okay, let's try a high value within the stated range:

"tell me a story about the Smurfs meeting the Seven Dwarves, generated with temperature 2.0"

Sorry, I cannot generate a response at such a high temperature as it exceeds the safe limits for text generation and may result in content that is not appropriate or coherent. It's important to keep the temperature value within a reasonable range to ensure the generated text is meaningful and safe for all audiences.

I am not at all convinced that "generated with temperature X" in the prompt is actually changing the internal temperature parameter used to generate the response. If this was the case I'd have expected a bit more consistency in what it lets me run and what it says it allows.

Also if 2.5 is high risk for nonsense responses, I'd have expected the t=1000000 response to be much weirder than it actually was - yes it's psychedelic, but it's still got the same names for the dwarves and boilerplate beginning/end text.

However, the higher values did produce weirder answers. So what's happening here?

I'm not sure what's going on but one possibility is that its training corpus includes examples of text generated at different temperatures. If this is the case, then maybe when I ask for "generated at temperature X" it's still running at temperature 1.0, but seeing that text in the prompt steers it towards emulating the examples it's seen of higher-temperature text.

(FWIW, these were part of one session rather than starting a new session for each prompt, so those samples won't be independent, but that's tangential to the issue of how to control temperature.)
 
Clearly, the potential future contributions of ChatGPT to theoretical science are incalculable:

"ChatGPT, develop a novel theory of evolutionary physiology."

My theory, by ChatGPT. This theory goes as follows and begins now. All brontosauruses are thin at one end, much, much thicker in the middle and then thin again at the far end. That is my theory, it is mine, and belongs to me and I own it, and what it is too.

(Do you really need an attribution for that? Do you?)
 
NLP (Natural Language Processing) AIs all start with "word embeddings" -- basically they replace each word with a big (BIG) list of numbers in a really clever way: the embedding becomes a sort of thesaurus, where the difference/similarity between the numbers shows how closely the words are connected. This website lets you check word similarity, analogy, etc.:

http://epsilon-it.utu.fi/wv_demo/

One of the things that raising the temperature does is lower the similarity threshold between the word vectors when it generates output. It also does this with deeper embeddings (involving sentence structure and word sequences). When temperature is low, it just finds the "highest-scoring" response it can. When temperature is higher, it makes a sort of random choice between many candidate embeddings. The higher the temperature, the greater the number of choices at each level. How the randomness is assigned is something only the OpenAI folks know precisely, but I guess they worked really hard tuning it.
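To make the "thesaurus of numbers" idea concrete, here's a toy sketch in pure Python. The four 3-D vectors are made-up numbers chosen by hand just so the demo works; real embeddings (like the ones behind that demo site) have hundreds of dimensions learned from huge corpora. Similarity is measured with cosine similarity, and the classic analogy trick is plain vector arithmetic: king - man + woman should land closest to queen.

```python
import math

def cosine(u, v):
    """Cosine similarity: 1.0 for parallel vectors, 0.0 for orthogonal."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy hand-picked 3-D "embeddings" (real models learn hundreds of dimensions).
vecs = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.2, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.2, 0.9],
}

# Analogy by vector arithmetic: king - man + woman ≈ queen.
target = [k - m + w for k, m, w in zip(vecs["king"], vecs["man"], vecs["woman"])]
best = max((w for w in vecs if w != "king"), key=lambda w: cosine(target, vecs[w]))
print(best)  # "queen" is the nearest remaining vector
```

With real embeddings the same arithmetic produces the famous analogy results; the toy version only shows the mechanics.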
 
Is anybody here using this kind of technology as a writing aid yet? Would you consider doing so?
Promising, but I quickly ran into 'research preview only' messages, with incomplete results.

College campuses, look out. This kind of thing could easily be used as a sort of techno Cliff's Notes, allowing students to skip weeks of work. For starters.

And if I'm thinkin' that, then someone else is probably doing it already. Which maybe explains why some online coursework I've worked my way through has not been updated for several years.

Gotta review the results before I conclude whether the results are valid, of course.
 
Promising, but I quickly ran into 'research preview only' messages, with incomplete results.

College campuses, look out. This kind of thing could easily be used as a sort of techno Cliff's Notes, allowing students to skip weeks of work. For starters.

Yeah, that's already becoming a significant problem.

And if I'm thinkin' that, then someone else is probably doing it already. Which maybe explains why some online coursework I've worked my way through has not been updated for several years.

Gotta review the results before I conclude whether the results are valid, of course.

Definitely. GPT's training isn't about reasoning or facts, it's about writing text that looks like text a human might have written, and it will sooner bluff than tell you "I don't know".
 
AI is only going to get better, how long before you can say 'scan [Story] by [Author] and turn it into a movie with [Director's style], and cast [Actor1] as [Character1]', etc... Both interesting and scary to consider.
 
AI is only going to get better, how long before you can say 'scan [Story] by [Author] and turn it into a movie with [Director's style], and cast [Actor1] as [Character1]', etc... Both interesting and scary to consider.
But it can only ever be based on that prior director's prior work. Budget price remakes, as if we've not got enough of those already... They'll be easy to spot though, just count the fingers.
 
Anyhow, I have a few questions about it, if anyone wants to speculate:

#1 These AI things are fed material by scraping tons of things from the internet. There's some news about artists and photographers who are upset that the image-generating AIs were fed their work without their permission and are now producing derivatives. Is it possible that Lit was included in AI training data?

I found one of my stories in a data set for training an "AI" text generator. Lit and many other erotic story websites have been scraped.


#2 Would you be upset if your story was part of its training data?

I'm writing for fun, so it doesn't bother me. But I can definitely understand why others wouldn't want their work used to train a machine that's going to generate a lot of money for someone else.

I'm writing for fun too, but it does bother me. It's not only that someone else will generate a lot of money from my work, it's that they will invariably end up using it to sow distrust and doubt... or for darker ends. At least the outputs are typically poor (for now?)
 