What is the future of human art?

As advanced as it may get, I don’t see AI ever capturing the creativity and nuanced emotions a human can experience. The art it creates may be technically perfect, but it will always lack soul.
I had an opportunity to visit the Louvre years ago. My wife is an artist so I thought I knew what art was. I was wrong.

To see a painting that looks alive, so soft that if you touched it you'd be touching velvet, is proof enough for me that AI will never replace humans in the field of art.
 
I had an opportunity to visit the Louvre years ago. My wife is an artist so I thought I knew what art was. I was wrong.

To see a painting that looks alive, so soft that if you touched it you'd be touching velvet, is proof enough for me that AI will never replace humans in the field of art.
Everyone should spend time in art museums. It can be almost transcendental.

Seeing them in person is a totally different experience than on a screen.
 
Simple. It’s the emotion, the passion, the feeling and intuition. At its best, AI will degenerate to a series of ones and zeroes. It will be cold and clinical, exacting, too exacting. It will imitate and approximate emotion, but I don’t believe it will ever truly feel it or create it through its art.
I have mentioned in some previous threads that I have been playing around with Amazon's "Polly" application, and I even posted an audio version of one of my stories here, using the app to provide an example of what it could produce. That was almost two years ago. It turned out okay but required a lot of manipulation of punctuation and spelling of words to get it sounding even slightly human.

I recently saw that there was a new feature in Polly, called "Long-Form", which is claimed to produce the most natural-sounding speech for longer content. I tried it out on a story that was slightly under 40K words. While not perfect and still requiring some adjustments for words that have different pronunciations (read, lead, etc.), there was very little input required to obtain almost human-like inflections, tones, and emotions in the speech. No one that I have shared the recording of this story with has recognized it as anything but human. It's been a small sample, but telling for me at least. (Send me a PM if you're interested in hearing a sample.)
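For anyone curious what that looks like in practice, here is a rough sketch using the AWS SDK for Python (boto3). The region, voice, and file name are just examples, and a full-length story would go through the asynchronous synthesis task rather than a single call:

```python
import boto3

# Sketch only: the region and voice are examples; long-form voice
# availability varies by region.
polly = boto3.client("polly", region_name="us-east-1")

response = polly.synthesize_speech(
    Engine="long-form",      # the newer long-form engine
    VoiceId="Gregory",       # an example long-form voice
    OutputFormat="mp3",
    Text="Yesterday she read the whole chapter aloud.",
    # Heteronyms (read, lead, etc.) may still need manual respelling or
    # other tweaks to come out right.
)

with open("sample.mp3", "wb") as f:
    f.write(response["AudioStream"].read())

# A ~40K-word story exceeds the per-request limit, so you would use
# polly.start_speech_synthesis_task(...) with an S3 output bucket instead.
```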

I don't see this as a substitute for creating audio books, and most distributors of audio books specifically state that they will not accept anything but human voices for their audio books. However, what is going to be the position of Literotica on AI-generated audio stories? I have one on here as a sample and wouldn't object to it being taken down if it violates any policy.
 
I don't see this as a substitute for creating audio books, and most distributors of audio books specifically state that they will not accept anything but human voices for their audio books.

It's going to be interesting to see when that changes. Right now they still need voice actors; the technology isn't quite there yet. Five years from now, when you can simply hand a book to the computer, then have a proof listener double-check it and call it good?
 
I doubt that their position will be any different than for stories.
I wonder. Like I said, my sample has been posted for almost two years and I made no secret of it being AI generated speech. A few of the comments address the obvious non-human speech as well. Maybe I'll PM Laurel and bring it to her attention for an answer.
 
I wonder. Like I said, my sample has been posted for almost two years and I made no secret of it being AI generated speech. A few of the comments address the obvious non-human speech as well. Maybe I'll PM Laurel and bring it to her attention for an answer.
I also have a story with AI content from Feb of last year. I put a note in for Laurel and she approved it. If I were to submit it today, it'd certainly be rejected.
 
What is your definition of art? Why do people have different tastes for what is considered good art?

AI is in its infancy; ChatGPT just passed the one-year mark. Imagine what it may be like a decade or two from now.

It's currently using inspiration from existing works but that's not the end of what it may become. The feedback it receives from its products will inform its future production. Do you think it will be static and only mimic?

It will continue to "learn" what affects human emotions, not just specific existing works but the kinds of experiences people are drawn to. It could make use of multimedia to improve the mood of a population or use techniques and methods similar to what performing 'mentalists' do to lead their audience emotionally and logically, most likely initially for some commercial benefit.

Twenty years ago many people thought self-driving and flying cars were impossible. Do you truly believe AI will never create anything new and unique? An electric unicycle was impossible until it wasn't.
 
I also have a story with AI content from Feb of last year. I put a note in for Laurel and she approved it. If I were to submit it today, it'd certainly be rejected.
Was it AI-generated text or speech? There may be a difference in the policies. Are a person's speech characteristics something that can be copyrighted?
 
I had an opportunity to visit the Louvre years ago. My wife is an artist so I thought I knew what art was. I was wrong.

To see a painting that looks alive, so soft that if you touched it you'd be touching velvet, is proof enough for me that AI will never replace humans in the field of art.
Saw a Monet exhibit. Same. No way AI touches the depth of what he did.
 
Trigger warning:
This thread might contain discussions of "AI" and its future impact on the creative arts. If you are absolutely infuriated by the concept of "AI" or how it relates to Lit, please just stop reading here. I respect your stance, and even though I would love to hear your perspective, I believe your health and happiness are more important than indulging my curiosity. No, I am not trying to organize a secret cabal here to overthrow the rules; I am just genuinely interested in a discussion about some of the things I saw mentioned.

Plea:
This is a sensitive topic, let's be mindful of each other's feelings in all directions. If you feel someone steps on yours, assume it was a mistake and try not to jerk your knee in response, but if possible let's all avoid stepping on each other's toes. If any mod feels this thread is a bad idea, please feel free to close it down.​

So, with those two out of the way, the premise is really simple. We have a technology which eventually - within the foreseeable future - is going to dominate our everyday lives. It already has some profound impact on literature and graphical artistry, but that impact is likely going to become much more significant as the technology progresses.

The main question is: what do we stand to lose?
- In the very short term, with AI technology as it is now, as it gains wider acceptance?
- In the longer term, when AI gets to the point that it can mimic our creativity and work perfectly, or even surpass it?

What do we stand to gain? If anything.

On a more philosophical level: what do we even bring to the table, as humans, when it comes to art?
When you read a piece of art, does it matter how much effort went into it? Do you enjoy something more, knowing that someone struggled for endless nights to find the right expression as opposed to the work of the person who just asked around to get ideas from their writer friends? (or AI buddies - as sad as that might sound)

Finally, what are your hopes for our future with regards to AI in art? How do you see AI in your future life, if it's in it at all?

To kick it off, here are my answers to those questions:
- In the short term, I believe it's mostly sleep we will lose, as the controversies surrounding the unregulated, irresponsible and/or unmitigated use of AI will create all sorts of problems for people. What we see on Lit, or how others struggle to find their own footing in this new landscape, are prime examples of this.
- We will also see industries turned upside down, Neo-Luddite movements cropping up, fighting against the inevitable.

- In the long term, the main loss I believe will be our ability to be surprised by art. An AI artist can give you exactly what you want, always, and future AI will be able to give you what you want perfectly. Sure, you will be able to ask it to surprise you, but the thrill of discovering something will be gone. You will just be handed random things that you either like or do not like, but you will not have the active act of 'looking for it' that gets rewarded when you finally find it. Other than that, when it comes to the actual mechanical form of the art in question, I believe AI art and human art will become interchangeable sooner than most of us would like.
- We might also lose the sense of need, without which there really can be no sense of fulfillment. If I can just ask the AI to give me anything I want any time I want, then I will never need anything. All my needs have been met.

- As for gains, I think it's obvious. If you think of something, you can just have the AI make it for you. Want to have a fancy decoration, just have the AI design it and 3D print it out for you. Want to have a particularly themed song? Just ask for it. Illustration for something? Just ask for it. In a sense, it is the consumerism utopia, especially since these AIs would likely be in the hands of a few big companies due to their sheer size and the technological requirements to run them.

- On the philosophical level, I don't believe in the soul. I think that everything in the universe can be reproduced as long as we can map its composition properly and have the means of reproducing that composition. In the age of quantum computing and neural networks, it is inevitable that we will eventually make a computer brain that can think and feel just like us. I think we will soon possess the capacity to create machines that can surpass us in every possible mental characteristic, and this scares the hell out of me, as I do not consider our species grown up enough to start playing God and create intelligent, thinking beings. Then again, we were never good when it came to restraining ourselves to the sensible, so as I wrote above, I find that this is inevitable and might also happen a lot sooner than we would like, if it has not happened already.
I think, in my opinion:
The missing element for AI will be invention.
Somebody always has to prompt it to answer a question, which of course first has to be developed.
Of course, AI can then delve into its huge database and come up with suggestions.
What it can't do is come up with the question, which has generated all the marvellous inventions throughout history. Going back to the wheel... fire...
What do humans bring to the table... Questions....

Cagivagurl
 
I wonder. Like I said, my sample has been posted for almost two years and I made no secret of it being AI generated speech. A few of the comments address the obvious non-human speech as well. Maybe I'll PM Laurel and bring it to her attention for an answer.
I would hate to see it taken down. We already live in a world where we deny our own history and use the rules and standards of today to judge and censor our past.

That story is 2 years old. If we start to apply rules of today to things that were done in the past, then where do we draw the line? 1 year? 2 years? 5? 10? no line?

I don't think you should worry about something you did 2 years ago that was in line with the rules and norms at that time. Nor should you be ashamed of having lived your life in a particular way, so long as that way was in line with society's norms at the time. We evolve, learn and grow. That's part of being human. Denying our flawed past will only doom us to eventually regress and repeat our mistakes.

The way people are starting to judge history by the rules and norms of the present is just tragic and shows how disconnected people have gotten from the reality of human life.
 
There are three things here: the artist, the art, and the person "consuming" the art - i.e. reading it, looking at it, listening to it, etc.

The question is how much the artist matters. I think the fact that there's a person behind the art matters a lot. If the Lascaux paintings turned out to have been done not by prehistoric people but by machines, I don't think we'd resonate with them; in fact, we'd feel cheated that our emotions got played with. It's the incredible communication across tens of thousands of years -- "wow, I can feel a connection with whoever did that, so long ago".

It's the communication between people that really matters in art, I think.

But a lot of "art" is just formulaic, done by people who are going through a sort of sophisticated plagiarism of previous art. That stuff is easily mechanisable.

If I find a landscape beautiful (I mean a real, actual landscape, not a picture of one), I don't worry or care about whether God "made" it or not. I just get a good feeling from it. Similarly with birdsong, or the scent of roses. We don't always need an artist to appreciate things aesthetically.

But art is something extra -- it's a communication between people. AI "art" isn't that, and will never be that.
 
I would hate to see it taken down. We already live in a world where we deny our own history and use the rules and standards of today to judge and censor our past.

That story is 2 years old. If we start to apply rules of today to things that were done in the past, then where do we draw the line? 1 year? 2 years? 5? 10? no line?

I don't think you should worry about something you did 2 years ago that was in line with the rules and norms at that time. Nor should you be ashamed of having lived your life in a particular way, so long as that way was in line with society's norms at the time. We evolve, learn and grow. That's part of being human. Denying our flawed past will only doom us to eventually regress and repeat our mistakes.

The way people are starting to judge history by the rules and norms of the present is just tragic and shows how disconnected people have gotten from the reality of human life.
I have no problem with it being taken down. It is even stated in the story's disclaimer note that this was done as an experiment so that others could comment on the technology of AI speech generation at the time. Things have progressed to the point where the sample that my story represents isn't even relevant anymore.

I'll leave that decision up to Laurel. AI speech generation is different from text and art generation in many ways. Whether she decides to treat it differently in the policies of this site is up to her. I simply think that some clarification related to "AI Content" might be required soon. If original human-created text is used to then create AI-generated speech, is that an issue for her?
 
I have no problem with it being taken down. It is even stated in the story's disclaimer note that this was done as an experiment so that others could comment on the technology of AI speech generation at the time. Things have progressed to the point where the sample that my story represents isn't even relevant anymore.

I'll leave that decision up to Laurel. AI speech generation is different from text and art generation in many ways. Whether she decides to treat it differently in the policies of this site is up to her. I simply think that some clarification related to "AI Content" might be required soon. If original human-created text is used to then create AI-generated speech, is that an issue for her?
I understand and respect your point. My reply was reflecting more on the present climate, where we are socialized to accept altering the past so it can conform better to the present. I feel that to be wrong on so deep a level that I can barely put it into words. Tragic is not a word sufficient to describe how I feel about it.

It is your work, of course, and the site is run by Laurel, so I have no say in what is going to happen, nor would I blame you for wanting to conform, nor Laurel for wanting to stay on the safe side of things. I would, however, feel terrible if it were to happen, and I can only be thankful that I'll likely not learn about it.

Ignorance - or in this case, simply not knowing - is bliss, as they say.
 
I understand and respect your point. My reply was reflecting more on the present climate, where we are socialized to accept altering the past so it can conform better to the present. I feel that to be wrong on so deep a level that I can barely put it into words. Tragic is not a word sufficient to describe how I feel about it.

It is your work, of course, and the site is run by Laurel, so I have no say in what is going to happen, nor would I blame you for wanting to conform, nor Laurel for wanting to stay on the safe side of things. I would, however, feel terrible if it were to happen, and I can only be thankful that I'll likely not learn about it.

Ignorance - or in this case, simply not knowing - is bliss, as they say.
I have to assume that when my audio story was submitted all those months ago, Laurel listened to at least some of it. Maybe not, since it was based on an already published story here. If she did listen to it, the unmistakable AI-generated speech would have been obvious to her.

To your point about the past, I agree with you. We all know that there are many older stories on this site that stretch or break the current rules as defined. Let sleeping dogs lie.

I throw my story out there to seek clarification for others who might submit something with AI-generated speech in the future and face a rejection.
 
Honestly, there is such a hubbub about this that it's either A) going to get the regulation hammer or B) going to be so commercialized that access to its powers will be curtailed for the general public and left in the hands of corporations who already can't effectively string together a story on their own.
 
Honestly, there is such a hubbub about this that it's either A) going to get the regulation hammer or B) going to be so commercialized that access to its powers will be curtailed for the general public and left in the hands of corporations who already can't effectively string together a story on their own.

Can't stop the signal.

Once the genie is out of the bottle, no one will be able to put it back in.
It really has the potential to upend industries. Think of the Kevin Smiths of the world making movies by maxing out their credit cards.
Someday soon you will be able to make a movie sitting at your computer that looks as good as anything Hollywood is turning out.
You are a screenwriter who got replaced by AI?
Make the movie yourself with AI.
 
Honestly, there is such a hubbub about this that it's either A) going to get the regulation hammer or B) going to be so commercialized that access to its powers will be curtailed for the general public and left in the hands of corporations who already can't effectively string together a story on their own.
Well, not sure if you knew, but it is entirely possible to run a pretty decent LLM on a single-GPU home machine, as long as it is a sufficiently high-performance, recent card with lots of VRAM. Obviously the LLM you can run at home will not compare or compete with the likes of ChatGPT, but it doesn't have to either, as you can have it trained to the very specific task you want it to do, whereas most commercial LLMs try to cover a range of areas, so they require the greater complexity.
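For context, here is a minimal sketch of what that looks like today with the open-source tooling. The model name is just an example, and it assumes the transformers and accelerate packages plus a card with enough VRAM:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-Instruct-v0.2"  # example open-weights model

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision to fit in consumer VRAM
    device_map="auto",          # place layers on the GPU automatically
)

prompt = "Write the opening line of a story set in a rainy harbour town."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```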

This entry barrier will get even less significant as hardware gets more and more advanced, especially since manufacturers will start making hardware specifically with AI in mind, potentially even for home consumers.

There is a decent-sized and growing community of tech enthusiasts out there who are running various LLMs and image-generation AIs on their home computers as we speak, so no. As I wrote earlier in some other thread, this is a Pandora's box, and now that it's been opened, there really is no closing it.

I am honestly not sure if you can regulate how a new tool is used, and that's what the current generation of AIs are: tools to make stuff. Think about it. If you don't have control over the manufacture of the tools or over what they can output (see above), then how can you even tell the new tool was used in the first place, as opposed to some older tool?

You really can't, and that's what we are suffering from right now. People are desperately trying to tell whether a painting was done by hand or with a machine, but if the outcome is identical, there really is no way to tell. And the outcome can be identical: a machine can (either by design or through sheer luck) create something just as good as a human, or a human can create something that looks just as mechanical as if a machine had made it.

Good luck to whoever has to sort out this mess. I'm glad it's not me.
 
Honestly, there is such a hubbub about this that it's either A) going to get the regulation hammer or B) going to be so commercialized that access to its powers will be curtailed for the general public and left in the hands of corporations who already can't effectively string together a story on their own.
I don't see how it can be regulated, TBH. Even if (massive 'if') the US/Canada, the EU and Australasia can agree on something, there are plenty of other actors who may have their own ideas, and then there are the unregulated places in the dark corners of the web, too.
 
Well, not sure if you knew, but it is entirely possible to run a pretty decent LLM on a single-GPU home machine, as long as it is a sufficiently high-performance, recent card with lots of VRAM. Obviously the LLM you can run at home will not compare or compete with the likes of ChatGPT, but it doesn't have to either, as you can have it trained to the very specific task you want it to do, whereas most commercial LLMs try to cover a range of areas, so they require the greater complexity.

This entry barrier will get even less significant as hardware gets more and more advanced, especially since manufacturers will start making hardware specifically with AI in mind, potentially even for home consumers.

There is a decent-sized and growing community of tech enthusiasts out there who are running various LLMs and image-generation AIs on their home computers as we speak, so no. As I wrote earlier in some other thread, this is a Pandora's box, and now that it's been opened, there really is no closing it.

I am honestly not sure if you can regulate how a new tool is used, and that's what the current generation of AIs are: tools to make stuff. Think about it. If you don't have control over the manufacture of the tools or over what they can output (see above), then how can you even tell the new tool was used in the first place, as opposed to some older tool?

You really can't, and that's what we are suffering from right now. People are desperately trying to tell whether a painting was done by hand or with a machine, but if the outcome is identical, there really is no way to tell. And the outcome can be identical: a machine can (either by design or through sheer luck) create something just as good as a human, or a human can create something that looks just as mechanical as if a machine had made it.

Good luck to whoever has to sort out this mess. I'm glad it's not me.
Note: LLM - Large Language Model.

ChatGPT is an LLM.
 
There is a decent-sized and growing community of tech enthusiasts out there who are running various LLMs and image-generation AIs on their home computers as we speak, so no. As I wrote earlier in some other thread, this is a Pandora's box, and now that it's been opened, there really is no closing it.
I am one of these people. I run Stable Diffusion (image AI) at home. It's all open source with models and software easily accessible. All it takes is a decent video card with enough VRAM.

The community is very active and there are advances nearly every day. Compared to a year ago, SD has come on in leaps and bounds in what it can do.
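To give a sense of how low the barrier is, this is roughly all it takes with the open-source diffusers library; the model ID, prompt, and settings are just examples, and a CUDA-capable card with a few GB of VRAM is assumed:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example SD 1.5 checkpoint
    torch_dtype=torch.float16,         # half precision to save VRAM
).to("cuda")

image = pipe(
    "an oil painting of a lighthouse at dusk, soft brushstrokes",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

image.save("lighthouse.png")
```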

I don't see how it can be regulated, TBH. Even if (massive 'if') the US/Canada, the EU and Australasia can agree on something, there are plenty of other actors who may have their own ideas, and then there are the unregulated places in the dark corners of the web, too.

If they try to regulate something that is already out in the wild, it'll just push it underground.

And the people who want to regulate it are the giant companies that want to create regulatory capture to make it difficult or impossible for anyone else to enter the market. They are trying to corner the market via government regulation.
 