Why is everything labeled as AI now????? I can't even post a story anymore.

Again, I believe the whole ecosystem is a massive scam driven by a massive moral panic. AI, like the wheel, steam engines, and electricity, is here to stay. People need to stop panicking.
Last I checked the wheel wasn’t based on stolen property. I’m getting kinda sick of being told that finding GenAI problematic and limited in use cases is equivalent to being a Luddite (and the Luddites had a point too).

LLMs are a dead end technology with few use cases outside the wet dreams of broligarchs. We are constantly told they are getting better. Having ingested all human writing, I’d like to know how.
 
I find AI problematic and very limited in use too. I find electricity problematic and limited in use, by the way.
I don't go around telling people that electricity is immoral, useless, or that we should go to extraordinary lengths to prevent people from using it. It's just a technology, like 1,000 other technologies, that many people find useful but that is certainly not going to solve all of our problems or do everything for us. Most of our problems have nothing to do with technology.
I have no idea if you are a luddite or not. All I have said is that the moral panic about AI has to end. No idea if you are a part of it or not.
 
I’m gonna follow @StillStunned’s sound advice.
 
I'm not sure what you are advocating here: a change of rules to allow AI, a change of policy where Lit doesn't conduct some sort of AI test, or what? Last I checked, Lit is privately owned, and Laural and Manu can set any rules they want.
 
I agree Laurel and Manu can do whatever they want as the owners of the site.

It is my OPINION that, given the impossibility of accurately detecting AI, the site should change its policy as follows:

1. Ban obvious AI crap. That means using the filters but raising the cutoff for what gets flagged. This should also reduce the number of submissions that need to be reviewed by hand.
2. Send automated messages to the author when their obvious AI crap is rejected, but make it clear that it's due to obvious AI crap, not Grammarly or whatever.
3. Review borderline cases by hand.
4. Refine the process until authors who have accumulated hundreds of followers, favorites, and comments are never flagged. That's your negative control group. If you are flagging these people, you need to tweak your cutoff scores higher.
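The four steps read like a routing function plus a calibration loop. Here is a minimal sketch in Python, purely to make the logic concrete; the detector score, the cutoff values, and every name below are made-up illustrations, not anything the site actually runs:

```python
# Hypothetical sketch of the four-step policy above. `ai_score` stands in
# for whatever detector the site uses; the thresholds are invented.

AUTO_REJECT = 0.95  # step 1: only "obvious AI crap" above this is auto-rejected
HAND_REVIEW = 0.70  # step 3: borderline scores go to a human reviewer

def route_submission(ai_score):
    """Return 'reject', 'review', or 'accept' for a detector score in [0, 1]."""
    if ai_score >= AUTO_REJECT:
        return "reject"   # step 2 would also send the automated explanation here
    if ai_score >= HAND_REVIEW:
        return "review"   # borderline: human looks at it
    return "accept"

def calibrate(cutoff, control_scores):
    """Step 4: raise the cutoff until the negative control group
    (established authors with hundreds of followers) is never flagged."""
    worst = max(control_scores)
    return max(cutoff, worst + 0.01)
```

With these invented numbers, a score of 0.8 goes to hand review rather than auto-rejection, and a control-group author who somehow scores 0.96 would push the auto-reject cutoff up past 0.96.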
 
Suggest away, and expect the same response everyone else has had. You can ask the others what that response might be.
 
Got to be honest, getting a triple whammy of “rejected for AI” stories this evening was absolutely gut wrenching.

It felt like three years ago when I couldn’t even get through the front door.

I use ZERO AI, have done for three years, I don’t use Grammarly, or anything else.

I’ve published a 91,000 word novel here which was literally made up of all my previously rejected stories stitched together to make the novel.

Fast coming to the conclusion again that Lit for publishing, at least, is far too complicated, far too much effort, and very little payback to make it worth my time as a writer.

And I’ve got to be honest - “rejected for AI” is a fucking cop out on behalf of the forum admin. I’ve no idea what is triggering it. A month ago a short story written exactly the same way as these went through in 72 hours.

Sorry to vent but my god - what an awful thing publishing on here still is.
 
Courage. I am only a newcomer here, with a few works, and I was horrified when my latest was rejected. But I was sure it was just an automatic rejection, a bot that examined a few things and spewed out a threshold number. I resubmitted with an explanation and was promptly accepted.

So the initial rejection process is automated, not a human or final judgement. Don't take it as a judgement.

That was not for AI. It does seem that the AI rejection process is more mysterious, and sometimes recurs after people resubmit. Some people say they have added a message begging reconsideration - but how many of them, I wonder, were just 'I didn't use AI, honest, guv, I just ran some grammar checker over it'?

Perhaps a careful, polite message explaining the case might work. (My explanation to Laurel was such that it was all or nothing: the automatic detector made a mistake, or at least flagged something that was actually correct. Unfortunately 'AI' is much less cut-and-dried, so arguing your case would be harder.) My take-home message is that rejection is not final. Don't give up.
 
I have calmed down a bit from last night’s “toys out of the pram” moment.

I am however still massively annoyed.

No, I’ve done that to death with Laurel previously three years ago.

There was one chapter of my novel I ended up cutting altogether because it went through so many rewrites/edits/reformatting that it ended up being a complete shadow of what I was trying to achieve, and didn’t make it in.

I’m going to take stock, re-read my rejected chapters, make some minor changes where I think they’re needed and try once more, but if they get rejected again for AI then I think I’m going to go publish elsewhere instead.

I cannot deal with the obviously varying standards of approach to approving work on this site. There are so many genuinely, obviously AI-written stories out there that get through, and I’ve seen and heard enough of other Lit members having frankly wild turnaround times for their submissions (like three hours recently? Wtf) to know that I’m not, like most, genuinely valued as a contributor here.
 
I think the criterion is effectively "could your story have been AI-written?" Not "was it actually AI-written?" Literotica is blocking certain styles of writing.

I imagine most mainstream publishers are doing this, and I can see why. What choice do they have?

This problem is also occurring in the music industry, although I'm not aware of any AI detection tools for audio.

I recently proofread the text of a game, which the dev claimed was all written by hand. I believed him, although I can see why he's constantly being accused of using AI to generate the narrative. All the images were AI generated (and really good). Luckily for him, itch.io and Steam don't use AI detectors on submitted games.
 
Why would we have to stop a moral panic about LLMs?

They use a huge amount of resources, and they don't do the thing they claimed they were going to do and never will. They are using stolen intellectual property in a way that is straightforwardly illegal, and only the slowness of the legal system and the unwillingness of governments to go to war with such large piles of money have kept them from being shut down already.

These LLMs are definitely NOT here to stay. The data centers that power them aren't just expensive to build; they are operated at a loss. Once they no longer have billions of dollars of investor funds to throw in the furnace, those data centers will be shut down. LLMs aren't a durable good; they're like MySpace or Ultima Online: when there's no longer money to keep the lights on, the service disappears from the internet. All of these things run at a loss. Without Wall Street continually shoveling money into the hole, the music stops and there are no chairs.

So given that the LLMs will inevitably shut down and vanish, why shouldn't I join the winning team and gloat over their failures?
 
You know this is not a good comparison: MySpace is basically the mother of all online social networks, and the Ultima Online servers are, AFAIK, still online - since 1998.
 
We really don't have a great comparison for a failure of this magnitude. I mean, a Betamax tape can still be read on a Betamax player. ChatGPT is going to shut down and leave absolutely nothing behind except a bunch of giant security holes where various people's computers have function calls to a service that no longer exists.
 
Take your pick of dead NFTs.
 
LLMs produce a weighted network with a certain number of parameters. E.g., DeepSeek, and others, publish open-source weights. The weights can be stored and run on many modern home computers. DIY! Coupled to a search engine, these can keep their responses up to date. It may be desirable to retrain LLMs with better algorithms to improve and update them, but user AI won't be going anywhere soon.
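To put a rough number on "many modern home computers": a weights file is roughly parameter count times bits per parameter. A back-of-the-envelope sketch in Python; the function is hypothetical and the quantization choices are assumptions, not vendor figures:

```python
# Rough arithmetic for the size of an open-weights model file.
# bits_per_param depends on the quantization chosen (e.g. 16-bit
# full precision vs. 4-bit quantized); these are illustrative values.

def model_size_gb(n_params_billion, bits_per_param):
    """Approximate on-disk/in-RAM size of a weights file, in gigabytes."""
    return n_params_billion * 1e9 * bits_per_param / 8 / 1e9

# A 7-billion-parameter model quantized to 4 bits per weight is about
# 3.5 GB -- it fits comfortably in an ordinary 16 GB desktop's RAM.
print(model_size_gb(7, 4))   # 3.5
# The same model at 16-bit precision is about 14 GB; a 70B model at
# 4 bits is about 35 GB and needs workstation-class memory.
print(model_size_gb(7, 16))  # 14.0
print(model_size_gb(70, 4))  # 35.0
```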

The tulip bubble went bust leaving a lot of speculators ruined. Tulips are still with us, a much-loved feature of our spring gardens.
 
One difference between AI and NFTs is that I can imagine someone might want AI.
 
NFTs had a use case, which was money laundering. Person A could send an arbitrarily large amount of money to Person B and receive a hideous picture of an ape. This could legitimize the acquisition of money by Person B so that neither Person A nor Person B would have to report a financial irregularity or a transfer of contraband.

LLM-style AI makes sub-professional level creative output. What is the economic use case of that?

I mean, I can write but I can't draw, so having sub-professional level illustrations on my stories could potentially be better than I could do on my own, but that's not economically viable. There's no output that justifies the cost of running the servers. From an economics standpoint, it's just a complex machine to turn money into less money.
 
I think there are applications for non-creative purposes. I have found AI useful for generating simple macros and summarising meetings.

Also, we can assume that the cost of running models will decline, for example, the latest Chinese models run on cheaper chips.
 
Cheaper chips reduce the setup costs. Right now, the setup costs are so ridiculous that it is genuinely inconceivable for those data centers to ever make enough profit to justify their capital investment. But even if that went away - if, for example, the companies building these white elephants went out of business and the data centers got bought up by new companies for a song at a bankruptcy auction - it still doesn't matter. Actually running the things costs more than the value they generate. For every hundred dollars of intern-quality output they generate, they use more than a hundred dollars in resources.

The only potential future they have is that IF the outputs get good enough to be sold, THEN IP holders like Disney, Hasbro, and SONY can get the public facing ones shut down for being the lumbering IP violations that they are, and bring them in-house to extrapolate from intellectual property that the company actually owns. If LLMs ever get good enough to produce something economically viable, you won't be allowed to use them.
 
I think it will be like any new technology: it takes a while to find commercially viable uses.
 
This is what I mean by moral panic. You have a bunch of smart people here literally arguing that AI will go away because it's "useless". Just because AI sucks at writing fiction does not mean it sucks at everything.

AI is here to stay. I will never again (barring TEOTWAWKI):
1. format documents
2. write the first draft of R or Python code
3. write automation code on my own
4. parse free text responses by hand
5. use basic search engines for complex queries
6. edit photos or draw or design graphics
7. outline SOPs, RFAs, protocols, memos, contracts, or newsletters
8. search for that one sentence in a 200 page PDF
9. merge data from different sources by hand
10. write my own AI prompts

It does not matter if ChatGPT goes away. I have three different AIs offline already. As long as there is electricity, I will continue to use AI, as will millions of others.
 
1) other people not liking your new favorite toy does not represent a moral panic
2) conflict is not abuse
3) not all panics are moral panics
4) other people, elsewhere, having an actual panic about AI does not give you license to conflate those people with the people here, who are being quite reasonable, and paint them all as the same
5) the overstatement of harm is a classic manipulation tactic
 
From UW:
"Moral panics are instances of mass fear based on the false or exaggerated perception that some cultural behavior or group of people is dangerously deviant and poses a threat to society's values and interests."

@AwkwardMD, that's what's going on here. There is mass fear, society wide. Here in LE, there are false perceptions about AI. One of them is that it's useless, which is what I was commenting on. The idea that it is perceived to pose a threat is self evident from the fact that we are screening for it.
 