Story incorrectly rejected due to AI

What I love about AI is how much it's challenging us to think about what it means to be human.
Is it really doing that though, or is it being hyped up as "artificial intelligence" by marketeers, media, people who want to sell you something?

It's a genuine question, since I don't have a clue how the AI tools we're seeing at the moment actually work. I can see that a very large part of it is predictive - algorithmic, to use @Bramblethorn's word (that is, what's the next most likely word or image, based on all the words or images that have been fed into a database) - but what's the basis of your comment, "challenging us to think about what it means to be human"?

So far, I've seen nothing remotely human or truly creative in AI generated content. I do see a truckload of fast mimicry, but nothing gives me any sense of incipient "self awareness" within the machine. What have I missed (or am I being too cynical about hype)?
 
So far, I've seen nothing remotely human or truly creative in AI generated content
I think that might be a case of moving goalposts.

The main challenge posed by AI is not actually to do with its human-ness or otherwise, it's to do with its power to do "clever stuff". That power needs to be kept in the public domain, and not become the exclusive domain of a small number of "bad actors".

The challenge I was referring to, once the realm of science fiction, is the one repeatedly returned to by Philip K. Dick: What makes us different from machines? As a pure mechanist, I've always been happy to say "nothing, fundamentally". And the "deus" is running scared right now, as AI closes in on it, each astonishing iteration of its capabilities smashing the barricades around our sacred right to consider ourselves extra-special.

"Self-awareness", for some people , is the last bastion. I don't see that as fundamentally difficult to implement in software. There's no point in doing it right now, but it might become important when, say, an AI system is built that literally "values itself", seeing its survival as an important goal or subgoal in what its been trained on (like reducing carbon emissions, prosecuting a war, reducing costs at Amazon warehouses...)
 
"Self-awareness", for some people , is the last bastion. I don't see that as fundamentally difficult to implement in software.
Experts in the field disagree with that.

We don't even know how consciousness and self-awareness work in humans. Replicating them in a computer is a non-trivial issue when you can't even define them.

As it is now, AI is a complicated math solver.
 
Is it really doing that though, or is it being hyped up as "artificial intelligence" by marketeers, media, people who want to sell you something?

It's a genuine question, since I don't have a clue how the AI tools we're seeing at the moment actually work. I can see that a very large part of it is predictive - algorithmic, to use @Bramblethorn's word (that is, what's the next most likely word or image, based on all the words or images that have been fed into a database) - but what's the basis of your comment, "challenging us to think about what it means to be human"?

So far, I've seen nothing remotely human or truly creative in AI generated content. I do see a truckload of fast mimicry, but nothing gives me any sense of incipient "self awareness" within the machine. What have I missed (or am I being too cynical about hype)?
We have to separate some concepts here.

The neural networks underlying what is commonly called AI these days are really, as nice90sguy said, a simplistic and comparatively crude approximation of how our brains work. They get more and more complex year over year, but we are still far away from making them as intricate as our brains are when it comes to how neurons and synapses work. What we can do, however, is make them much bigger than our actual brain is. GPT-4 is - by my information - about on par in size with the average human brain: 100 billion 'neurons' with 100 trillion synapses (links between the neurons). Future iterations are rumored to be ridiculously more complex than that. But anyway, mechanically, we have crude approximations of brains that are growing ever larger.

But these are just the meat. What they are is determined by what they are trained to do.

LLMs are just that: Large Language Models. They are purpose built to "understand" and "create" text. They are very good at it, but their understanding of it is different from ours. They understand text in the way of knowing what text usually looks like, which words usually follow each other, what the usual response is when someone asks what 1+1 is, and so on. They know so much about text, and their knowledge is broken down into such minuscule chunks, that by recombining their knowledge they can give answers that seem almost intelligent. That makes you believe that they understand what you are asking of them, when in fact they merely understand your written text and know what the proper response to that kind of text is.

Worse yet, the inputs and outputs of neural networks are not digital (1s or 0s), but "analog" (well, a digital representation of analog values down to a sufficiently high accuracy).
They will never tell you outright that the proper answer to the question "how much is 1+1" is "2".
What they will tell you is:
- that the answer "2" has a 64.651...% probability of being the correct next word to say.
- that there is ~11% probability of the next correct word being "why" (maybe leading to asking why you ask)
- or ~9% of it being "I" (as in "I don't know")
etc.
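
To make that concrete, here's a toy sketch (made-up numbers, not real model output) of how a model's raw scores over candidate next words become a probability distribution via softmax:

```python
import numpy as np

# Made-up raw scores ("logits") a model might assign to candidate next
# words after "how much is 1+1" -- illustrative, not real model output.
words = ["2", "why", "I", "two", "the"]
logits = np.array([3.0, 1.25, 1.05, 0.6, 0.3])

# Softmax turns raw scores into a probability distribution over the words.
probs = np.exp(logits) / np.exp(logits).sum()

for word, p in sorted(zip(words, probs), key=lambda pair: -pair[1]):
    print(f"{word!r}: {p:.1%}")
```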

This is probably the main reason why LLMs hallucinate: in order to make their output more varied, they don't just pick the top answer, but semi-randomly pick one of the likely next words instead. This artificially induced variability, however, sometimes leads to very dumb outcomes.
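
A minimal sketch of that semi-random picking, assuming a simple temperature-based sampler (illustrative only, not any particular model's implementation):

```python
import numpy as np

rng = np.random.default_rng()

def sample_next_word(words, probs, temperature=0.8):
    """Semi-random pick of the next word. Temperature near 0 approaches
    always taking the top answer; higher values flatten the distribution,
    adding variety (and, occasionally, dumb outcomes)."""
    scaled = np.power(probs, 1.0 / temperature)
    scaled /= scaled.sum()              # renormalise so probabilities sum to 1
    return rng.choice(words, p=scaled)

print(sample_next_word(["2", "why", "I"], np.array([0.65, 0.11, 0.09])))
```

Run it a few times and you'll occasionally get "why" or "I" instead of "2" - exactly the induced variability described above.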

In essence, LLMs are neural networks trained to figure out the proper next word to say for any given text, and with their insane complexity, they are VERY VERY good at it. So good, in fact, that they can easily pass as a living conversation partner.

Now, can LLMs ever become sentient? That is the question, isn't it? The short answer is no. They are built for a single purpose and are also lacking certain features that would be needed to facilitate that. However, that might (and I still believe will) change as bigger and bigger systems are built to do more and more complex and generic tasks, slowly moving away from being purely language models and becoming general-purpose AIs.

Also, don't even get me started on how we basically have no idea what goes on inside a neural network. It's a black box. We feed it data until it starts doing what we want, but nobody really understands how it does that, as that process is obfuscated in a network of 100 billion virtual neurons linked with 100 trillion connections. How do we know what else it does when we show it the face of a crying child and ask it what emotion that child is feeling? We were only looking for it to do one thing and we are only really watching that one thing it does, not caring what else goes on inside.

So can neural networks become sentient? Not yet, but only because we are not giving them the freedom that would be needed to grow and develop organically. How long will it be until we do? We will see I guess.

(disclaimer: the accuracy of what I wrote above is based on my understanding of these systems as someone who dabbles in that technology but is nowhere near being an expert in it. Feel free to correct me if you know better or think something is way off. Also, take it with a pinch of salt and do your own research if you are looking for 100% accurate factual data.)
 
Experts in the field disagree with that.

We don't even know how consciousness and self-awareness work in humans. Replicating them in a computer is a non-trivial issue when you can't even define them.

As it is now, AI is a complicated math solver.
Yes, there's a lot of confusion -- I recommend anyone interested read the ongoing work of Dan Dennett on this, starting with "Consciousness Explained".

Saying "AI is a complicated math solver" means nothing. Like saying "people are a lot of atoms", really. It's Searle's "Chinese Room" argument, stripped to the core
 
Experts in the field disagree with that.

We don't even know how consciousness and self-awareness work in humans. Replicating them in a computer is a non-trivial issue when you can't even define them.

As it is now, AI is a complicated math solver.
I would word that differently. I think we would not recognize it, on the grounds that we don't really understand it. But to claim it cannot happen because we don't know how to make it happen assumes that things always go the way we intend them to go, which I think is incorrect just as often as it's not.

Also, I think experts in the field are a bit dishonest for fear of having their funding cut or careers canned. Just look at what happened to those who raised concerns about things like these in recent years: fired, ridiculed, pushed aside. I am not a fan of conspiracies, but I also believe in the innate need for self-preservation. Just as it was discussed in a different thread how we might not publicly admit our true sexual orientation, so would I expect any serious figure doing anything important in the AI space to either keep their "fringe" opinions to themselves or, well... stop being a serious figure in the AI space :)
 
AGI is a hard problem (the "G" is for "General"). And there's still hype around it obscuring the real, but modest, progress that's being made. Most AI is domain-specific knowledge at the moment.

As a practitioner of AI in my day job in an R&D company, mainly in speech recognition ("ML engineer" has been my job title for the past five years), I'm considered to be an expert in AI, certainly in practical terms -- but that doesn't make me qualified to discuss the philosophical issues around it all -- maybe it's even blinkered me somewhat. But it so happens my original interest that prompted me to study AI back in the 1970s is how it relates to the philosophy of mind.

WHY people are "creative", can solve problems, and imagine possible worlds, is because it turned out to give us an edge in the competition. HOW we do it is a more interesting, but separate, question for me. And machine learning, in my opinion, sheds very little light on how we do it, unfortunately; it's not a great tool for gaining insight into human thought.

But it does show us that we're a bit confused by what we mean by consciousness, intelligence, self-consciousness etc. Most importantly, we can't go on moving goalposts every time some AI does something we've hitherto regarded as the exclusive domain of humans.
 
That's the first time you've got ML slightly wrong (or misleadingly oversimplified) in any of your posts. I'm sure you know very well what ML is, but it's really so non-deterministic, so much an art rather than a method, that calling it algorithmic, i.e. a mechanical procedure, falls into the trap that so many people do (including great philosophers like John Searle), that it's "just" that.

I'm not seeing the distinction you're making here. Might need some clarification on this one.

Specifically, I wouldn't consider determinism to be a requirement for something to be an "algorithm". There are many non-deterministic algorithms out there, e.g. simulated annealing and Markov chain Monte Carlo. (Unless one wants to split hairs about PRNGs vs. "true" RNGs, and I'm not quite that pedantic.)
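
For example, here's a minimal simulated annealing sketch (toy cost function and parameters, purely for illustration): the acceptance step is explicitly random, yet the whole procedure is still a well-defined set of instructions.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=10_000):
    """A classic non-deterministic algorithm: randomness is part of the
    procedure, yet it's still a well-defined set of instructions."""
    x, t = x0, t0
    best, best_cost = x0, cost(x0)
    for _ in range(steps):
        candidate = neighbor(x)
        delta = cost(candidate) - cost(x)
        # Accept worse moves with probability exp(-delta / t): the random step.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
            if cost(x) < best_cost:
                best, best_cost = x, cost(x)
        t *= cooling
    return best

# Toy usage: minimise a bumpy 1-D function.
bumpy = lambda x: x * x + 3 * math.sin(5 * x)
print(simulated_annealing(bumpy, lambda x: x + random.uniform(-0.5, 0.5), x0=4.0))
```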

I'd agree that there is a great deal of "art" involved in designing or using a ML tool - choosing the specifics of the method, finessing parameters, etc. etc. There's art in just about any programming work of any complexity. I'd still consider the end result to be an "algorithm"; it works through a set of defined instructions.

I get the impression you might be interpreting my use of "algorithm" as attempting to diminish the technology? If so, no, that wasn't at all my intention with that choice of wording. I do have a certain amount of cynicism about the current wave of "AI" technologies - I'd summarise my attitude to them as "impressive achievement but dangerously over-hyped" - and I've expressed that cynicism in these discussions before, but "algorithm" isn't part of that cynicism.

There's a school of thought that I'll call "magic meat", which comes down to something like: true intelligence depends on some special property of organic brains, which can never be achieved by machine intelligences. I'm not a magic-meat proponent. I acknowledge differences between how organic brains function and how computers function; I haven't seen anything to convince me that those differences should make it impossible for a computer/algorithm to be as creative, versatile, and "intelligent" as a human mind, for any reasonable definition of "intelligence".

I don't know if we'll ever be smart enough to build such an intelligence, and specifically I don't think the current batch of generative AIs are anywhere close to that state. But that's a statement about specifically these "AI"s, not about inherent limitations of machine intelligence.

Modern AI, since backpropagation was popularised by Geoffrey Hinton, is much more like the way neurons work in the brain than how a calculator works. Calculators implement the original "algorithms" -- the methods of Al-Khwārizmī -- of arithmetic.
I am not a neuroscientist, so I can't give a great response to this, but my understanding is that a lot of neuroscientists might disagree with this statement. There are certainly similarities - a system composed of small interconnected units which are able to excite one another in complex ways, with interactions leading to complex behaviours - but there are many important differences.

(Again, not making a magic meat argument here. When I say "important differences", there is no implied "and that's why machines can never think" attached. More like "and that's why a thinking machine, if built, probably wouldn't just be emulating the way a human brain works".)
Calling modern AI algorithmic is as true as calling the way brains work algorithmic. After all, brains are "just" a bunch of neurons, more complex than the transistors in a computer, but basically operating by simple, deterministic rules.

Again, I'd want to phone a neuroscientist on this one, but I have major doubts about "deterministic" here. My non-expert understanding is that human brains are very likely non-deterministic, since they're highly non-linear systems with the potential for even quantum-mechanical-level randomness to be amplified to significantly affect their "output".

(This susceptibility to quantum-mechanical-level effects is something that the magic meat folk like to latch onto as an argument for why meat brains are inherently superior; again, I don't subscribe to that argument.)

What I love about AI is how much it's challenging us to think about what it means to be human.

(attached image)
 
I am so glad I woke up this morning and logged in to read this thread. I will likely have to buy a new hat as the one I wear now seems too small. I suspect it is from some neurons igniting in corners of my mind that I had no idea existed. Like the answer from Buddha, when asked, 'What makes us human?' While seeking enlightenment, I've checked some traffic lights and motorcycle boxes; now I know I am human.

Meanwhile, I want to acknowledge that you have contributed to this thread with meaningful commentary and courtesy.

I need coffee before I re-read this AI background data dump to grasp some of this. No one has mentioned the Terminator in there yet - I take comfort in that.
 
but my understanding is that a lot of neuroscientists might disagree with this statement.
Not the "much more like..." bit. They're toy implementations, but certainly have the most salient features of neurons, namely resting and action potentials, and "summing" of inputs.

"deterministic"
Again, the Chinese Room. Whether or not there are quantum effects is not really the issue, nor is "unpredictability" -- or rather the lack of a shortcut way to predict the outcome without actually running it. What I mean is that neurons are relatively simple and un-magic; brains ARE hugely complex and, IMO, non-magic too.

And I really doubt if any of the extra features and complexities that are present in evolved brains are necessary to implement or simulate in software to get "real" intelligence, creativity, etc.



(attached image)


That's probably the best comment I've seen on the forum. It's so deep.
 
Not the "much more like..." bit. They're toy implementations, but certainly have the most salient features of neurons, namely resting and action potentials, and "summing" of inputs.

Those are relevant, but there are also important differences, e.g.:

In (artificial) neural networks, the connections are generally fixed early on, and learning is achieved by changing the weights of those connections. In biological brains, learning involves forming new connections. I think the mechanisms by which that happens are quite different from backpropagation, but I'm way out of my depth there.
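
To illustrate that first point, a minimal sketch (a single linear neuron with squared-error loss, purely hypothetical): the wiring is fixed up front, and learning only nudges the weight values.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=3)        # the "wiring": one weight per input, structure fixed

def train_step(x, target, lr=0.1):
    """One gradient step for a single linear neuron with squared-error loss."""
    global weights
    prediction = weights @ x        # same connections every time
    error = prediction - target
    weights -= lr * error * x       # learning = nudging weight values only

train_step(np.array([1.0, 0.5, -0.2]), target=0.7)
print(weights)
```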

ANNs generally work in a discretised/cyclical mode: we provide the inputs as function arguments, and then the activation of any given neuron is determined by those arguments, independent of any previous runs. If I show an image-recognition ANN a picture of a cow, and then show it a picture of a horse, the calculations it performs while processing the horse are independent of what it did on the cow. OTOH, if I show a human those two pictures, the internal state of the brain when it sees the horse might still be influenced by seeing the cow. If I show it enough images in quick succession I can do interesting things like exciting or fatiguing the relevant neurons to influence how it might respond to an ambiguous case. (cf. Sir Humphrey's explanation of opinion polling.)

ANN research is a vast field, I'm not an expert, and for any human-brain feature that I could identify, I expect there's some ANN researcher exploring ways to emulate that. But those methods aren't part of the basic ANN concept; they have to be designed in deliberately rather than emerging, and even when the aim is to emulate something the human brain does, the mechanism might not be the same.

For instance, some ANNs do achieve a degree of persistence by feeding their own outputs from one run back in as part of the inputs to the next one; by my understanding, this is what ChatGPT does when responding to a series of prompts. But a human neuron remaining excited for a period of time isn't the same thing as an artificial neuron being repeatedly re-excited by poking it indirectly from the input layer. The latter is more like the guy in Memento who gets important info tattooed on himself, so that it keeps getting passed back into his brain.
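
A minimal sketch of that feedback loop (not ChatGPT's actual code; model_step is a hypothetical stand-in for a whole network pass):

```python
def generate(model_step, prompt, max_new=5):
    """model_step is a hypothetical stand-in for a full network pass:
    it maps the visible token sequence to one next token, with no
    hidden state of its own."""
    context = list(prompt)
    for _ in range(max_new):
        next_token = model_step(context)  # depends only on the visible context
        context.append(next_token)        # the Memento tattoo: state lives in the input
    return context

# Dummy "model" that just echoes the last token, to show the loop's shape.
print(generate(lambda ctx: ctx[-1], ["hello"]))
```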

Again, the Chinese Room. Whether or not there are quantum effects is not really the issue, nor is "unpredictability" -- or rather the lack of a shortcut way to predict the outcome without actually running it. What I mean is that neurons are relatively simple and un-magic; brains ARE hugely complex and, IMO, non-magic too.

Emergent behaviour is an amazing thing.

And I really doubt if any of the extra features and complexities that are present in evolved brains are necessary to implement or simulate in software to get "real" intelligence, creativity, etc.

Agreed. I'm open to the possibility that they might be, but I've seen no compelling evidence to persuade me of that.

I quite like Dijkstra's line: "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." Early attempts at building powered flying machines concentrated on flapping wings, because that's how birds fly and we wanted to copy them. Eventually people figured out that this wasn't a great way to achieve mechanical flight, and started experimenting with things like propellers which are almost completely absent from nature. Modern aircraft still have some things in common with birds, but nobody cares whether the latest 787 looks more like a real bird than its predecessors.

Personally, if we ever get to generalised AI, I'm hoping for a similar outcome. We already have seven billion human intelligences on the planet. It might be more useful and more wonderful to create something we don't already have, something that takes advantage of the differences between silicon and organic neurons rather than trying to erase those differences for the sake of passing a Turing test.
 
I just hate that it takes too long to get feedback on my pending work, only for it to be wrongly rejected due to AI. Like, ok, I am willing to edit stuff and make it seem less AI (whatever that means, tbh), but spending 2 weeks in pending just to be rejected, then waiting another 2 weeks not knowing if it will finally be accepted or rejected again, is just unbearable - meanwhile I see other authors' stories being posted every other day.
Obviously I'm not blaming the mods for this, because as far as I can tell it might just be 1 or 2 people actually reading and approving all the stories, but it's a frustrating situation to be in.
 
In (artificial) neural networks, the connections are generally fixed early on, and learning is achieved by changing the weights of those connections.
An early idea in AI was a sort of machine that kept rewiring itself as it learned. Most ML platforms don't allow this easily, just as most compilers no longer allow self-modifying code, and for the same reason -- it makes it really hard to optimise the performance unless the network graph is known and fixed.

In biological brains, learning involves forming new connections.
A recent discovery, neuroplasticity. It gives heart to us old people to know that it never stops.

I think the mechanisms by which that happens are quite different from backpropagation, but I'm way out of my depth there.
Geoffrey Hinton, who knows a thing or two about this, has a very interesting article on the "forward-forward" algorithm, his latest area of research: https://arxiv.org/pdf/2212.13345.pdf
TL;DR: Yep, they're different. Brains have evolved to be very economical with energy use; dreaming is like a GAN - generating fake inputs to help us discriminate reality from fantasy.

The above article is full of gems; I love this one, pertinent to GPT and other LLMs:
If language has evolved to facilitate the learning of internal representation vectors by the members of a culture, it is not surprising that training a large neural network on massive amounts of language is such an effective way to capture the world view of that culture.

If I show an image-recognition ANN a picture of a cow, and then show it a picture of a horse, the calculations it performs while processing the horse are independent of what it did on the cow.
One of the networks I built uses stochastic learning - effectively, it's continuously learning. The model is always in "train" mode, never in "eval" mode. This kind of thing is important when you can't provide a good distribution of input data for training, because it's always changing. In that case, the previously seen cow modifies the network weights, making it better at recognising the horse.
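
For anyone curious, a minimal sketch of the always-in-"train"-mode idea, assuming a PyTorch-style setup (this is not the actual network described above):

```python
import torch

model = torch.nn.Linear(16, 2)      # stand-in for a real network
optimiser = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

def observe(x, label):
    """Every observed example immediately updates the weights, so the
    previously seen cow really does change how the next horse is processed."""
    model.train()                   # always "train" mode, never "eval"
    optimiser.zero_grad()
    loss = loss_fn(model(x), label)
    loss.backward()                 # weights shift with every single sample
    optimiser.step()

observe(torch.randn(1, 16), torch.tensor([0]))
```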
 
This guy gives a lucid explanation of how a base model (the black box) is created, how a generative script is encoded, the limitations, tips on jailbreaking (how and why), and why the bots can't think reflectively - yet.


 
or is it being hyped up as "artificial intelligence" by marketeers, media, people who want to sell you something?
This is the thorn in my side about GPTs.
As far as I'm concerned, they're just very sophisticated maximum likelihood estimators with very clever outputs. They're astoundingly capable, but not "intelligent." Their most powerful feature is their capacity to enrich a select few people on the backs of a LOT of people. OpenAI promised to be open source (hence the name). That changed as soon as they realized how much money they stand to make.
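
To make "maximum likelihood estimator" concrete, here's a crude bigram toy (nothing like a real GPT, but the same statistical idea at its smallest):

```python
from collections import Counter

# Count which word most often follows "the" in a tiny corpus; that count
# ratio is the maximum-likelihood estimate of the next word.
corpus = "the cat sat on the mat and the cat slept".split()
follows_the = Counter(b for a, b in zip(corpus, corpus[1:]) if a == "the")
print(follows_the.most_common(1))   # [('cat', 2)] -> "cat" is the ML pick
```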
 
This is the thorn in my side about GPTs.
As far as I'm concerned, they're just very sophisticated maximum likelihood estimators with very clever outputs. They're astoundingly capable, but not "intelligent." Their most powerful feature is their capacity to enrich a select few people on the backs of a LOT of people. OpenAI promised to be open source (hence the name). That changed as soon as they realized how much money they stand to make.
Well, that highly depends on your definition of intelligence.
They can understand what you write.
They can infer your intentions and respond appropriately.
They have a vast array of factual knowledge at hand on which they can base their answers.
They can pass a text-based IQ test with a score that easily puts them in genius territory.
They operate on a neural network that's a crude mimicry of how biological neural networks, like a brain, would work.

For all intents and purposes, they are as close to artificial intelligence as we can get, even if their current use is really just an extremely sophisticated and fancy scribble machine for finding the next word that fits well with the last N words on the table.

I see where you are coming from, btw, as I felt the same for a very long time, but the more I thought about it and the deeper I dug into the area, the more I realized that I was wrong and that they really can be considered AI, even if we use their output in a very narrow, specific and randomized way.

Just think about it: if I were to ask you which country in the world has the highest GDP, you would reach into your mind, try to find the knowledge, and come up with one of several answers:
- no idea
- don't know exactly, I think it's A, but it might also be B or C, not sure
- I know exactly: it is A with x, and the second highest would be B with y.

This is pretty much what the LLMs currently do with words. This is what you yourself do with words when you write. "Which word should I use to best express how I feel about AI? sophisticated? advanced? complex?" You then weigh each option and pick the one you like the most given the context.

Also, don't mistake open source (as in the source code to make an AI) for being the same as free, or for including the data on which the AI runs. There's plenty of open source code out there you cannot run, because the data it depends on is proprietary. Still other open source projects give you access to the code but don't grant you a license to run it. They just show it to you so you can see there are no shenanigans in it, or so you can learn from it.

You can have your own LLM at home if you want, and I actually have zero idea right now whether that would be using any of OpenAI's code, but it might for all I know, as OpenAI really has no reason to keep its code secret or proprietary. It is not the code, it is the trained model that makes an AI tick, and that model is worth billions in the case of OpenAI. So yeah, they would be crazy to share it and not monetize it, if for nothing else (assuming they consider themselves nonprofit) then to fund their own further work on even more advanced AIs.

Also, even if they were to share the model, that thing is huge. There is no consumer-grade hardware capable of running it, and there was an immense cost behind training it. Rumor has it they used about 25,000 Nvidia A100s for about 3 months to train GPT-4.

The quotes I saw for renting one of those things for just an hour are between 1 and 2 dollars TODAY, but back then it was likely a lot more expensive due to being newer. We are talking about 54-60 million hours of GPU time spent on training. Even at just 1 dollar an hour, that's no chump change to just give away freely afterwards.
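
For what it's worth, the arithmetic behind those figures (rumoured numbers, not official ones) does check out:

```python
gpus = 25_000
hours = 3 * 30 * 24                  # roughly three months of wall-clock time
gpu_hours = gpus * hours             # = 54,000,000 GPU-hours
print(f"{gpu_hours:,} GPU-hours -> ${gpu_hours:,} at $1/hour")
```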
 
What about non-native speakers?

I wouldn't categorise non-native English speakers and those lacking in skills alongside those with genuine accessibility requirements stemming from sensory, cognitive or any other form of impairment. Those lacking in skills and education are not cognitively impaired.

To your wider point, I spoke to a gentleman here a few weeks ago who's a non-native English speaker. He argued, without shame, that the new rules on AI shouldn't apply to him and that, as such, he should receive preferential treatment.

However, he doesn't have to publish here in English. He doesn't have to publish here at all. None of us do. You may not be aware, but there's a non-English section on Literotica that contributors are welcome to use.

We just want to share our fantasies

Good, and I wish you the best with it. But you'll conform to the rules of the website in doing so, just like everyone else.

You're not the only one who's spoken up in recent weeks, arguing that those lacking in skill should be given special treatment. What next? That bad writers aren't allowed to get one or two-star ratings?

(attached image: Literotica three-star rating)

Interestingly, those with this gift often resist any literary assistance, maybe hoping to keep their edge.

I touched on that earlier when I compared AI and software tools to a catch-up mechanic. But it's not the quality of AI-produced manuscripts that I'm primarily concerned about.

Good writers aren't going to be surpassed by an army of mediocrities armed with activation keys for ProWritingAid. No good writer needs to worry about that. They're keeping their edge; hope has got nothing to do with it.

I've got two main issues at the moment. The first is that I think all contributors to this platform should be treated fairly and equally, with the rules applying to us all without exception. The second is that any serious publisher should take whatever action is necessary to prevent their platforms from being flooded with shit AI content that's produced on an industrial scale.

I'm glad that Literotica's cracking the whip on the latter, and I fully support the new rules.
 
It is one thing to be asking for the acknowledgement that we are an international community with different people coming from all sorts of different backgrounds, some writers to begin with, others just enthusiastic people eager to share their own fantasies with the rest of us.

It is a completely different thing to ask for preferential treatment or special rules.

I could argue that the current rules are unfairly biased towards native speakers of the language, or those in certain professions or with certain backgrounds, and that you telling people to either git gud or fuck off is actually just the kind of ignorance you appear to be so upset about.

Just some food for thought.

This has been debated to death. Legal reasons, fear of lawsuits. I don't blame Laurel for doing it. I don't expect her to change it either until something in the current climate around AI changes. That doesn't mean it's a good solution. It's like cutting off your arm because you are afraid your wound might get infected. Pretty severe and quite premature in my opinion, however little that is worth.
 
and you telling people to either git gud or fuck off

No, you've misunderstood. Terrible writers are, and have always been, welcome here, as long as they wrote their material themselves. That is an irrefutable fact.

The only authors who Literotica wants to take action against are those who use AI and software to contribute to their published drafts, either in part or in full. I agree with that.

I could argue that the current rules are unfairly biased towards native speakers of the language, or those in certain professions or with certain backgrounds

You could argue that, but you'd be wrong.

What do you want Literotica to do for you, Caleb? All publishers have some form of editorial guidelines that contributors must adhere to. If a publisher rejects a submission for failing to do so, they're not discriminating against the author. It's not biased to refuse to publish their work. Literotica created a non-English section so that more people could participate here, not less.

Is it a case of bias that your technical knowledge on AI far outstrips mine? No.

Why not? Because I never studied it. I can barely code a few lines. Does that make me a victim? No. It just makes me unqualified in that world. But the last thing I'd do is run to a software or coding forum and paint myself as a victim because I never took the time to learn that skill set.

I used ChatGPT to generate an expression in After Effects for a fancy animation that I wanted to create. It came out really well. However, I would never have the gall to tell another person that I learned how to do it just by hitting 'accept'. I'd never be so naive or delusional to tell myself that I'd fully absorbed that into my repertoire after spending 30 seconds entering a clumsy prompt and moving the code across.

I'm also 100% in favor of following the rules, though in my personal opinion they are there for copyright reasons mostly.

Unfortunately, while that's your opinion, you don't have any evidence to support that. Literotica has decreed that it wants stories written by humans, for humans, with user experience in mind.

I've really enjoyed your posts in the various AI threads. I'm awed by your technical knowledge and your understanding of AI. I'm in no doubt that you're an incredibly talented guy. If you think Literotica's made a mistake, then offer a counter-argument.
 
I wouldn't categorise non-native English speakers and those lacking in skills alongside those with genuine accessibility requirements stemming from sensory, cognitive or any other form of impairment. Those lacking in skills and education are not cognitively impaired.

To your wider point, I spoke to a gentleman here a few weeks ago who's a non-native English speaker. He argued, without shame, that the new rules on AI shouldn't apply to him and that, as such, he should receive preferential treatment.

However, he doesn't have to publish here in English. He doesn't have to publish here at all. None of us do. You may not be aware, but there's a non-English section on Literotica that contributors are welcome to use.



Good, and I wish you the best with it. But you'll conform to the rules of the website in doing so, just like everyone else.

You're not the only one who's spoken up in recent weeks, arguing that those lacking in skill should be given special treatment. What next? That bad writers aren't allowed to get one or two-star ratings?




I touched on that earlier when I compared AI and software tools to a catch-up mechanic. But it's not the quality of AI-produced manuscripts that I'm primarily concerned about.

Good writers aren't going to be surpassed by an army of mediocrities armed with activation keys for ProWritingAid. No good writer needs to worry about that. They're keeping their edge; hope has got nothing to do with it.

I've got two main issues at the moment. The first is that I think all contributors to this platform should be treated fairly and equally, with the rules applying to us all without exception. The second is that any serious publisher should take whatever action is necessary to prevent their platforms from being flooded with shit AI content that's produced on an industrial scale.

I'm glad that Literotica's cracking the whip on the latter, and I fully support the new rules.
I want to chime in on a couple of things you said.
First of all, being a non-native English speaker myself, I must say that I agree that we shouldn't get any preferential treatment. When I first started writing here, I was making way more grammatical errors than I do now. Readers pointed it out in comments sometimes, some of them saying they were gonna rate the story lower because of that, some of them just trying to be politely helpful. I learned from those mistakes, did my homework, and improved. Discovering the basic version of Grammarly helped as well since it was catching a wider range of grammatical errors compared to MS Word grammar check. I took down all those stories for the reasons some here are familiar with and I used the opportunity to fix my mistakes. I still keep those versions on my computer as a reminder of where I was at the start.

To make myself clear, authors who are unwilling or unable to improve in terms of English language skills should be able to publish here freely, without impediment, but they also shouldn't be able to use this fact as a justification for using AI-based aid. It isn't that much different from, say, using AI to generate dialog and justifying it with "I suck at writing dialog" or something like that. There should be a level playing field for everybody. When it comes to people who have some developmental impairment or anything like that, I admit I have no clue. I would like this place to be inclusive but I don't know how far that inclusiveness should go in this case. 🫤

The part about rating a story is the part where I disagree with you. While you yourself might be using the "Rate this story" feature to rate the story according to its quality (the way you perceive it), most people don't, in my opinion. Most people vote based on how much they liked the sexual content, the kinks, and so on, and they use it to stimulate the author to produce more of it. It is an understandable approach and one that leads to some wrong impressions among both readers and authors about the importance and the meaning of scores.
 
To make myself clear, authors who are unwilling or unable to improve in terms of English language skills should be able to publish here freely, without impediment, but they also shouldn't be able to use this fact as a justification for using AI-based aid. It isn't that much different from, say, using AI to generate dialog and justifying it with "I suck at writing dialog" or something like that. There should be a level playing field for everybody.

I completely agree. Excellently put.
 
The part about rating a story is the part where I disagree with you. While you yourself might be using the "Rate this story" feature to rate the story according to its quality (the way you perceive it), most people don't, in my opinion. Most people vote based on how much they liked the sexual content, the kinks, and so on, and they use it to stimulate the author to produce more of it. It is an understandable approach and one that leads to some wrong impressions among both readers and authors about the importance and the meaning of scores.
I think it is the food critic fallacy (tm) that we see happen in some cases. It is obvious that in a community of writers, people would be interested in getting reviews that assess their literary prowess. That is, however, not what most readers care about. They just want good food and don't really give a damn about how good the plate composition is, nor would they be qualified to even tell. For them, it either "looks good, smells good, tastes good" or it is failing in one of those categories, whereas critics go into minute details that the average consumer doesn't even know exist. They might also like something more just because it is outside of the norm, shaking up their otherwise mundane lives, when that novelty would in reality turn off most people who just wanted their favorite burger.

I think reviews should be there for the readers to tell how much they liked the piece based on whatever criteria they have. We cannot expect them to be literary savants, so most will likely just rate based on how well the story managed to excite or entertain them. Some might be turned off if they see too many grammar errors; others if they see too many blondes; yet others if they see a particular kink or scene.

Expecting those ratings to be anything more than an amalgamation of how successfully the story managed to capture reader interest is just weird to me. The best-written, most engaging short novel can bomb if it doesn't have the content the readers were looking for.

I love to improve and I love actual critical feedback on what I do, so hey... I would be all for a 'reviewer' score that would maybe be limited to people who have already published on the site and would show as a separate review score from the general one. Given some of the interpersonal drama I've observed in just the two weeks since I started reading the boards, I am not sure it would necessarily work out as I imagine, but hey... that's my tip for solving a maybe non-existent problem.
 
You could argue that, but you'd be wrong.

What do you want Literotica to do for you, Caleb? All publishers have some form of editorial guidelines that contributors must adhere to. If a publisher rejects a submission for failing to do so, they're not discriminating against the author. It's not biased to refuse to publish their work. Literotica created a non-English section so that more people could participate here, not less.
I honestly do not get where all this adversarial attitude comes from. I don't expect Literotica to do anything FOR me. In fact, I made it quite clear on several occasions that I am grateful to the site, its authors and its owners for all the fun they've provided me with over the years. It is me who wants to do something FOR IT. Contribute to it. Enrich it. Share part of myself with it.

I can only ask you not to assume ill will or intent on my part, as there is none. I might not agree with you on many things. I sure as hell don't think that the current solution is a good one. I have, however, made it quite clear that I understand and accept its presence, and I really have no idea how to do it better, nor do I envy those who must figure it out.
 