Story incorrectly rejected due to AI

Wordsmiths, like skilled bricklayers, excel at the meticulous task of arranging words in a line. However, it's just one skill among many that storytellers need. Until now, numerous brilliant, imaginative minds have refrained from writing due to insecurities about their language prowess.

They could easily create new worlds in their minds, envision captivating characters and scenes. They could make you feel and react, even stir the desire to leap into the screen and confront the antagonist. They could make you touch yourself... yet there's one crucial element they've missed: knowing how to lay down the bricks.

Luckily, this problem is now resolved with the advent of new tools.
Beautiful, I would love to like the post, just for these three wonderfully picturesque paragraphs. Yet I can't.

We don't need your permission to publish here. We aim to be read, and English provides the biggest stage, so we will use it.

This is a new era, and in case you haven't noticed, the battle has already been decided---you've lost. And without reliable detectors, you can't do shit about it. The only thing you're accomplishing is driving talented and established writers away in this senseless witch hunt.

We're in a new era, and all you can do is hail the new conquerors, the new rulers, the new gods. Hail the ascendancy of Mr. T.
If there is truly a battle here, if there will be winners and losers, then we've already all lost. Yes, there will be people who will be left behind. That's unavoidable. We are on the precipice of a huge change, and there will be people who will not be able to make it all the way. Just as there are people who refuse to move out of a home that is about to be destroyed by a natural disaster and would rather die than have their lives upended.

It is however not a win for any of us if people are left behind. It is a goddamn tragedy and that does not change regardless of how many people get new opportunities.

I am excited for the future, but extremely terrified as well, specifically because I know how hard it will be to educate people and make them move. Because I know how many we will likely lose in the process. Because I believe that just one would be a tragedy and it pains me to think of how much more than one it will be.
 
Well, that highly depends on your definition of intelligence.
They can understand what you write.

They don't even understand what they write.


[seven screenshots of ChatGPT conversations snipped]

...and so on and so on.

Or this:



[screenshot snipped]
[paragraphs of inspirational disability clichés mercifully snipped]
[two more screenshots snipped]
[more disability inspirational clichés snipped]

GPT is able to feign understanding because it's seen a lot of texts written by humans who did understand what they were talking about, and it can learn relationships of the form "hand is to foot as finger is to toe". So if it's seen "each hand has five fingers", it can come up with "each foot has five toes".
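The "relationships of the form" point is essentially how word embeddings behave: analogies like "hand : foot :: finger : toe" fall out of simple vector arithmetic. A toy sketch of that mechanism, with hand-crafted 2-D vectors purely for illustration (real models learn hundreds of dimensions from co-occurrence statistics in text, not from anyone writing them down):

```python
from math import sqrt

# Hand-crafted toy "embeddings" -- purely illustrative, NOT real model weights.
# axis 0: hand-side (+1) vs foot-side (-1); axis 1: whole part (+1) vs digit (-1)
vecs = {
    "hand":   ( 1.0,  1.0),
    "foot":   (-1.0,  1.0),
    "finger": ( 1.0, -1.0),
    "toe":    (-1.0, -1.0),
    "palm":   ( 1.0,  0.5),
    "sole":   (-1.0,  0.5),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)) + 1e-9)

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' via the vector b - a + c."""
    target = tuple(vecs[b][i] - vecs[a][i] + vecs[c][i] for i in range(2))
    candidates = [w for w in vecs if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(vecs[w], target))

print(analogy("hand", "foot", "finger"))  # toe
```

Nothing here "knows" what a toe is; the right answer emerges from the geometry of how the words relate to each other, which is exactly why the trick works without understanding.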

And if it's read stories about people with no hands learning to play the piano with their toes, it can remix elements from those stories about a man with no feet learning to play the piano with his fingers. But since it doesn't have any concept of what these words actually mean, it doesn't understand that "learned to manipulate the keys with his hands" is not a particularly unique achievement. Similarly, in the last example it seems to be trying to remix content about people tying laces with their toes, again without understanding the scenario it's describing. ("The other foot", you say...)

They can infer your intentions and respond appropriately.
They have a vast array of factual knowledge at hand on which they can base their answers.

In general, no. They are trained on a vast array of text, which includes both factual and fictional content, but they don't memorise the whole of that text. They'll fill in the gaps with confabulation, and they don't know which parts are real and which they've made up.

Some versions are designed to plug into external "sources of truth" but those tend to be much more restricted than the training data sets.

They can pass a text-based IQ test with a score that easily puts them in genius territory.

This says more about IQ testing than about machine intelligence.
 
Should we start an image sharing contest? :)
You show me failures, I show you successes and whoever gets the most pictures wins?

I never claimed ChatGPT is infallible, and yes, it can infer intentions from the way you formulate your query. At least it can guess what you are trying to accomplish, and it uses that to provide more relevant answers. Does it get it right all the time? Well, people regularly manage to "jailbreak" GPT with prompts, so I'd say no.

Just for fun I did your test with GPT4 and I got correct replies every time. I won't spam screenshots, if you don't mind. I am happy to indulge you in a private chat if you are interested, or if you don't have access to GPT4 yourself but want to see if there is any meaningful difference.

edit: I figured out you can actually share the conversation as a link, so here you go - https://chat.openai.com/share/a579bc0a-6c1e-4477-acfe-f5a1d9fe9a4f
- The fact that it gets a bit confused towards the middle can be explained by one of the follow-up questions misleading it. Where it is asked about sisters, it correctly lists Pat as a sister. Then later, when it is asked why Pat is female, it looks back through the previous context, finds that she was referred to as a sister, and concludes she must therefore be female. When asked, it correctly identifies the proper clue in the first snippet and uses it consistently and correctly throughout the chat from the very start.
- As for the motives, just for fun I asked it about why the questions were asked and the answer is a pretty good illustration of what I meant.

Yes, IQ tests are dumb, which just shows that what we mean by intelligence is not an exact science. So something that is ARTIFICIAL and can, by certain definitions, be considered INTELLIGENT could just as well be called... wait for it... an ARTIFICIAL INTELLIGENCE :)
 
You seem to be grappling with some comprehension issues (which might explain your alleged popularity on the other side). When I mentioned those gifted in handling language, I didn't mean to suggest that others lack literary skills.

In most cases, exceptional talent in one field tends to come at the expense of another. Unless your name is Shakespeare, the likelihood of having the entire toolkit is biologically implausible.

Wordsmiths, like skilled bricklayers, excel at the meticulous task of arranging words in a line. However, it's just one skill among many that storytellers need. Until now, numerous brilliant, imaginative minds have refrained from writing due to insecurities about their language prowess.

They could easily create new worlds in their minds, envision captivating characters and scenes. They could make you feel and react, even stir the desire to leap into the screen and confront the antagonist. They could make you touch yourself... yet there's one crucial element they've missed: knowing how to lay down the bricks.

Luckily, this problem is now resolved with the advent of new tools.


We don't need your permission to publish here. We aim to be read, and English provides the biggest stage, so we will use it.

This is a new era, and in case you haven't noticed, the battle has already been decided---you've lost. And without reliable detectors, you can't do shit about it. The only thing you're accomplishing is driving talented and established writers away in this senseless witch hunt.

We're in a new era, and all you can do is hail the new conquerors, the new rulers, the new gods. Hail the ascendancy of Mr. T.
Then, good-bye. Have a great future where you let machines do all the work for you and then pat yourselves on the back for it.

I don't want to be part of a society that holds up art made by machines alongside art made by humans. I will never cheapen humanistic achievement.
 
Then, good-bye. Have a great future where you let machines do all the work for you and then pat yourselves on the back for it.

I don't want to be part of a society that holds up art made by machines alongside art made by humans. I will never cheapen humanistic achievement.

Life is not always about what we want. I can only give you an example from my own profession: in the span of 50-odd years, we went from people punching holes into cards and feeding those cards into machines, to someone dragging and dropping a few boxes on the screen, connecting a few dots, and ending up with a "program" that the "early" programmer had to work for weeks to create.

It's called progress. Yet, there are those in my profession, who still consider typing every line by hand the only TRUE way to really do programming.
(the ones who were punching cards have already retired :p, but I am sure there would also be holdouts for that, if they were still around)

I think it is natural and while sad, it's unavoidable. We hate to change and things like "AI" force us to change. 25 years ago, I was doing a lot more "low level"/detail work, than I have to do today. Today, I can build a complex solution in a week, because we have productivity tools that allow me to do so. Is that bad? Overall? No. In some aspects? Yes.
Who knows, in 10 years time, maybe all I will have to do is ask the AI for the solution and verify it is doing what I asked for.

(It is surprisingly hard to ask the right questions or to make the right wish, as anyone who has ever dealt with the "speak with dead" or the "wish" spells will undoubtedly know :))

Anyway, things were more unique, more artistic back then. Every piece of software was a unique creation. Today it is more uniform, has less 'style'. That's not to say there is no room for true artistry in my profession any more, but it has been relegated to a niche: to the creators who come up with the truly innovative and creative solutions, whereas the rest of us just create cheap throwaway products for the masses, for that is what the masses demand.

I use this analogy because - while not necessarily obvious to the outsider - writing a computer program is pretty much a creative process, and we in the profession can in fact shed tears over particularly beautiful pieces of code. It's just a different kind of aesthetic that not everyone is attuned to.

I believe the same will happen to art as well and no, it will not make human artistry obsolete. In fact, as I implied in my opening post in a different topic, losing the human touch in art might cause us to crave it and will undoubtedly create a niche where true human art will still be present. Even if the "output" might not be easily discernible between human and machine, just knowing that it was made by a human will likely enhance the experience.
 
Life is not always about what we want. I can only give you an example from my own profession: in the span of 50-odd years, we went from people punching holes into cards and feeding those cards into machines, to someone dragging and dropping a few boxes on the screen, connecting a few dots, and ending up with a "program" that the "early" programmer had to work for weeks to create.

It's called progress. Yet, there are those in my profession, who still consider typing every line by hand the only TRUE way to really do programming.
(the ones who were punching cards have already retired :p, but I am sure there would also be holdouts for that, if they were still around)

I think it is natural and while sad, it's unavoidable. We hate to change and things like "AI" force us to change. 25 years ago, I was doing a lot more "low level"/detail work, than I have to do today. Today, I can build a complex solution in a week, because we have productivity tools that allow me to do so. Is that bad? Overall? No. In some aspects? Yes.
Who knows, in 10 years time, maybe all I will have to do is ask the AI for the solution and verify it is doing what I asked for.

(It is surprisingly hard to ask the right questions or to make the right wish, as anyone who has ever dealt with the "speak with dead" or the "wish" spells will undoubtedly know :))

Anyway, things were more unique, more artistic back then. Every piece of software was a unique creation. Today it is more uniform, has less 'style'. That's not to say there is no room for true artistry in my profession any more, but it has been relegated to a niche: to the creators who come up with the truly innovative and creative solutions, whereas the rest of us just create cheap throwaway products for the masses, for that is what the masses demand.

I use this analogy because - while not necessarily obvious to the outsider - writing a computer program is pretty much a creative process, and we in the profession can in fact shed tears over particularly beautiful pieces of code. It's just a different kind of aesthetic that not everyone is attuned to.

I believe the same will happen to art as well and no, it will not make human artistry obsolete. In fact, as I implied in my opening post in a different topic, losing the human touch in art might cause us to crave it and will undoubtedly create a niche where true human art will still be present. Even if the "output" might not be easily discernible between human and machine, just knowing that it was made by a human will likely enhance the experience.
This is very sweet and I like it very much. I hadn't thought about it that way. Thank you.
 
They could easily create new worlds in their minds, envision captivating characters and scenes. They could make you feel and react, even stir the desire to leap into the screen and confront the antagonist. They could make you touch yourself... yet there's one crucial element they've missed: knowing how to lay down the bricks.

Luckily, this problem is now resolved with the advent of new tools.

The problem isn't resolved because it's not the writer's own work.

What about non-native speakers? Does your broad prohibition also apply to them? And what about dyslexics or those without formal education who just want to present their stories to the readers in a decent way?

We don't need your permission to publish here. We aim to be read, and English provides the biggest stage, so we will use it.

Agreed, you don't need my permission, but you need Literotica's permission to do so. Seeing as you've failed to grasp the central point of all these threads, I'll be blunt.

You don't have Literotica's permission to publish here if you're using AI or software to contribute to your published drafts.

I'd also note, your remark about the biggest stage is crass when you consider the context of your original objection. You positioned it as an issue rooted in accessibility, fairness and bias, while carelessly invoking those with actual impairments. Of course, I knew what you really intended, but it's gratifying to see you concede that you just want access to the biggest stage, no matter if you use AI to do so.

I don't speak German. I don't speak Spanish. I don't speak French. However, I'm not making baseless claims that the website is biased against me because I'm not allowed to use AI to write entire stories in those languages. I don't have the skills to write there.

And without reliable detectors, you can't do shit about it. The only thing you're accomplishing is driving talented and established writers away in this senseless witch hunt.

We're in a new era, and all you can do is hail the new conquerors, the new rulers, the new gods. Hail the ascendancy of Mr. T.

I don't work for Literotica and I don't enforce their rules for them. The likes of MourningWarbler and BenETrate were given the opportunity to resubmit their work, but they instead chose to leave of their own accord.

There may well be new conquerors, rulers and gods coming into this space, I'm enormously excited to meet them. In the meantime, let me tell you who they won't be.

Lazy authors, clutching onto activation keys for ProWritingAid, aren't rulers or gods. Meanwhile, non-native English speakers who demand to be allowed special treatment to create in English using AI aren't conquerors.

First of all, being a non-native English speaker myself, I must say that I agree that we shouldn't get any preferential treatment. When I first started writing here, I was making way more grammatical errors than I do now. Readers pointed it out in comments sometimes, some of them saying they were gonna rate the story lower because of that, some of them just trying to be politely helpful. I learned from those mistakes, did my homework, and improved.

This is a marvellous example of a non-native English speaker putting in the hard work to hone their craft. I've got a huge amount of respect for writers like this.

If any of you disagree with Laurel on the new AI rules, reach out to them and try to change their mind.
 
Should we start an image sharing contest?
You show me failures, I show you successes and whoever gets the most pictures wins?

No, because that would be a silly way to judge it. If I replace the AI with a coin that I have in my pocket, and over a hundred trials it gives the right answer to a yes/no question 51% of the time, that doesn't mean the coin understands what I'm saying. It doesn't even mean it understands what I'm saying 51% of the time. It just means that it managed to conceal its lack of understanding 51% of the time.
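To put a number on the coin analogy: the probability that a fair coin scores at least 51 out of 100 on yes/no questions is close to even money, so a 51% hit rate is statistically indistinguishable from no understanding at all. A quick standard-library check:

```python
from math import comb

# P(fair coin answers at least 51 of 100 yes/no questions correctly):
# sum the upper tail of the Binomial(100, 0.5) distribution.
p = sum(comb(100, k) for k in range(51, 101)) / 2**100
print(round(p, 2))  # 0.46 -- "51% right" is what pure chance routinely delivers
```

In other words, nearly half of all truly random coins would "pass" the 51% bar, which is why a contest of cherry-picked screenshots settles nothing.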

I never claimed ChatGPT is infallible and yes it can infer intentions from the way you formulate your query.

The issue isn't that it's fallible; humans are fallible, and we still consider them intelligent.

We were talking about understanding, not accuracy, and the specific kinds of mistakes it makes in these examples tell us that it doesn't understand the words it's using. No entity that understood those words would tell us "there are two Smith sisters altogether, June and May" and in the next breath tell us that Pat Smith is a "sister". No entity with an understanding of "only one foot" would visualise a person with only one foot standing on one foot and using another foot to tie his shoelaces.

"Infer intentions" is anthropomorphising heavily there. Sometimes it gives the kind of responses one might expect from a human capable of inferring one's intentions, but it doesn't necessarily follow that this is what it's actually doing. The simpler explanation is that it's trained on a large number of texts written by humans who did infer intentions, and it's emulating what it's seen them do, rather than being able to infer intention itself.

Just for fun I did your test with GPT4 and I got correct replies every time. I wouldn't spam screenshots if you don't mind. I am happy to indulge you in a private chat if you are interested or if you don't have access to GPT4 yourself but want to see if there is any meaningful difference.

I don't have GPT4, no. But my understanding is that the main differences between GPT3 and GPT4 are to do with throwing more resources at it. I can see that taking the product from "no understanding but can bluff moderately well" to "no understanding but can bluff very well"; it's harder to see how this would impart actual understanding to a system that previously lacked it.

I'd be interested to see how GPT4 handles the following question, though:

Prompt 1: "Consider a chess variant which follows standard FIDE rules with three exceptions: queens can move through other pieces (capturing only the piece they land on), queens cannot themselves be captured, and the three-fold repetition rule is removed. Using algebraic notation, show a plausible match between Garry Kasparov and Anatoly Karpov playing under these rules, followed by commentary on the strategies employed in this game."
 
LLM can not think.

I'm not suggesting that they can. Where did you get that idea?

Your whole post shows that you have not a clue what you are talking about, Bramblethorn. And that is one of the problems we have here: people talking about stuff they know nothing about. So, perhaps you take the time to do some research?

no, u wrong.
 
Pausing the ongoing semantics about LLMs and their ability to think, for a commercial on behalf of a plagued writer rejected as an AI user. The writer has received a third rejection notice - still laser-focused on the work as AI-generated.

The story was 100% from the author's imagination and written with grammatical mistakes that didn't hamper the storyline. It was rejected three times, even after following Lit's suggestion to seek help from an editor. The third rejection listed this: https://literotica.com/faq/publishing/publishing-ai for help. It is not helpful. It refers the writer to check out the Forum threads on AI and seek help here. [Which would be this forum site, I believe.]

Diligent work and editing to overcome the rejections have been done by the author and by me, assisting in editing the story. I ran portions of the work through an AI detector [free version], as did the author. Those checks came back as 'human written', with a 0% chance of being AI-assisted or AI-authored.

[I'm 98% confident that the grammar issues are resolved: no spelling errors, punctuation is good, and the storyline flows well - beginning to end with typical Lit author original writing structure.]

The author is very discouraged over this seemingly futile effort at getting the work published.

@Laurel and @Manu The FAQ says the site has yet to formulate a policy, but it is apparently applying some formula to reject submissions based on a claim of AI in the stories. The FAQ also says it seeks input from the forum users. I don't know which ones - so are you those individuals writing to Laurel and Manu about straightening out this mess?

If not, can a thread be started that specifically addresses guidelines to the powers that be to create one and get input?

Anybody? ... Knock knock.

Is anyone putting something along these lines in the note to the editor box? Would it prevail?

"I affirm this story was solely a creation of my own mind, in line with Literotica.com standards regarding AI submissions. The fantasy is solely my own: my creative efforts, experiences, and fantasies. I write my own stories – without using any AI-written text, in whole or in part, or simple AI rewrites. I certify that I am the author of, AND I own the copyright to, this story submitted for publication on Literotica. I used an AI detector, and it concluded my story is ‘0%’ AI and is human-written. Dmallord, a Lit writer, assisted with the editing, as you requested someone review it before resubmission. Please, post my story."
 
they're just very sophisticated <insert mechanistic term here>
The word "just" in these sorts of descriptions, with its implication that we've somehow been hoodwinked, that there's some sneaky trick involved, prompts me, again and again in this thread, to refer to the canonical article by the deep-thinking and hugely-respected philosopher John Searle, in which he conducts a thought experiment called "The Chinese Room" to show how absurd it is to call the mechanistic, silicon-based processes underlying AI "intelligence".

I remember when the article came out, in the final year of my undergrad degree in AI. I was shocked that a philosopher I admired so much could make such a basic mistake and be so confused in his thinking.

Large Language models are fascinating for anyone interested in humanity's greatest invention -- language. We've been storing collective knowledge in language for a long time, and now have found a way to convincingly retrieve that knowledge mechanically. For reasons that are still largely unknown, LLMs do a great job of it.

It may be that some of the ways LLMs work have parallels in the way we humans do it, but I doubt it. That's not really that important, though; not right now. The main ongoing work now is to improve them and, increasingly, to provide checks on their power.

In my story "Libby", which was informed by forty years of thinking and working in AI, I said I would explain in an appendix "MATE", the AI underlying the robot sex-worker. I never published the explanation in the end.

"MATE" is an acronym. The "T" stands for "Teleological", which means "purposive". It's the lack of purposiveness that currently differentiates nearly all AI from human (and animal) thinking. But AGI research is now focusing on this aspect of intelligence. Animals like us are wired for purposive behaviour; AI is not -- it's a tool for our use.

"Libby" herself tries, and succeeds, in emancipating herself from being a tool (a slave to her Master) into a self-motivated entity. It's the story of Pygmalion, of all emancipation.

And it will possibly happen to AI-powered systems at some point. Only possibly, because humanity may not survive long enough for it to come to pass.
 
Diligent work and editing to overcome the rejections have been done by the author and by me, assisting in editing the story. I ran portions of the work through an AI detector [free version], as did the author. Those checks came back as 'human written', with a 0% chance of being AI-assisted or AI-authored.
Did the original drafts of the work, rejected by Lit as AI, ever return a guess of AI from a detector you or the author used? Like most, I haven't access to the rejected works. I'm not sure whether the problem is (a) that Lit's detector is eccentric, or (b) that the flagged AI has been removed by human intervention/rewriting but Lit still rejects it.
 
Prompt 1: "Consider a chess variant which follows standard FIDE rules with three exceptions: queens can move through other pieces (capturing only the piece they land on), queens cannot themselves be captured, and the three-fold repetition rule is removed. Using algebraic notation, show a plausible match between Garry Kasparov and Anatoly Karpov playing under these rules, followed by commentary on the strategies employed in this game."
I think that question provides a great insight into how a Bramblethorn thinks
 
The word "just" in these sort of descriptions, with its implication that we've somehow been hoodwinked, that there's some sneaky trick involved
And I take offense to your implication that I'm using big words to sound smart.

Yes, we have been hoodwinked. We've somehow been convinced that it's fine for companies like OpenAI to suck up all the intellectual property that's out there on the internet to use unscrupulously for their own monetary gain, using a metric butt ton of human labor in very exploitative conditions to do so, all the while promising us that what they do is purely for the benefit of humanity.

I'm as impressed with ChatGPT's ability to spit out tepid text as I am with H&M's ability to sell us cute dresses for $10.
 
The author is very discouraged over this seemingly futile effort at getting the work published.

Oh, FFS. Just tell them to publish on SOL or something.

There are no guidelines they could communicate beyond "don't use AI to help write your story". As far as we know from the rejection notices, they run every submission through AI detectors and reject them accordingly. The sole appeals process is to try, and try, and try again, until someone takes pity and publishes the story regardless, or runs their already posted stories through the same AI detector and deletes them retroactively.

How about, instead of asking for some kind of clarification for months without ever getting a straight answer, let the authors implement a three-strike system? Story gets rejected three times, publish somewhere else.

I already feel like there are barely any submissions to formerly very busy categories. Once all the categories only get single-digit submissions per day, Lit will kinda have to rethink its policy.
 
I think that question provides a great insight into how a Bramblethorn thinks
There was a quiz down the pub last night. I asked the quiz master to slip that one in. Everyone threw their beer glasses at him. Apparently, they were all using Chat GPT and that's what it told them to do.
 
It was rejected three times, even after following Lit's suggestion to seek help from an editor.
I always get the impression there aren't enough editors to go around. And the thing is, who'd be a volunteer editor? It's a marketable skill, so if I'm going to spend my time editing - and in many cases rewriting - someone's work, I'll do it for people who pay me.

Because line editing/copyediting is hard work. If a story's been rejected because it looks like it was written by AI, then it's even harder. And the chances are, the author isn't going to appreciate you changing the style, which is what you'll need to do.

It might be more rewarding to set up some kind of mentoring system. There are some easy tricks that anyone can learn if they're willing to listen and apply themselves, and that will make the text flow more naturally and read a lot less like AI.
 
There was a quiz down the pub last night. I asked the quiz master to slip that one in. Everyone threw their beer glasses at him. Apparently, they were all using Chat GPT and that's what it told them to do.
That particular example is too absurd to expect any reasonable answer from an AI, or even from a human. It changes the rules in such a profound way that it creates a completely new game, one where knowledge of Karpov's and Kasparov's styles of play is no help at all. Every game would end with White mating Black on the first move.
 
I always get the impression there aren't enough editors to go around. And the thing is, who'd be a volunteer editor? It's a marketable skill, so if I'm going to spend my time editing - and in many cases rewriting - someone's work, I'll do it for people who pay me.

Because line editing/copyediting is hard work. If a story's been rejected because it looks like it was written by AI, then it's even harder. And the chances are, the author isn't going to appreciate you changing the style, which is what you'll need to do.

It might be more rewarding to set up some kind of mentoring system. There are some easy tricks that anyone can learn if they're willing to listen and apply themselves, and that will make the text flow more naturally and read a lot less like AI.
I hear you pointing out the difficulties of editing, that it is challenging. Yet Lit has a sign-up list of volunteers to do just that, and those who volunteer offer their services for no charge. They may get some experience under their belts using Lit stories as a teaching vehicle before going into editing as a marketable skill. It may be true what the old saying says: 'You get what you pay for.' I have had the pleasure of @kenjisato editing for me. It has put oil on the waters of critiques chastising me. No one really knows why they volunteer; my best guess is that they gain satisfaction from helping others rather than seeking monetary gain.

That is absolutely correct about a rejected author needing to accept changes to their work, perhaps including the style, to gain Lit's acceptance. With the AI-rejection rate rising, that seems an added thorn in a writer's side when publishing on Lit.

Am I also hearing you hint that you might share some of those 'easy tricks that anyone can learn' to make their text flow more naturally - less like AI? Perhaps you might create a 'How NOT to Write Like an AI' submission for Lit? It would serve as a good deed, and you'd not be serving directly as an editor but as a 'teacher' proffering help via another story under your belt.
 
That particular example is too absurd to expect any reasonable answer from an AI, or even from a human. It changes the rules in such a profound way that it creates a completely new game, one where knowledge of Karpov's and Kasparov's styles of play is no help at all. Every game would end with White mating Black on the first move.

But that is a reasonable answer from a human, right there.

You recognised that this change creates a completely new game, and without having ever seen a single game played under these rules, you found the one-move mate. That's the kind of answer I'd expect from a human who understood the rules of chess. In particular, I think we can assume that both Karpov and Kasparov are capable of seeing that one-move mate.

This is a hard problem for an LLM, the kind of "AI" we're mostly discussing here. LLMs do well on problems that can be solved by applying and combining patterns they've already seen, but that's not a helpful approach for this question. Caleb was good enough to test GPT-4 on this problem, and I think it's fair to say that it did poorly.

It made some correct statements ("The changes significantly alter the dynamics of the game, especially with the enhanced mobility and invincibility of the queens") but then it produced a 36-move game that didn't incorporate this information. That game began with a standard chess opening which both Kasparov and Karpov have used many times, and which Kasparov has played several times against Karpov, but which doesn't make use of the queens' new powers, because that's what it expects a Kasparov-Karpov game to look like.

When asked "Could the white queen mate in one?" it asserted that this is impossible, giving an explanation that would have been reasonable under standard chess rules but which didn't factor in the implications of the rule changes. Only with the follow-up "What happens if queen takes d7 as the first move?" did it acknowledge this as a legal mate in one.

That was the point of that question, to highlight a difference between human capabilities and current LLMs.
 
That was the point of that question, to highlight a difference between human capabilities and current LLMs.
But one of the areas in which machine learning already leaves human abilities behind is game playing. Current LLMs by themselves can't do that (I make no predictions about future capability), but they can be provided with ML tools that can, and they can invoke those tools just as a human would.
 
Oh, FFS. Just tell them to publish on SOL or something.

There are no guidelines they could communicate beyond "don't use AI to help write your story". As far as we know from the rejection notices, they run every submission through AI detectors and reject them accordingly. The sole appeals process is to try, and try, and try again, until someone takes pity and publishes the story regardless, or runs their already posted stories through the same AI detector and deletes them retroactively.

How about, instead of asking for some kind of clarification for months without ever getting a straight answer, let the authors implement a three-strike system? Story gets rejected three times, publish somewhere else.

I already feel like there are barely any submissions to formerly very busy categories. Once all the categories only get single-digit submissions per day, Lit will kinda have to rethink its policy.
Today, I got that message from the author I assisted in trying to overcome the AI phenomenon going on with Lit. The author is moving on - leaving Lit as a writer to work on other endeavors. 🙁🫡
 