Discussion of the philosophy and morality of the intersection of cloning and artificial life and sex

https://zipdo.co/virginity-age-statistics/

The global median age of virginity loss is 17.4 years; the United States median is 17.8 years. A median below 18 means that an absolute majority lose their virginity before the age of 18.

The rules are the rules because bad apples spoil the bunch, not because it is an accurate depiction of sexual activity.
Thanks for digging that out :) Those figures are much closer to what I was expecting than your first indications.
Basically, 18 still seems like a suitable limit when there needs to be one.
 
I guess 17.8 doesn't surprise me that much. I would have guessed probably a year older as a median.

In terms of story writing, it's not hard to turn 17.8 into 18.1 without noticeably changing the story, which means the majority of cases are still easy to depict within the rules.

And the rule avoids enormous problems while ruling out some interesting stories.
 
You're already in the realm of far enough scifi to do what you want. You could make the clones be implanted with a ready-made consciousness copied from a real but dead human, memories wiped and generic ones implanted along with skills needed for sexfights/whatever you need. So mind and body are over 18 and you can play with the skillsets as an optional extra.
 
The problem is, it's wrong to have sex (and while we're at it, do lots of other things) with someone who can't meaningfully consent to it, and the vast majority of children are too lacking in life experience, mental development, and personal autonomy to meaningfully consent, especially not when the other party is an adult. They don't know what they're doing and they aren't used to making decisions for themselves and others with long-term consequences, and adults can directly and indirectly hold all kinds of things over their heads. It's generally culturally and morally acceptable for parents or other authorities to make medical and financial decisions for teenaged and younger children, on the assumption that they don't know what they're doing yet. For sexual decisions involving an adult and a child, the assumption is simply "no."
In reading the OP's initial question, the concept of consent is what popped up for me. Regarding the age thing, to me, it seems like they are cloned adults, so the fact that they were recently created isn't a factor. However, what is a factor is their knowledge and ability to make decisions for themselves, rather than have someone else make those decisions and hope they are good ones.
 
In terms of morality, I think that we can all agree that it is morally wrong to enslave people and force them to have sex fights for our amusement like a hard-X version of Django Unchained. We're way past discussions of meaningful consent, we're talking about vat grown slaves being forced into public sex fights. The immorality of the situation is a marker of it being dystopian science fiction. The purpose of the story is sexual fantasy and social commentary.

Which means that the only really important thing is the site rules about what can and cannot be posted. The site has a zero tolerance policy on underage sex, which extends to characters who are underage along some important axis and even characters who pretend to be underage. These rules exist because Literotica does not want to be in a position where even a quote clipped out-of-context could seem to show them hosting child abuse content.

These rules are not about ethics. They are about keeping the site out of legal trouble. Is underage sex a valid thing to discuss in the abstract, through the social-commentary lens of dystopian science fiction? Absolutely. In the real world, child marriage is still legal in 34 states, and silence on that issue is a tacit endorsement of that status quo. However, Literotica stories are not the place to have those kinds of discussions, because Literotica is a porn site that is desperately trying to avoid the Eye of Sauron.

Stories on this site involving "created" sex partners usually have robots or golems that have adult appearance, intelligence, and mannerisms loaded into them from the jump. And that's because they have to check all the boxes of being equivalent to an adult human to meet the inflexible rules of the site. A "rapidly grown" biological sex partner still has to be basically the same thing, because the rules that the sex robots and sex golems fall under still apply. At which point they aren't really different from the robots, and it doesn't really matter that they are technically grown at all.

And that's why I would still suggest sending the catgirl slaves to catgirl school. It affords you an opportunity to make social commentary about an educational system whose literal purpose is to create playthings for corporate overlords, but importantly it makes it unambiguous that your catgirls are genuinely over the arbitrary age that all the characters have to be in order to be allowed to be posted on this site.
 
The really interesting discussion will come when somebody tries to make the case that such robots (ok, any robots with a given level of intelligence and such) are in fact sentient individuals, ones deserving of legal recognition.
That discussion has already begun, e.g. https://yalelawjournal.org/forum/the-ethics-and-challenges-of-legal-personhood-for-ai

I have mixed feelings about it. On the one hand it's an interesting philosophical question and one that we would definitely want to consider if we were close to artificial general intelligence.

On the other hand, I think currently it's being promoted as a bit of a smokescreen. It draws attention away from less glamorous conversations of much more pressing importance, like "what do all these new LLM data centres mean for water supply?" or "how should we be treating the humans who are part of this industry?", and redirects it in directions less likely to result in inconvenient restrictions, and it gets potential customers fixated on "AIs" as some kind of human-level intelligence rather than thinking about the ways in which current "AI" products fall drastically short of that mark.
 
First of all, thanks for the citation. I'm teaching a course in the spring on the ethics and social impact of computing technologies, where I make students pick topics and provide readings for the other students (with prior approval by me). Some need help to find reasonable resources.

Beyond this, I agree wholeheartedly with your smokescreen argument.
 
DAIR might provide some interesting food for thought: https://www.dair-institute.org/

I think I saw this via DAIR or somebody adjacent to them - an example of the "right now" issues that deserve a bit more oxygen than they're getting. It has potential for story inspiration here, though it'd be a pretty bleak kind of story: https://data-workers.org/wp-content/uploads/2025/12/The-Emotional-Labor-Behind-AI-Intimacy-1.pdf
 
Thank you for the answer. I will see if I can refine the post and find a different part of the forum to share it in, so I can approach it from a more general 'morality/philosophy' angle.

I'll say this as a writer born and raised in the cyberpunk genre: leave the morality/philosophy discussion of your proposal to your story, or to future stories you'll set within that microcosm. In doing so you'll enrich your story far more than by keeping it outside of it.
 
Assuming that your clones/robots/etc have free will and the ability to make choices, you don't have any of these issues.
You get into Douglas Adams territory. What if an animal is genetically designed to want to be slaughtered and eaten?

My heroine, Nix, has the basic underpinnings of a sexbot, but escapes slavery (not that her actual slavery was that unpleasant) and begins to take her own decisions. Then she constantly agonizes about whether what she thinks or feels is ‘real’ or ‘programming.’ E.g. is she really sex positive, or is that her underlying sexbot code? But that’s a very human state of being.
 