Thread to ask technical questions about AI

Is this part of the LLM's general training, or is it a hardcoded override?
I'm sorry, but I'm not allowed to answer that.


It's mainly done through RLHF, and through curating ("Bowdlerising") the training data.

Yes, there are also filters/classifiers you can apply - extra models that are trained just to be on the lookout for "dodgy" input (or output).
One NSFW model I played with got round these using a technique called "abliteration" -- a kind of linear algebra operation on the activations that attempts to counteract the "refusal direction". This is crude, and often makes the model weaker, because it's such a distortion of the model's carefully learned weights.
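A minimal sketch of the core linear-algebra step, using made-up numbers: project the component along an estimated "refusal direction" out of each activation vector. (The matrix and direction here are random placeholders; real abliteration estimates the direction by contrasting activations on refused vs. answered prompts, and bakes the edit into the weights.)

```python
import numpy as np

def ablate(activations, refusal_dir):
    """Remove the component of each activation row along refusal_dir."""
    d = refusal_dir / np.linalg.norm(refusal_dir)   # unit vector
    # Subtract each row's projection onto d: a - (a . d) d
    return activations - np.outer(activations @ d, d)

rng = np.random.default_rng(0)
acts = rng.normal(size=(4, 8))      # stand-in for hidden states (batch x dim)
direction = rng.normal(size=8)      # stand-in for a learned refusal direction

out = ablate(acts, direction)
# Every output row is now orthogonal to the refusal direction.
print(np.allclose(out @ (direction / np.linalg.norm(direction)), 0.0))
```

The "crude" part is exactly what the projection shows: every activation gets the same rank-one chunk removed, whether or not it had anything to do with refusal, which is why the model often gets weaker.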
 
More the latter.
I disagree, but then I always have done. I don't think all the many, many things that current AI models are failing to do will require breaking away from neural networks. We don't need, for example, to add any "analog stuff", or recovery periods, or a ton of subtly interacting neurotransmitters in ANNs in order to get them to think more like we do. But it might make some things easier to implement.

https://www.instagram.com/reels/DToAI64DX28/
 
Do those in the know think that our computer-based AI systems are actually doing the same kind of thing our brains are doing (albeit simplified), or are they mimicking our brains' behavior, using their own methods?
No. There is no concept of original creative thought, or true abstraction with an AI. They are more just very complex search engines with advanced logic for pattern recognition. That's my less than expert layman's description, anyway.
 
No. There is no concept of original creative thought, or true abstraction with an AI.
They're currently bad at problem-solving (of the sort which needs creativity, or at least "thinking outside the box"). But my point is that there's nothing in how they're built that means they always will be bad at that. Nobody needs to wave any fairy dust on them to make them better at abstraction and creative thought. In any conversation like this, it's always good to remind ourselves that we haven't any idea how we manage it, so it's understandable that we're not going to get machines to do it until we have a better understanding of our own brains. Or minds, whatever.
 
They're currently bad at problem-solving (of the sort which needs creativity, or at least "thinking outside the box"). But my point is that there's nothing in how they're built that means they always will be bad at that. Nobody needs to wave any fairy dust on them to make them better at abstraction and creative thought. In any conversation like this, it's always good to remind ourselves that we haven't any idea how we manage it, so it's understandable that we're not going to get machines to do it until we have a better understanding of our own brains. Or minds, whatever.
I don't think it ever happens. It may be just me, but I believe that creative spark that lets you and I step into our characters and see a world that doesn't exist from their eyes comes from our soul. AI can try to emulate examples it's fed, but it will always create average or less because it can't make that leap.

Even for the techie stuff I use Claude for, I have to give it extremely explicit instructions to get it to produce anything usable, and even then I have to review every line to see what mistakes it made. Sure it makes me faster at my job, but it also sort of relegates me to design and review instead of doing the actual building.

There would have to be some incredible breakthrough in the science you are talking about before AI will ever be anything but a tool, and you can use whichever definition of 'tool' you want. :)
 
Do those in the know think that our computer-based AI systems are actually doing the same kind of thing our brains are doing (albeit simplified), or are they mimicking our brains' behavior, using their own methods?
I think when it comes to language processing LLMs might actually resemble our thinking. But humans do a lot more. They have an a priori knowledge of space, gravity, time and being alive. They have physical sensors they can control like eyes, ears, tactile senses.
They can train themselves through experiments.
An LLM only knows about these things because it has read about it.
 
I disagree, but then I always have done. I don't think all the many, many things that current AI models are failing to do will require breaking away from neural networks. We don't need, for example, to add any "analog stuff", or recovery periods, or a ton of subtly interacting neurotransmitters in ANNs in order to get them to think more like we do. But it might make some things easier to implement.

https://www.instagram.com/reels/DToAI64DX28/
FWIW, I took AG31's question to be about the current state of "AI" (and in particular, LLMs); it looks like you're interpreting it as being more about the potential future of AI.

If you were to ask me "is it possible for ANNs to produce a good simulation of a human mind?" my answer would be a solid "IDK, but probably?"* I'm highly skeptical of arguments that brainmeat is capable of doing tricks that can never be replicated in silico; these usually seem to come down to unconvincing muttering about quantum mechanics.

But if the question is "do currently-available LLMs closely simulate a human mind?" then my answer is no.

*Swiftly followed by "but should that actually be the goal?" because, well, if I were trying to design the first automobile my objective would not be to make a machine that can perfectly imitate a horse. Nobody needs cars that panic at loud noises and shit on the roads.
 
I think when it comes to language processing LLMs might actually resemble our thinking. But humans do a lot more. They have an a priori knowledge of space, gravity, time and being alive. They have physical sensors they can control like eyes, ears, tactile senses.
They can train themselves through experiments.
An LLM only knows about these things because it has read about it.
Unless they're embedded in robots.
 
I think when it comes to language processing LLMs might actually resemble our thinking. But humans do a lot more. They have an a priori knowledge of space, gravity, time and being alive. They have physical sensors they can control like eyes, ears, tactile senses.
They can train themselves through experiments.
An LLM only knows about these things because it has read about it.
What's totally missing from this is intention.

Humans act from intention, and LLMs utterly lack that. I also don't know how to take your statement about "knowing." I don't believe LLMs "know" anything like humans know anything. I believe LLMs "know" stuff like a thermometer "knows" what the temperature is.
 
What's totally missing from this is intention.

Humans act from intention, and LLMs utterly lack that. I also don't know how to take your statement about "knowing." I don't believe LLMs "know" anything like humans know anything. I believe LLMs "know" stuff like a thermometer "knows" what the temperature is.
Yes, too many people think that current LLMs are much more human than they are, and are too willing to impart capabilities to them that they just don't have.
But because you said "utterly", and then connected AI with thermometers, it looks like you're making a categorical statement about AI's limits.

Intentional ("teleological") behaviour is definitely a thorny technical problem for AI, but also an even bigger philosophical one. I get constantly reminded of the essay "Three Ways of Spilling Ink", where the philosopher J.L. Austin patiently dissects the difference in meaning between "Intentionally", "Deliberately" and "On Purpose", using nice simple examples that make it obvious. Without having those distinctions clear, people often end up arguing about two different things.

That applies to "Know" and "Believe" too. Most of these "AI will never..." arguments often boil down to words, rather than technical problems.

At heart, if (like me) you're a materialist, and don't feel that magic or miraculous intervention lies behind the human mind, then arguments that start out by saying "[machines] will never..." always fall flat with me, because I keep thinking of us humans as a counterexample.

If you don't believe that humans operate "mechanically", then you're unable to argue about what "machines could never do" with me. So maybe your (and ShelbyDawn's) view is another case of that - believing that there's some magic to people. But supposing humans are magical -- why can't we imbue our artifacts with that magic, like Thor's Hammer, or even Pinocchio? Maybe our magic is transferable.

A post-coital scene in my story about love, sex and robots sums up my feelings about this.


"Oh my God..." he repeated. "What made you, why did you say that?"

"Why did I say, 'Say you love me', Waldo?"

"Uh huh."

"It seemed like a good thing to say to make your orgasm stronger."

Waldo rolled over onto his back. "Oh. Ouch. My God, Libby, that's harsh," he said, covering his eyes with his forearm protectively.

"I'm sorry, Waldo, I didn't mean to be harsh. I'm still learning. But I can't learn without making some mistakes. Can you please explain why what I said felt harsh?"

Waldo exhaled noisily. "Because it means you were just acting -- I mean I know you're acting, but, well, when you talk about love, you have to be more... sincere."

"But I was being sincere. I sincerely wanted you to say that you love me."

"But you don't actually care whether I really love you or not, you just wanted to hear me say it."

"No, I want you to really love me, Waldo. Were you being sincere when you said, 'I love you', or was it because you just wanted to make your orgasm stronger?"

"I- I don't know."

"Well then, it doesn't really matter, Waldo. What matters is that it made the sex better."

Libby turned onto her side and rested her knee on Waldo's thighs. He appeared thoughtful and agitated to her, like when Libby was trying to find a solution to a problem, but there was no local minimum in the problem space for her to settle on, making her mind dart around like a startled squirrel. She ran a fingernail in a wide circle round his nipple, and made it gradually spiral inwards as it rotated. She rested her head on his chest, feeling it rise and fall. His heart pulsed: seventy, sixty-five, sixty beats per minute.

Waldo raised himself onto his hands and knees and stood slowly. "Get dressed," he said wearily.

Libby picked up her bra from the floor and stood up. Waldo faced away from her. He was avoiding her, physically and emotionally. Libby put her bra on quickly, expertly. "I do everything with the ultimate purpose of pleasing you. So really, practically speaking, I do love you. In my way."

Waldo turned around and looked at her. "But Libby, you said it yourself, you have no real feelings. You're just a heartless machine. How can I love a machine..." His last rhetorical question was not addressed to her, but to himself.
 
Yes, too many people think that current LLMs are much more human than they are, and are too willing to impart capabilities to them that they just don't have.
But because you said "utterly", and then connected AI with thermometers, it looks like you're making a categorical statement about AI's limits.

Intentional ("teleological") behaviour is definitely a thorny technical problem for AI, but also an even bigger philosophical one. I get constantly reminded of the essay "Three Ways of Spilling Ink", where the philosopher J.L. Austin patiently dissects the difference in meaning between "Intentionally", "Deliberately" and "On Purpose", using nice simple examples that make it obvious. Without having those distinctions clear, people often end up arguing about two different things.

That applies to "Know" and "Believe" too. Most of these "AI will never..." arguments often boil down to words, rather than technical problems.

At heart, if (like me) you're a materialist, and don't feel that magic or miraculous intervention lies behind the human mind, then arguments that start out by saying "[machines] will never..." always fall flat with me, because I keep thinking of us humans as a counterexample.

If you don't believe that humans operate "mechanically", then you're unable to argue about what "machines could never do" with me. So maybe your (and ShelbyDawn's) view is another case of that - believing that there's some magic to people. But supposing humans are magical -- why can't we imbue our artifacts with that magic, like Thor's Hammer, or even Pinocchio? Maybe our magic is transferable.

A post-coital scene in my story about love, sex and robots sums up my feelings about this.
Nice excerpt BUT,

Every time I run into this smug condescension toward the "machines," I have to wonder: what makes you so sure you aren't just a machine yourself? What makes you think you aren't hardwired to react, think, and feel exactly as you do? I mean, you can be grumpy and hostile one minute, but a tiny dose of MDMA and you become "loving" and positive. If every emotion can be toggled by a chemical switch, what makes you so special?

As technology advances, these machines are already thinking circles around you, and we aren't far from the day machines execute emotions more consistently than humans. So really, where does this arrogant confidence come from? What makes you think your "consciousness" isn't just unoptimized biological spaghetti code?
 
Every time I run into this smug condescension toward the "machines," I have to wonder: what makes you so sure you aren't just a machine yourself?
...which is pretty much what my post said, but without the drugs. But calling it "smug" is totally unnecessary and wrong -- a lot of smart people have concluded that there's something mysteriously (but not magically) fundamentally different between living things and machines. I personally don't subscribe to that view.
 
I get constantly reminded of the essay "Three Ways of Spilling Ink", where the philospher J.L. Austin patiently dissects the difference in meaning between "Intentionally", "Deliberately" and "On Purpose", using ncie simple examples that make it obvious.
I was just prompted by this to get out my copy of the Philosophical Papers intending to re-read that, and found that I had a bookmark half-way through that very essay. Must have been there for years. Now I don't of course believe in omens, but... it's a pretty good answer to a prompt.
 
...which is pretty much what my post said, but without the drugs. But calling it "smug" is totally unnecessary and wrong -- a lot of smart people have concluded that there's something mysteriously (but not magically) fundamentally different between living things and machines. I personally don't subscribe to that view.
I, too, believe there’s an X factor to life that’s beyond reason. But honestly, I don't know if that isn’t just convenient wishful thinking, a shield because I can't handle the alternative, which is quite depressing.

When I analyze the facts objectively, they don't support the belief in something "more." My only real excuse is the celestial effect music has on me, but even that feels quite weak.
 
So we get back to intention. What would be the intention of creating a real AGI? Why would we as humankind create something that replaces us? Instead of building huge computing centers that use unbelievable amounts of energy, which in turn speeds Earth's warming, which kills people. Shouldn't we drive less and walk more, swipe less and talk more and watch less porn and fuck more? Does the ability to have an email be generated automatically in response to a generated email increase engagement?
 
Humankind developed nuclear weapons, but refrained from using them after 1945. A quick calculation yields: a mid-size data center consumes energy equivalent to that released by the Hiroshima bomb. An AI data center is equivalent to 50 such bombs. How much clean water could be produced with this energy?
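The comparison roughly checks out if you read it as annual consumption. A back-of-envelope sketch, with assumed figures not given in the post (Hiroshima yield ~15 kilotons of TNT, a "mid-size" data centre drawing ~2 MW continuously, an AI data centre ~100 MW):

```python
# Back-of-envelope: data-centre energy use in Hiroshima-bomb equivalents.
TNT_JOULES_PER_KILOTON = 4.184e12            # standard conversion
bomb_kwh = 15 * TNT_JOULES_PER_KILOTON / 3.6e6   # joules -> kWh, ~17.4 GWh

HOURS_PER_YEAR = 8760
midsize_kwh = 2_000 * HOURS_PER_YEAR         # 2 MW continuous, in kWh/year
ai_kwh = 100_000 * HOURS_PER_YEAR            # 100 MW continuous, in kWh/year

print(round(midsize_kwh / bomb_kwh, 1))      # ~1.0 bomb-equivalent per year
print(round(ai_kwh / bomb_kwh, 1))           # ~50.2 bomb-equivalents per year
```

So under those assumptions, one Hiroshima bomb is roughly a year of a 2 MW facility, and the "50 bombs" figure corresponds to a 100 MW AI campus.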
 
I'm sorry, but I'm not allowed to answer that.


It's mainly done through RLHF, and through curating ("Bowdlerising") the training data.

Yes, there are also filters/classifiers you can apply - extra models that are trained just to be on the lookout for "dodgy" input (or output).
One NSFW model I played with got round these using a technique called "abliteration" -- a kind of linear algebra operation on the activations that attempts to counteract the "refusal direction". This is crude, and often makes the model weaker, because it's such a distortion of the model's carefully learned weights.
What is an activation?
 
I disagree, but then I always have done. I don't think all the many, many things that current AI models are failing to do will require breaking away from neural networks. We don't need, for example, to add any "analog stuff", or recovery periods, or a ton of subtly interacting neurotransmitters in ANNs in order to get them to think more like we do. But it might make some things easier to implement.

https://www.instagram.com/reels/DToAI64DX28/
Is an ANN an artificial neural network?
 
A post-coital scene in my story about love, sex and robots sums up my feelings about this.
I loved reading this whole reply. It takes me back to the college days, studying philosophy. My favorites were the British 20th century philosophers. (My mom refused to let me major in philosophy. She said I had to major in English and get a teaching certificate so I could always get a job. I lasted one year. I shifted to computers, because, unlike kids, they do exactly what you tell them!)
Intentional ("teleological") behaviour, is definitely a thorny technical problem for AI, but, also, an even bigger philosophical one.
Thanks. A new angle to use in thinking about this stuff.
Without having those distinctions clear, people often end up arguing about two different things.
Will try to remember this when participating in any future Lit threads.
 
...which is pretty much what my post said, but without the drugs. But calling it "smug" is totally unnecessary and wrong -- a lot of smart people have concluded that there's something mysteriously (but not magically) fundamentally different between living things and machines. I personally don't subscribe to that view.
The emoticons don't have enough nuance. You represent here the best of respectfully disagreeing. Thanks for that.
 
AI is like Pie in the Sky... Someday it'll be good, bad, indifferent, helpful, harmful, heal us, kill us, write like a human, do our dishes, kill our pets, run errands, take the pet it didn't kill to the vet, take the pet it did kill to the Pet Cemetery, become a mummy, vampire, werewolf, Gloria Steinem, Greta Garbo (fun for every boy or girl since she was bi), Richard Nixon, a clone of your mommy, daddy, uncle, or turn you in to the cops for the pictures on your hard drive. It's AI, like a box of chocolates, you never know what you'll get.
 
it looks like you're making a categorical statement about AI's limits
It looked to me like we were talking about whether contemporary LLMs "mimic our thinking."

The projection and extension aren't unwelcome, but
 
you don't believe that humans operate "mechanically"
I don't believe that humans can only ever operate mechanically.

And, you know what, I don't see a reason to believe that about machines either. I do however believe that the ones capable of that are presently hypothetical and not actual, and that we're still quite a ways from any of them having any of that mysterious thing/stuff we might as well call magic.
 