On being human

Reasoning is intimately tied to task planning, to breaking tasks down into steps. Human brains have distinct areas for this (mainly in the prefrontal cortex, the "sensible, adult" part of the brain that's last to develop, in our early 20s). It's a stretch for pure current LLMs, which are "decoder-only" models, to do reasoning, and I think we're fighting against the architecture in trying to get LLMs to do this. But if, like me, you believe that reasoning is done by human brains, and that artificial neural networks perform similarly to biological ones, then it's clear that reasoning can be done by an AI, although the best architecture for this may not be the one currently used by most LLMs.
That's roughly where I stand, and as you'd probably recall it's how it worked in my own AI story here: achieving humanlike machine intelligence required integrating an LLM-type model with something better designed for reasoning. If you were to talk to Erato about a "dog", it'd be estimating suitable feature vectors for "dog" that support arithmetic like "dog – woof + meow ~= cat", but it would also be developing a conceptual model to understand the rules of "dog" more rigorously.
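
(For the curious, here's roughly what that kind of vector arithmetic looks like in code. This is a toy sketch with hand-made three-dimensional embeddings -- nothing to do with Erato or any real model, where the vectors are learned and have hundreds of dimensions.)

```python
import numpy as np

# Hand-made toy embeddings, purely for illustration.
# Dimensions are loosely [pet-ness, feline-ness, bark-ness].
embeddings = {
    "dog":  np.array([0.9, 0.1, 0.8]),
    "cat":  np.array([0.9, 0.9, 0.1]),
    "woof": np.array([0.1, 0.0, 0.9]),
    "meow": np.array([0.1, 0.8, 0.2]),
}

def nearest(vector, vocab):
    """Return the word whose embedding has the highest cosine similarity to `vector`."""
    def cosine(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(vocab, key=lambda word: cosine(vector, vocab[word]))

# "dog - woof + meow" lands nearest to "cat" in this toy space.
result = embeddings["dog"] - embeddings["woof"] + embeddings["meow"]
print(nearest(result, embeddings))  # -> cat
```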

I'm not a meat chauvinist; I see no reason to believe that silicon is fundamentally incapable of producing the same kinds of thought as organic neurons. I do think that trying to brute-force AGI by just doing LLMs more and harder and bigger is likely to be a very expensive dead end.

I also feel like part of the reason the brute-force approach is so popular is that reasoning is hard, it can be fatiguing even for people who enjoy doing it, and many folk have figured out ways to get through life without needing to work their prefrontal cortices much. [snarky inflammatory examples pre-emptively snipped]
Some people think that AIs "learn" from their chats the way we, and other animals, learn from experience, in real time. But they don't, of course. That ability to adapt and learn instantly is something I added to "Libby", the AI in my story. It learned from the data coming in through its senses.
I expect you're far more up to date on the literature than me, but I seem to recall seeing some work on incremental learning for LLMs? But I'm not aware of any of the big products adopting it yet and I don't know how close it is to being cost-effective.
 
I think (haha) the fallacy lies with the number of people who treat LLM AIs as if they're alive and "thinking", whereas my understanding is that LLMs are predictive machines, following algorithms which are ultimately ones and zeros: digital electronics. If you turn the power off, a computer is completely inert.
This is also true of a human brain. We just take our power in the form of oxygen and ATP rather than mains current.
There's no sentience at all. If there are no prompts, "questions being asked", the computer just sits there.
Current LLM products are generally reactive, but that's not a fundamental limitation of the tech. Once you have a model that predicts "what word is likely to come next?" you can easily set it to keep on producing the next word forever. It's just that mostly that's not what people want it to do. We already have James Joyce for that.

(I am not asserting that the output of a LLM running forever would be good, mind. I suspect it'd get circular pretty quickly.)
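
(To make that concrete, here's a minimal sketch of a "keep producing the next word forever" loop using the Hugging Face transformers library. The model name, the sampling and the context-window trim are just illustrative choices, not a recipe.)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")        # any causal LM would do
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("Once upon a time", return_tensors="pt").input_ids

with torch.no_grad():
    while True:                                          # nothing says it has to stop
        logits = model(ids).logits[:, -1, :]             # scores for the next token
        probs = torch.softmax(logits, dim=-1)
        next_id = torch.multinomial(probs, num_samples=1)  # sample one token
        ids = torch.cat([ids, next_id], dim=-1)
        ids = ids[:, -1024:]                             # stay within GPT-2's context window
        print(tokenizer.decode(next_id[0]), end="", flush=True)
```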
 
I think (haha) the fallacy lies with the number of people who treat LLM AIs as if they're alive and "thinking", whereas my understanding is that LLMs are predictive machines, following algorithms which are ultimately ones and zeros: digital electronics. If you turn the power off, a computer is completely inert. There's no sentience at all. If there are no prompts, "questions being asked", the computer just sits there.

It's a variation of the Arthur C. Clarke comment: if people don't understand a technology, it looks like magic.

The issue is that someone put the terms "artificial" and "intelligence" together, instead of "fast predictive machine". No-one would be scared by the latter.
Not disagreeing, but I enjoy playing Devil's Advocate, and I think we can nuance this a bit.

Sentience is primarily about reaction and prediction (with a few very high-level animals capable of reflection beyond the base consciousness of survival). The whole point of having memories and thoughts, evolutionarily speaking, is to help us react to the environment and use prediction to ensure survival. You predict which way the shark is going to swim at you so you can react and go another way. You remember where you stashed your nuts for the winter, and thus you predict they will be there when you return. It's all electrical signals and neurochemicals (unless you believe in mind/body dualism, which is a whole separate argument), in the service of reacting and, when the animal is capable of it, predicting. Some things that are alive aren't even capable of prediction -- they're purely reactive -- whereas AI is able to do higher-order predictions based on calculations, reacting to prompts in order to spit out a reply.

Granted, the prediction is (for LLMs) exclusively popping out one word after another, and biological systems tend not to work at that granular a level, but it wouldn't surprise me if the human brain does this at least to some extent for speech and language. I've had cases where I've "hallucinated" and said a word that kind of fit based on the previous words, but wasn't what I meant; it just happened to fit because I was on autopilot and not paying attention, like when you're using autocomplete on your phone without double-checking which word to use next.

You can also turn off humans. General anesthesia is a good example, because you're completely disrupting the brain's ability to communicate and form anything cohesive, so it's basically just noise underneath. There's no sentience because the higher-order functions that give rise to it can't do what's necessary. You're just unconscious, in the purest, blankest state possible. If a person dies, they just sit there: no sentience. If we get to the point where we can cryogenically freeze someone and bring them back to life, that's pretty much the same as turning a computer off and then on again.

Which isn't to say that LLMs are alive or thinking, but it provides a way of looking at this from a different angle, treating living things as "fast predictive machines."
 
I seem to recall seeing some work on incremental learning for LLMs? But I'm not aware of any of the big products adopting it yet and I don't know how close it is to being cost-effective.
One of the most interesting ideas I've read on this is a very plausible theory of why we (and other animals) dream. It's a sort of GAN training exercise: when we dream, we generate images, and learn to distinguish the "real" ones arising out of the previous day's experience from the fake dream images. That makes a lot of sense to me -- my own dreams often feature distorted versions of my day's experiences. So, rather than real-time, stochastic learning, we build up a batch of data every day, and sift through it using GAN-style training every night.
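
(For anyone who hasn't met GANs, here's a toy sketch of that nightly "training exercise" in PyTorch. Made-up 32-dimensional vectors stand in for the day's experiences; this is purely to illustrate the generator/discriminator loop, not a claim about the neuroscience.)

```python
import torch
import torch.nn as nn

DIM, NOISE = 32, 8  # toy sizes: "experiences" are 32-dim vectors, dreams start from 8-dim noise
generator = nn.Sequential(nn.Linear(NOISE, 64), nn.ReLU(), nn.Linear(64, DIM))      # the "dreamer"
discriminator = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, 1))      # real vs. dreamt
loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def night_of_training(day_batch, epochs=200):
    """One 'night': alternate discriminator and generator updates on the day's batch."""
    real_labels = torch.ones(len(day_batch), 1)
    fake_labels = torch.zeros(len(day_batch), 1)
    for _ in range(epochs):
        # 1) The discriminator learns to tell real experiences from dreamt ones.
        dreams = generator(torch.randn(len(day_batch), NOISE)).detach()
        d_opt.zero_grad()
        d_loss = (loss_fn(discriminator(day_batch), real_labels) +
                  loss_fn(discriminator(dreams), fake_labels))
        d_loss.backward()
        d_opt.step()

        # 2) The generator learns to dream things the discriminator mistakes for real.
        g_opt.zero_grad()
        dreams = generator(torch.randn(len(day_batch), NOISE))
        g_loss = loss_fn(discriminator(dreams), real_labels)
        g_loss.backward()
        g_opt.step()

# A fake "day" of 256 experiences drawn from some fixed distribution.
day_batch = torch.randn(256, DIM) * 0.5 + 2.0
night_of_training(day_batch)
```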

Realtime, "stochastic" learning has been around as long as AI -- the data is constantly coming in as a stream -- and this is very useful for calibrating a model to constantly varying data, where the distribution is drifting. That's used a lot in finance models, to handle the constantly changing market conditions. I've used it myself in a "gaze recognition" system that needs to constantly adapt to changing poses, distances and lighting.
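
(Here's a toy sketch of that kind of streaming, sample-at-a-time learning -- plain NumPy, made-up drifting data, not the finance or gaze models I mentioned.)

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)      # model weights, updated one sample at a time
lr = 0.05            # small step size, so old data fades gradually

true_w = np.array([1.0, -2.0, 0.5])
for t in range(10_000):
    true_w += 0.001 * rng.normal(size=3)   # the underlying relationship drifts over time
    x = rng.normal(size=3)
    y = true_w @ x + 0.1 * rng.normal()    # one noisy observation from the stream
    error = w @ x - y
    w -= lr * error * x                    # one stochastic gradient step on this single sample

print("tracked weights:", w)
print("current truth:  ", true_w)
```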

LLMs are too big to make realtime full-scale learning feasible at the moment -- but they can be continuously adapted using gentle fine-tuning. Because we don't know what's going on inside them, the danger we face if we try to "tweak them" with new information is that they start to forget other things, eventually collapsing in a heap of confusion.
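
(That forgetting is what the literature calls "catastrophic forgetting". One common way of softening it is to mix a small "replay" sample of older data into each gentle fine-tuning step. Here's a generic sketch of that idea in PyTorch -- not any particular product's method, and the function and buffer names are mine.)

```python
import random
import torch

def gentle_finetune_step(model, optimizer, loss_fn, new_batch, replay_buffer, replay_k=8):
    """One small update on fresh data, padded with replayed older examples.

    new_batch and replay_buffer hold (input, target) tensor pairs. The tiny
    learning rate (set on the optimizer) and the replayed examples both try to
    discourage the model from drifting away from what it already knows.
    """
    replay = random.sample(replay_buffer, min(replay_k, len(replay_buffer)))
    optimizer.zero_grad()
    loss = torch.zeros(())
    for x, y in list(new_batch) + replay:
        loss = loss + loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    replay_buffer.extend(new_batch)   # remember today's data for future replays
```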
 
many folk have figured out ways to get through life without needing to work their prefrontal cortices much

I had a very frightening experience a few years ago, when I was fighting an addiction: a complete breakdown of communication between my prefrontal cortex (PFC) and my limbic system. It felt exactly like hikikomori -- part of me locked itself away and refused to communicate with my sensible, parental PFC. It was the closest I've ever come to a "split personality". I would "watch myself" as I went through my addictive activities, and "bang on the door", screaming at myself to stop. But the other side of me wouldn't listen.

The cure came from an excellent hypnotherapist: she made me understand that there was no conflict between those two sides of me. Both parts of my brain had my best interests at heart, both of them were trying to push my behaviour in ways that would make me feel better and happier. After that realization, my PFC stopped being such a bully towards my addictive side, and the addict in me started to listen to reason.
 