Bramblethorn
Sleep-deprived
- Joined
- Feb 16, 2012
- Posts
- 19,349
That's roughly where I stand, and as you'd probably recall it's how it worked in my own AI story here: achieving humanlike machine intelligence required integrating an LLM-type model with something better designed for reasoning. If you were to talk to Erato about a "dog", it'd be estimating suitable feature vectors for "dog" that support arithmetic like "dog – woof + meow ~= cat", but it would also be developing a conceptual model to understand the rules of "dog" more rigorously.

Reasoning is intimately tied to task planning, to breaking tasks down into steps. Human brains have distinct areas for this (mainly in the prefrontal cortex, the "sensible, adult" part of the brain that's last to develop, in our early 20s). It's a stretch for pure current LLMs, which are "decoder-only" models, to do reasoning, and I think we're fighting against the architecture in trying to get LLMs to do this. But if, like me, you believe that reasoning is done by human brains, and that artificial neural networks perform similarly to biological ones, then it's clear that reasoning can be done by an AI, although the best architecture for this may not be the one currently used by most LLMs.
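For anyone who hasn't played with word vectors: the "dog – woof + meow ~= cat" arithmetic can be sketched with a toy vocabulary. The three-dimensional vectors below are hand-made purely for illustration (real models learn vectors with hundreds of dimensions from text statistics), but the mechanics — subtract, add, then find the nearest word by cosine similarity — are the same:

```python
import numpy as np

# Toy, hand-made embeddings purely for illustration. Real models learn
# these vectors (with hundreds of dimensions) from co-occurrence data.
vocab = {
    "dog":  np.array([1.0, 1.0, 0.0]),
    "woof": np.array([0.0, 1.0, 0.0]),
    "meow": np.array([0.0, 0.0, 1.0]),
    "cat":  np.array([1.0, 0.0, 1.0]),
    "car":  np.array([-1.0, 0.5, -0.5]),
}

def nearest(query, exclude):
    """Return the vocab word whose vector is most cosine-similar to query."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max((w for w in vocab if w not in exclude),
               key=lambda w: cos(vocab[w], query))

# "dog" - "woof" + "meow" lands nearest to "cat"
q = vocab["dog"] - vocab["woof"] + vocab["meow"]
print(nearest(q, exclude={"dog", "woof", "meow"}))  # -> cat
```

The input words are excluded from the search because the query vector is usually still closest to one of its own operands; that's standard practice in the word-analogy literature too.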
I'm not a meat chauvinist; I see no reason to believe that silicon is fundamentally incapable of producing the same kinds of thought as organic neurons. I do think that trying to brute-force AGI by just doing LLMs more and harder and bigger is likely to be a very expensive dead end.
I also feel like part of the reason the brute-force approach is so popular is that reasoning is hard, it can be fatiguing even for people who enjoy doing it, and many folk have figured out ways to get through life without needing to work their prefrontal cortices much. [snarky inflammatory examples pre-emptively snipped]
I expect you're far more up to date on the literature than me, but I seem to recall seeing some work on incremental learning for LLMs? I'm not aware of any of the big products adopting it yet, though, and I don't know how close it is to being cost-effective.

Some people think that AIs "learn" from their chats the way we and other animals learn from experience, in real time. But they don't, of course. That ability, to adapt and learn instantly, is something I added to "Libby", the AI in my story. It learned from the data coming in through its senses.
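To make the distinction concrete: "learning from experience in real time" computationally means updating the model's weights on each new observation as it arrives (online learning), rather than freezing the weights after a big offline training run the way deployed LLMs do. Here's a minimal sketch with a linear model and online gradient descent — a deliberately tiny stand-in, not how any real LLM product works:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])   # the "world" the model is trying to learn

w = np.zeros(2)   # model weights, adjusted continuously, never frozen
lr = 0.1          # learning rate

# A stream of observations arriving one at a time, like sensory input.
for _ in range(500):
    x = rng.normal(size=2)
    y = true_w @ x               # the "experience": an input and its outcome
    err = w @ x - y              # prediction error on this single example
    w -= lr * err * x            # immediate weight update (online SGD / LMS)

print(np.round(w, 2))            # close to true_w = [2, -1] by now
```

A deployed LLM, by contrast, does the gradient-update loop once during training and then only runs the forward pass; whatever appears to be "remembered" in a chat lives in the context window, not in the weights.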