Hacker News

I wonder if the problem that LLMs solved was not the lack of "intelligence" of logic-driven systems but the lack of a particular intelligence that is so crucial for us to make effective use of the interaction with such tools, namely the ability to actually understand our natural language.

That feature by itself is not enough, but it can be a very effective glue to be used with other components of an intelligent system. The analogy with the human brain would be Broca's area vs. the rest of the brain.

Now, there are open questions about whether the _architecture_ that underpins the LLMs is also good enough to be used as a substrate for other functions, and what's the most effective way of having these different components of the system communicate with each other.

The analogy with the human brain can guide us (as well as lead us astray), in that our brain, like biological systems often do, re-purposes the basic building blocks to create different subsystems.

It's not clear to me at which level we'll find the most effective re-purposable building blocks.

It's easy to try (and people do) to use the top-level LLM system as such a building block: have it produce plans, connect it to external systems that feed information back, and have it iterate again, (ab)using its language processing as an API with the environment.
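The loop described above can be sketched minimally: the model emits a plan as text, an external tool executes it, and the observation is fed back into the prompt as more language. Everything here is invented for illustration — `mock_llm` stands in for a real model call, and the `CALC`/`DONE` convention is a made-up protocol, not any real API:

```python
def mock_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call. A real system would
    # send `prompt` to an LLM; this mock just follows a fixed script.
    if "observation: 4" in prompt:
        return "DONE 4"
    return "CALC 2+2"

def run_tool(command: str) -> str:
    # The "environment": here, a tiny calculator tool.
    expr = command.removeprefix("CALC ").strip()
    return str(eval(expr))  # toy only; never eval untrusted input

def agent_loop(task: str, max_steps: int = 5) -> str:
    prompt = f"task: {task}"
    for _ in range(max_steps):
        action = mock_llm(prompt)           # plan, expressed as text
        if action.startswith("DONE"):
            return action.removeprefix("DONE ").strip()
        observation = run_tool(action)      # environment feedback
        prompt += f"\nobservation: {observation}"  # fed back as language
    return "gave up"

print(agent_loop("what is 2+2?"))  # -> 4
```

The point of the sketch is that the only interface between the model and the environment is natural-language text flowing through the prompt, which is exactly the "language processing as an API" pattern.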

The human analogy of that is when we use external tools to extend our cognitive capacity, like when we do arithmetic using pencil and paper or when we scribble some notes to help us think.

I think this level is useful and real but I wonder if we also need to give more power to some lower levels too.

Granted, some of that "power" may already emerge during the training of the LLMs, but I wonder if some more specialized blocks might enhance the effectiveness.
