On the highway towards Human-Level AI, Large Language Models are an off-ramp.
To clarify:
LLMs that auto-regressively & reactively predict the next word are an off-ramp. They can neither plan nor reason.
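A toy sketch of what "auto-regressively & reactively predict the next word" means mechanically: each step conditions only on the tokens emitted so far and commits to one token at a time, with no lookahead or planning. The bigram table and function names below are illustrative stand-ins, not a real LLM:

```python
# Toy auto-regressive generation loop. A real LLM would replace the bigram
# lookup with a transformer forward pass over the whole context; the control
# flow (predict one token, append, repeat) is the part being illustrated.
def predict_next(context):
    # Hypothetical next-word table; purely illustrative.
    bigram = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}
    return bigram.get(context[-1], "<eos>")

def generate(prompt, max_tokens=5):
    tokens = list(prompt)
    for _ in range(max_tokens):
        nxt = predict_next(tokens)  # reactive: one greedy step, no plan ahead
        if nxt == "<eos>":
            break
        tokens.append(nxt)
    return tokens

# generate(["the"], max_tokens=3) -> ["the", "cat", "sat", "on"]
```

The point of the critique is visible in the loop: nothing in it represents a goal or a model of the world, only a local next-token decision.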

But SSL-pretrained transformers are clearly a component of the solution, within a system that can reason, plan, & learn models of the underlying reality.
My proposal for an architecture that can reason, plan, and learn models of reality.


Why learning from text is insufficient for intelligence.
But this is not to say that LLMs in their current form are not useful. Or fun.
They are.
