Post
Zain Hasan @ZainHasan6 ยท Nov 21, 2023
  • From Twitter

The clearest, crispest explanation I've ever heard of how large language models compress and capture a "world model" in their weights simply by learning to predict the next word accurately, and how the raw power of these base models can then be tamed by teaching them to follow instructions from humans.
