
Distributed Representations: Composition & Superposition

  • Article
  • May 4, 2023
  • #ArtificialIntelligence #Neuroscience
Chris Olah
@ch402
(Author)
transformer-circuits.pub
Distributed representations are a classic idea in both neuroscience and connectionist approaches to AI. We're often asked how our work on superposition relates to it. Since publishing our original paper on superposition, we've had more time to reflect on the relationship between the topics and discuss it with people, and wanted to expand on our earlier discussion in the related work section and share a few thoughts. (We care a lot about superposition and the structure of distributed representations because decomposing representations into independent components is necessary to escape the curse of dimensionality and understand neural networks.)
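The idea can be made concrete with a toy sketch (illustrative only; not code from the article): if features are sparse, a network can store more features than it has dimensions by assigning each feature its own direction. In high dimensions, random unit vectors are nearly orthogonal, so a dot-product readout recovers the active features up to small interference. The sizes and the random-direction construction below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_dims = 100, 64  # more features than dimensions

# Assign each feature a random unit direction; in 64 dimensions these
# directions are nearly (but not exactly) orthogonal.
directions = rng.normal(size=(n_features, n_dims))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

# A sparse input: only a few features are active at once.
active = [3, 17, 41]
activation = directions[active].sum(axis=0)

# Readout: dot the activation with every feature direction. Active
# features score near 1; inactive ones pick up small interference
# from the overlap between directions.
scores = directions @ activation
recovered = np.argsort(scores)[-len(active):]
```

The interference terms are the price of superposition: they shrink as the number of dimensions grows or as the features become sparser, which is why sparsity is central to the superposition picture.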

Mentions
Neel Nanda (at ICLR) @NeelNanda5 · May 4, 2023
  • Post
  • From Twitter
Great new work from @ch402 on the differences between representing activations with directions for independently varying features vs a dense yet structureless code. I really appreciate the commitment to writing great exposition!