
Notes on Existential Risk from Artificial Superintelligence

  • Article
  • Sep 18, 2023
  • #ArtificialIntelligence
Michael Nielsen (@michael_nielsen), Author
michaelnotebook.com

Earlier this year I decided to take a few weeks to figure out what I think about the existential risk from Artificial Superintelligence (ASI xrisk). It turned out to be much more difficult than I thought. After several months of reading, thinking, and talking with people, what follows is a discussion of a few observations arising during this exploration, including:

  • Three ASI xrisk persuasion paradoxes, which make it intrinsically difficult to present strong evidence either for or against ASI xrisk. The lack of such compelling evidence is part of the reason there is such strong disagreement about ASI xrisk, with people often (understandably) relying instead on prior beliefs, self-interest, and tribal reasoning to form their opinions.
  • The alignment dilemma: should someone concerned with xrisk contribute to concrete alignment work, since it's the only way we can hope to build safe systems, or should they refuse such work as contributing to accelerating a bad outcome? This is part of a broader discussion of the accelerationist character of much AI alignment work, which makes capabilities / alignment a false dichotomy.
  • The doomsday question: are there recipes for ruin -- simple, easily executed, immensely destructive recipes that could end humanity, or wreak catastrophic world-changing damage?
  • What bottlenecks are there on ASI speeding up scientific discovery? And, in particular: is it possible for ASI to discover new levels of emergent phenomena, latent in existing theories?

Mentions
Ash Jogalekar @curiouswavefn · Sep 19, 2023
  • Post
  • From Twitter
Will take me a while to digest all the points made in this expansive essay, but really excellent work, Michael.