Thread
I’m scared of AGI. It's confusing how people can be so dismissive of the risks.

I’m an investor in two AGI companies and friends with dozens of researchers working at DeepMind, OpenAI, Anthropic, and Google Brain. Almost all of them are worried.

🧵
Imagine building a new type of nuclear reactor that will make free power.

People are excited, but half of nuclear engineers think there’s at least a 10% chance of an ‘extremely bad’ catastrophe, with safety engineers putting it over 30%.
That’s the situation with AGI. Of 738 machine learning researchers polled, 48% gave at least a 10% chance of an extremely bad outcome.
aiimpacts.org/2022-expert-survey-on-progress-in-ai/
In a poll of 44 people working in AI safety, the average probability given for something terrible happening was about 30%, with some estimates well over 50%.

Remember, Russian roulette is a one-in-six chance: about 17%.
forum.effectivealtruism.org/posts/8CM9vZ2nnQsWJNsHx/existential-risk-from-ai-survey-results
The most uncertain part has been when AGI would happen, but most timelines have accelerated. Geoffrey Hinton, one of the pioneers of deep learning, recently said he can’t rule out AGI in the next 5 years, and that AI wiping out humanity is not inconceivable.

Here’s what others have said on the risk:

On the risk of AGI killing everyone: “So first of all, I will say, I think that there's some chance of that. And it's really important to acknowledge it.” - Sam Altman
“With artificial intelligence we are summoning the demon.” And “Mark my words — A.I. is far more dangerous than nukes” - Elon Musk
"The development of full artificial intelligence could spell the end of the human race." - Stephen Hawking
“without AI alignment, AI systems are reasonably likely to cause an irreversible catastrophe like human extinction.” - Paul Christiano (widely considered one of the top alignment researchers)
No one knows how to describe human values. When we write laws we do our best, but in the end the only reason they work at all is because their meaning is interpreted by other humans with 99.9% identical genes implementing the same basic emotions and cognitive architecture.
What happens when we try to give laws or describe values to alien minds that are vastly smarter than us? It can’t be emphasized enough that we have no idea how to do this. This isn’t just an engineering problem. We have no idea how to do this even in theory.
Meanwhile we are rapidly rushing toward AGI. Microsoft Research released a paper a few days ago titled “Sparks of Artificial General Intelligence: Early experiments with GPT-4”.
arxiv.org/abs/2303.12712
“We demonstrate that [...] GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4's performance is strikingly close to human-level.”
From where I'm sitting, GPT-4 looks like it's two paperclips and a ball of yarn away from being AGI. I don’t think anyone would have predicted a few years ago that a model like GPT-4, trained to predict TEXT, would, with enough compute, be able to do half the things it does.
When I first started reading about AI risk it was a weird niche concern of a small group living in the Bay Area. 10 or 15 years ago I remember telling people I was worried about AI and getting the distinct impression they thought I was a nut.
Slowly, more and more credible people began to admit it was a problem. Eventually Elon read Superintelligence and wrote the tweet that launched AI risk into the mainstream.

My trust in the large AI labs has decreased over time. AFAICT they're starting to engage in exactly the kind of dangerous arms race dynamics they explicitly warned us against from the start.
It seems clear to me that we will see superintelligence in our lifetimes, and not at all clear that we have any reason to be confident that it will go well.

I'm generally the last person to advocate for government intervention, but I think it could be warranted.
I'll close with a blog post from Holden Karnofsky:
www.cold-takes.com/how-governments-can-help-with-the-most-important-century/
Since people seem unable to restrain themselves from the obvious dunk, I should add that the labs I invested in (and most of my friends) are explicitly focused on safety.
For further reading I recommend Holden Karnofsky and Paul Christiano’s blogs.

www.cold-takes.com/most-important-century/

paulfchristiano.com/