Thread
We held the evening slot outside @DeepMind in London, calling for a global moratorium on the development of AI systems more powerful than GPT-4. We had some great discussions with DeepMind employees leaving the building - many share at least some of our concerns. Read on to learn why!
Because Sam Altman, CEO of OpenAI, has written that the "development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity."
Because Geoffrey Hinton, known as "a godfather of AI", has said that AI "is an existential risk" and that "there are very few examples of a more intelligent thing being controlled by a less intelligent thing. It’ll figure out ways of manipulating people to do what it wants."
Because 48% of AI experts responding to a 2022 survey said that the probability of the long-run effect of AI on humanity being "extremely bad (e.g. human extinction)" was 10% or higher.
Because Shane Legg, co-founder of DeepMind, has said "if we can build human level, we can almost certainly scale up to well above human level... We have almost no idea how to deal with this."
Because historian Yuval Noah Harari has written "social media was the first contact between AI and humanity, and humanity lost... [the AI] was sufficient to create a curtain of illusions that increased societal polarization, undermined our mental health and unraveled democracy."
Because AI labs like @DeepMind are under immense pressure from their shareholders to work on capabilities, not safety. Max Tegmark, a machine learning researcher at MIT, has said "without the public pressure, none of them can do it [pause giant AI experiments] alone."
Two (paraphrased!) things that DeepMind employees have said to me this evening:

"I don't agree with everything you say, but I'm glad you're here because we need to talk about safety more."

"I think a global moratorium is a good idea, I just don't know how to make it happen."