Pausing the AI Revolution? With Technologist Jaan Tallinn
Apr 13, 2023
Nathan Labenz dives in with Jaan Tallinn, a technologist, entrepreneur (Kazaa, Skype), and investor (DeepMind and more) whose unique life journey has intersected with some of the most important social and technological events of our collective lifetime. Jaan has since invested in nearly 180 startups, including dozens of AI application-layer companies and a half dozen startup labs focused on fundamental AI research, all in an effort to support the teams he believes are most likely to lead us to AI safety, and to have a seat at the table at organizations he worries might take on too much risk. He has also founded several philanthropic nonprofits, including the Future of Life Institute, which recently published the open letter calling for a six-month pause on training new AI systems. In this discussion, we focused on:
  • The current state of AI development and safety
  • Jaan's expectations for possible economic transformation
  • What catastrophic failure modes worry him most in the near term
  • How big of a bullet we dodged with the training of GPT-4
  • Which organizations really matter for immediate-term pause purposes
  • How AI race dynamics are likely to evolve over the next couple of years
Also, check out the debut of co-host Erik's new long-form interview podcast Upstream, whose guests in the first three episodes were Ezra Klein, Balaji Srinivasan, and Marc Andreessen. This coming season will feature interviews with David Sacks, Katherine Boyle, and more. Subscribe here: /@upstreamwitheriktorenberg

LINKS REFERENCED IN THE EPISODE:
  • Future of Life's open letter: https://futureoflife.org/open-letter/...
  • Eliezer Yudkowsky's TIME article: https://time.com/6266923/ai-eliezer-y...
  • Daniela and Dario Amodei podcast: https://podcasts.apple.com/ie/podcast...
  • Zvi on the pause: https://thezvi.substack.com/p/on-the-...

TIMESTAMPS:
(0:00) Episode Preview
(1:30) Jaan's impressive entrepreneurial career and his role in the recent AI Open Letter
(3:26) AI safety and the Future of Life Institute
(6:55) Jaan's first meeting with Eliezer Yudkowsky and the founding of the Future of Life Institute
(13:00) Future of AI evolution
(15:55) Sponsor: Omneky
(17:20) Jaan's investments in AI companies
(24:22) The emerging danger paradigm
(28:10) Economic transformation with AI
(33:48) AI supervising itself
(35:23) Language models and validation
(40:06) Evolution, useful heuristics, and lack of insight into the selection process
(43:13) Current estimate for life-ending catastrophe
(46:09) Inverse scaling law
(54:20) Our luck given the softness of language models
(56:24) Future of language models
(1:01:00) The Moore's law of mad science
(1:03:02) GPT-5 type project
(1:09:00) The AI race dynamics
(1:11:00) AI alignment with the latest models
(1:14:31) AI research investment and safety
(1:21:00) What a six-month pause buys us
(1:27:01) AI passing the Turing Test
(1:29:33) AI safety and risk
(1:33:18) Responsible AI development
(1:41:20) Neuralink implant technology

TWITTER:
@CogRev_Podcast
@labenz (Nathan)
@eriktorenberg (Erik)

Thank you Omneky for sponsoring The Cognitive Revolution.
Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off.

More show notes and reading material are released in our Substack: https://cognitiverevolution.substack....

Music Credit: OpenAI's Jukebox
The Cognitive Revolution: How AI Changes Everything