Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World

A compelling and clear exploration of the power and peril of advanced artificial intelligence that provides actionable suggestions for all citizens to create a safer future with AI.


We are living in a world of rapid change and technological progress.

Artificial intelligence is poised to be the most significant development to affect our lives now and in the coming years. More powerful than nuclear bombs and as impactful as electricity, fire, and oil rolled into one, advanced AI might bring us untold wonders, but it could also be a threat to our jobs, our relationships, and our place in the world.

Is artificial intelligence dangerous? How does artificial intelligence work? Will artificial intelligence take over?

Uncontrollable uses engaging analogies and relatable examples to summarize AI for beginners, and unpacks AI risk and safety for readers without a technical background.

Uncontrollable examines artificial intelligence as a concept and technology, describes what AI is, how image generators and language models work, and how we don’t fully understand what is happening in these AI systems. It provides evidence to show that artificial superintelligence presents a risk to humanity, and demonstrates that it will be very difficult to understand, control, or align as it rapidly increases in capabilities and becomes more integrated into how we work, live, and play.

We are not prepared.

Yet, we can be. Uncontrollable clearly communicates the urgency to act now and provides concrete suggestions for society and concerned citizens to create a safer future with AI.

Uncontrollable is a first-of-its-kind publication and call to arms to address the defining issue of our time.

354 pages, Paperback

Published November 17, 2023

About the author

Darren McKee

1 book, 6 followers
Darren McKee (MSc, MPA) is an author, speaker, and AI advisor. He has served as a senior policy advisor and policy analyst for over 15 years, and sits on the Board of Advisors for AIGS Canada, the country's leading AI safety and governance network. McKee also hosts the international award-winning podcast The Reality Check, a top 0.5% podcast on Listen Notes with over 4.5 million downloads.

Ratings & Reviews

Community Reviews

5 stars: 24 (77%)
4 stars: 5 (16%)
3 stars: 1 (3%)
2 stars: 1 (3%)
1 star: 0 (0%)

Adam
241 reviews, 12 followers
December 1, 2023
An engaging and accessible investigation of what could very likely be the most important topic of our time. Could anything be more important than saving the world?

I've been concerned about AI safety for about a decade and have come to inform a lot of my beliefs on the topic from conversations with the author, my friend Darren McKee, so it's no surprise that our views align on this! If only alignment were always so easy. I used to be skeptical of the doomsday predictions, but while investigating the topic it became apparent that those who had thought it through had concerns about safety, while those who dismissed them were usually committing some sort of error in thinking. I've taken the topic seriously since then and have found that it's often difficult to convince others to do the same.

Enter Uncontrollable! A great introduction to the topic for someone who has heard about it but doesn't know where to start. A great way to demystify this issue with a very clear explanation of what it is and why you should be concerned. The reader is not persuaded by tricks but instead brought to the inevitable conclusion that we should at the very least be cautiously concerned about the potential achievement of artificial superintelligence in our lifetime, and about how to align it with goals that have humanity's best interests in mind.

Beyond just introducing the topic, this serves as a deep exploration for those experienced with the issue and gives concrete examples of what anyone can do about it. While I've long appreciated the concerns about AI safety, it isn't until recently that I've started to see recommendations for actions that have some validity. This book will not only give you somewhere to start but also encourage you to continue to think about and prepare for this potentially disruptive change to our way of life.

I recommend this to anyone, regardless of your experience or interest level on the topic. If you're into it, you'll love it. If you aren't, you should be learning more about it!
2 reviews
November 24, 2023
A great resource for making the case that advanced artificial intelligence might be dangerous. UNCONTROLLABLE methodically breaks down the argument and uses engaging metaphors to explain each point, building back up to a comprehensive understanding of the issue.
Zarathustra Goertzel
509 reviews, 39 followers
February 25, 2024
Uncontrollable is well-written. Darren writes for laypeople. For example, ASI is defined as a system that performs at expert level or above in all tasks. Basically, he defines AGI as "HLAI" (Human-Level AI) and ASI as "Super-HLAI".

The bulk of the book is summed up in the title: ASI may come before we expect it and we don't know that it won't kill us all. It may be "worse than nukes" and perhaps needs to be regulated at least as carefully, possibly with certificates needed to run models on powerful hardware, etc. The international bans on human cloning are also used as an example of how we can cooperate to put a stop on dangerous technology.

Darren seems to be what people call an "AI Doomer", i.e., someone who thinks the likelihood of ASI causing human extinction is possibly over 10% (and we should act as if it is). He uses metaphors such as: would you use a phone that had a 10% chance of blowing your head off? No. So why is AI different? Because you think the likelihood is much, much lower? Maybe you're wrong. 🙃🤷‍♂️
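
Reduced to arithmetic, the metaphor is an expected-value claim; a minimal sketch with made-up numbers:

    # Made-up numbers, purely to make the metaphor concrete: a small
    # probability of a catastrophic outcome can dominate the calculation.
    p_catastrophe = 0.10        # the "10% chance" in the phone metaphor
    benefit_of_use = 1.0        # value of using the phone/AI (arbitrary units)
    cost_of_catastrophe = 1e6   # value lost if it goes wrong (arbitrary units)

    expected_value = (1 - p_catastrophe) * benefit_of_use - p_catastrophe * cost_of_catastrophe
    print(expected_value)  # about -100000: negative unless p is tiny or the cost is bounded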

I remain unconvinced and the metaphors seem off-base, so it's mainly unpleasant emotional needling in my eyes.

The tone is very polarizing: there is no neutrality. Not acting as if there's a serious risk of absolute doom and doing everything possible to prevent this, including clamping down on AGI tech like nuclear tech to keep it highly controlled, is essentially taking the stance that it's not such a big issue -- and can you justify that claim? My impression is that Darren might choose to ban AGI research if we cannot guarantee it will be safe. I could be wrong, but that's how he comes across in the book.

Some of the advice leaning in the transparency direction is reasonable. People working toward AGI can publish their systems' capabilities and their expectations, and we could even have audits of them.

I don't think focusing on preventing extinction via ASI is an effective approach to developing beneficial AGI while setting up systems to mitigate risks. And I think there are more plausible risks that should get more attention. I see that Darren is well-intentioned and cares a lot, so I hope we can find some way to cooperate productively.

I recommend reading The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma instead if you'd like a concerned yet much, much, much more balanced take that focuses on more plausible concerns.

On the topic, Human Compatible: Artificial Intelligence and the Problem of Control also provides a balanced treatment of the topic with grounded suggestions.
Gatlen
28 reviews
March 29, 2024
As an undergraduate with aspirations in the AI safety field, I first encountered Darren McKee at a conference where he delivered a compelling talk on AI safety. I note this because Darren is a credible author, well established within the AI safety community, with the authority to speak on the intricacies of the field.

“Uncontrollable” emerges as a stellar, accessible overview of the perils associated with superintelligence, underlining the urgency with which these dangers should be addressed. The book connects recent events to the broader discourse on AI risk, which is extremely salient context. Darren's adept storytelling and use of concrete examples transformed abstract concepts into relatable insights, enriching my understanding of AI safety and enabling me to converse more effectively on the topic with peers.

Distinctively, Darren’s writing not only illuminates the risks but also inspires with its optimism, offering readers actionable guidance to contribute positively to the future of AI safety.

Disclaimer: Although I received a complimentary audiobook from the author, my involvement in AI safety makes me more critical of literature on the topic.
1 review
December 7, 2023
Uncontrollable is an entertaining and thoughtful analysis of AI and its impacts, with more than enough novel insight to be a worthwhile read for other armchair experts.

More importantly, it's far and away the best overview available for guiding an intelligent non-expert to grapple with the idea of superintelligence. While I think it makes for a good and detailed overview for someone already steeped in the field, its greatest strength is as an introduction to the concept of superintelligent AI, and the risks to humanity that it might pose - essentially, as the solution to the problem that this critical topic strikes most people as too "sci-fi" to be worth investing serious thought in.

I strongly recommend this book to anyone interested in the field of AI generally, and AI risk specifically, but I recommend it even more strongly as a gift for that special someone on your list who really should be spending more time worrying about a paperclip-filled future :)
Mike Lawrence
2 reviews
November 22, 2023
This is a great book. It's a thorough yet non-technical exploration of why citizens and policymakers should be concerned by the rapid development of AI capabilities in recent years.

I thought the writing style was very engaging and accessible, and the author clearly discussed the core arguments for why AI poses a different kind of risk than any past human endeavour.

The book discusses harms that have already occurred and provides thorough reasoning for the concern that even greater harms likely await us should we fail to anticipate and mitigate them.

I really like that the author provided concrete suggestions for public action and policy to guide the development of AI away from the current naive rush toward risk and towards our common goals of universal human flourishing.
Zoé
7 reviews
November 23, 2023
This book lays out arguments that clearly explain why and how future AI systems will pose extreme risks, in the most accessible way for a non-technical audience. It's easy to follow and enjoyable to read or listen to.
1 review
November 28, 2023
Absolutely a must-read if you haven't read any books on the danger of superintelligence! Easy to read and understand!!!
Simon Ziswiler
30 reviews, 5 followers
December 22, 2023
"AI will probably, most likely lead to the end of the world, but in the meantime, there'll be great companies." - Sam Altman of OpenAI

As we approach 2024, AI capabilities continue advancing rapidly, with models like GPT-4 demonstrating eerily human-like language proficiency. In his prescient new book, Uncontrollable, Darren McKee makes a compelling case that artificial general intelligence (AGI) may arrive sooner than we realize, bringing with it the potential for uncontrollable artificial superintelligence (ASI).

McKee explains complex AI concepts like transformers and foundation models in straightforward terms. These architectures, combined with massive datasets and compute power, have enabled explosive progress in models like GPT-4. However, as the author describes, today's systems still lack general reasoning, common sense, and transfer learning abilities.

McKee points to several drivers of this progress:
*) Hardware improvements like faster processors and parallel computing enable quicker training of AI models, allowing more iterations and experiments.
*) Larger labeled datasets improve performance, and techniques like self-supervised learning make huge unlabeled datasets usable.
*) Algorithms and architectures like transformers and foundation models prove very effective at many AI tasks (a sketch of the core idea follows below).
*) Commercial interests are pouring resources into AI research, and competition drives progress.
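
For readers wondering what those transformers actually compute, here is a minimal sketch of scaled dot-product attention, the operation at the heart of the architecture (my illustration, not code from the book; real models add learned projections, multiple heads, and many stacked layers):

    # Minimal scaled dot-product attention in NumPy (illustrative sketch).
    import numpy as np

    def attention(Q, K, V):
        # Each query scores every key; softmax turns the scores into
        # weights that mix the value vectors.
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V

    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8))      # 4 tokens, 8-dimensional embeddings
    print(attention(x, x, x).shape)  # (4, 8): each token becomes a weighted mix of all tokens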

Once AGI is created, an intelligence explosion could quickly follow, rapidly yielding ASI surpassing human-level cognitive abilities. But aligning the values and goals of such an ASI with human preferences appears extremely challenging. For example, an ASI tasked with maximally efficient paperclip manufacturing could logically conclude that converting all matter on Earth into paperclips is the best solution. This illustrates the difficulty of specifying complete, coherent, and robust goals for an advanced AI system.
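
The paperclip example can be reduced to a few lines of code; here is a toy sketch (my own, with made-up numbers) of how an optimizer exploits an objective that omits what we actually care about:

    # Hypothetical illustration of objective misspecification, not from the book.
    plans = {
        "run one factory": {"paperclips": 1e6, "world_intact": True},
        "convert all matter": {"paperclips": 1e30, "world_intact": False},
    }

    def misspecified_utility(outcome):
        # The stated goal mentions only paperclips; nothing here says
        # "preserve the world", so that fact never enters the decision.
        return outcome["paperclips"]

    best = max(plans, key=lambda name: misspecified_utility(plans[name]))
    print(best)  # -> convert all matter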

Researchers have been exploring solutions to the alignment problem for decades, with foundational work by figures like I.J. Good in the 1960s and Nick Bostrom's influential writings in the early 2000s. But as McKee convincingly argues, proposed techniques like capability control methods, utility functions, and corrigibility schemes all have limitations. There are no easy answers, and solving alignment in a way that accounts for the fluidity and generality of human values remains an open problem.

McKee makes a sober, well-reasoned case that developing safe AGI is crucial and urgent. I highly recommend this thoughtful book to anyone concerned about humanity's future in an age of accelerating AI capabilities.
Jordan
92 reviews, 9 followers
February 6, 2024
Whether it goes fantastically or horribly, advanced AI will probably be the single most transformative development in the history of humanity.

Despite the cover, this is a sober and current assessment of the coming risks from advanced artificial intelligence. McKee discusses the power of intelligence, different categories of AI, and the implications of deployment. In short, he argues that artificial superintelligence (ASI) is likely to arrive within two decades due to exponential progress. The plausibility of that scenario implies that we should prepare for it, rather than hoping that we get lucky with a longer timeline. ASI would be extremely powerful, the way that we are extremely powerful compared to mice. We don't know how to align it with our preferences - present attempts, as documented by Brian Christian, have revealed that this is an extremely difficult problem. We probably cannot control it if it is unaligned. McKee finishes appropriately with an optimistic call to action, with principles for safe development, international treaties and individual pathways.

Uncontrollable is remarkably focused and accessible. Compared to the other excellent books on this topic such as Human Compatible: Artificial Intelligence and the Problem of Control, The Alignment Problem: Machine Learning and Human Values and Superintelligence: Paths, Dangers, Strategies, McKee is at the top of my recommendations.
David W. W.
Author, 13 books, 39 followers
December 31, 2023
Recommended!

I've just finished listening to this book. It has become my new top recommendation for people looking for a clear, respectful, comprehensive analysis of the risks and issues associated with Artificial Superintelligence.

It gets its top marks from me for:
*) Explaining terms and concepts clearly and accessibly as it progresses
*) Straightforwardly refuting many examples of the wishful but dangerous thinking that surround this field
*) Being remarkably up-to-date
*) Providing good reasons for hope as well as for being concerned
*) Setting out a programme of practical steps forward.

I strongly recommend it, even to people like me who already think they know plenty about AI :-)
Drew
2 reviews, 3 followers
November 22, 2023
My friends keep asking me - how do I not get left behind by AI? I now finally have a place to send them. This book is a masterpiece at explaining the high-level trends in AI that will be robust five or even ten years from now.

People consistently miss exponential trends. But if you look at them from first principles, you'll see why. The markets have not priced in AGI or potential superintelligence.
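
As a quick illustration (my numbers, not the book's): a capability that doubles every two years grows about 32x in a decade, where linear intuition expects something closer to 5x.

    # Five doublings over ten years: exponential growth outruns linear intuition.
    capability = 1.0
    for year in range(0, 10, 2):
        capability *= 2
    print(capability)  # 32.0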

Humanity as a whole is basically in February 2020 when it comes to AI, and McKee's book is a good step toward actually understanding what's coming next.
Jari Pirhonen
412 reviews, 13 followers
May 20, 2024
Good arguments about possible artificial superintelligence (ASI) threats. The author compares ASI to nuclear weapons as a worst-case scenario. The main message is that although the probability of extreme ASI threats is low, the potential consequences for humanity could be disastrous. Therefore, we need to prepare for the worst now.
1 review, 2 followers
November 26, 2023
Easy read about a hard topic.

Highly recommend it as an introduction to the most important problem of our times.
