
The Precipice

This urgent and eye-opening book makes the case that protecting humanity's future is the central challenge of our time.

If all goes well, human history is just beginning. Our species could survive for billions of years - enough time to end disease, poverty, and injustice, and to flourish in ways unimaginable today. But this vast future is at risk. With the advent of nuclear weapons, humanity entered a new age, where we face existential catastrophes - those from which we could never come back. Since then, these dangers have only multiplied, from climate change to engineered pathogens and artificial intelligence. If we do not act fast to reach a place of safety, it will soon be too late.

Drawing on over a decade of research, The Precipice explores the cutting-edge science behind the risks we face. It puts them in the context of the greater story of humanity: showing how ending these risks is among the most pressing moral issues of our time. And it points the way forward, to the actions and strategies that can safeguard humanity.

An Oxford philosopher committed to putting ideas into action, Toby Ord has advised the US National Intelligence Council, the UK Prime Minister's Office, and the World Bank on the biggest questions facing humanity. In The Precipice, he offers a startling reassessment of human history, the future we are failing to protect, and the steps we must take to ensure that our generation is not the last.

480 pages, Hardcover

First published March 5, 2020


About the author

Toby Ord

3 books · 234 followers

Ratings & Reviews



Community Reviews

5 stars: 1,425 (36%)
4 stars: 1,454 (37%)
3 stars: 790 (20%)
2 stars: 199 (5%)
1 star: 46 (1%)
Stefan Schubert
19 reviews · 88 followers
February 17, 2020
This book conveys the Future of Humanity Institute world-view; the result of 15 years of research. It's a whole new way of looking at the world and what our priorities should be. The book is meticulously argued, rich in facts and ideas, surprisingly accessible, and beautifully written.
Matthew
Author, 1 book · 41 followers
March 14, 2021
I think I missed the point of this book. For a book that seems to take itself seriously and is on a serious topic - existential risk - there’s no science behind any of it, and all of it could be re-written as ‘problems humanity has to deal with or things may get bad.’ But we already know that. Nothing in here is new - literally not one page of it. Moreover, his solutions contradict themselves.

He says nukes are scary and we should have fewer of them - every kid over the age of 10 agrees. He says climate change is an existential risk, which all serious people know. He says AI might be a risk in the future. Thank you, I’ve read the news more than once in the last 10 years so I am well aware. That’s it - literally, those are the three risks, with some pandemic stuff and others thrown in for seasoning. He then jumps around across timelines ranging from centuries out to hundreds of thousands of years out. So... ?

There’s tons of fluff. Try this sentence out for size on page 45: “People matter equally regardless of their temporal location.” I can’t help but ask: how and why? Under what philosophical approach? Do we value the living more than the dead, or do we live for those still to come? And what is that concept rooted in, and how does it define good and bad across time? I’d say I’m being dense, but you’re the Oxford philosophy professor, so if you don’t know, I don’t know who else to ask.

Every sentence that does seem profound starts with ‘suppose there is a 20% chance of X’ (say, the sun exploding). Ok - but how was that 20% number calculated? Never explained. But that doesn’t stop him from doing some basic math along the lines of ‘if that risk increases threefold, now we have higher risk.’ Again - what?

Even the math he does include (finally) on a single page (178) is not helpful at all. “Suppose F(sq) = a status quo value of great power competition.” I’ll pay anyone $100 if you can send me a note with a value in that formula that is mathematically or logically deduced and can be defended.

The entire book is devoid of units of measurement, so we get things that appear deep until you think about them for more than 10 seconds: “cost effectiveness = importance x tractability x neglectedness.” Mmm, interesting. Wait, how can I actually apply that idea to anything at all? How much neglectedness is a lot? That doesn’t make sense, except perhaps as a purely theoretical idea in a classroom. He acknowledges as much: “even though it is very difficult to assign precise numbers to these dimensions...” One might say impossible with any sense of practical applicability. Which is the summary concept of this book.
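For context, the "importance x tractability x neglectedness" formula quoted above is usually applied with rough relative scores rather than absolute units, which partly answers the units complaint without resolving it. A minimal sketch of that style of scoring in Python; the 0-10 scale and every number below are invented for illustration, not taken from the book:

```python
# Hypothetical sketch of the "importance x tractability x neglectedness"
# heuristic. Scores are coarse relative judgments (invented here), so the
# result only supports comparisons between causes, not absolute
# cost-effectiveness claims.

def itn_score(importance: float, tractability: float, neglectedness: float) -> float:
    """Relative priority score for a cause area (unitless)."""
    return importance * tractability * neglectedness

# Made-up scores on an arbitrary 0-10 scale for two hypothetical causes:
causes = {
    "cause_A": itn_score(importance=9, tractability=3, neglectedness=8),
    "cause_B": itn_score(importance=6, tractability=7, neglectedness=2),
}

for name, score in sorted(causes.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score}")  # higher score = relatively more promising
```

Because the scores are unitless, only the ranking carries information - which is precisely the limitation the reviewer is pointing at.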

Lastly, what tipped it from 2 stars to 1 is his quip about Epicurus. One of my least favorite tics of intellectuals is coming up with one simple thing that they think proves the ancient minds were totally wrong. Page 48: “Epicurus argued that your death cannot be bad for you, since you are not there to experience it. What this neglects is that if I step out into traffic and die, my life as a whole will be shorter and therefore worse.” Oh, Epicurus didn’t think about dying meaning a shorter life! Except he did, and you know it, as did all the Stoics, who enjoin us not to think about life as time spanned but as experiences, life ‘lived’ (alternatively, they proposed a different unit of measurement than most of us use reflexively, something I desperately wish the author would do). Which is why every intelligent person throughout history has essentially said something along the lines of ‘a man may have lived long, yet lived little.’ But not this author - he proved those old people are silly with his traffic experiment.
Fin Moorhouse
73 reviews · 108 followers
November 5, 2023
The book discusses the risks of catastrophic events that destroy all or nearly all of humanity’s potential. There are many of them, including but not limited to the Hollywood scenarios that occur to most people: asteroids, supervolcanoes, pandemics (natural and human-engineered), dystopian political ‘lock-in’, runaway climate scenarios, and unaligned artificial general intelligence. The overall risk of an existential catastrophe this century? Roughly one in six, the author guesses: Russian roulette. Clearly, mitigating existential risk is nowhere near being treated as the overwhelmingly important global priority it is: not in our political institutions, nor in popular consciousness. Anyway, it’s excellent - highly recommended.

It was also full of some fairly alarming and/or surprising facts. So in place of a full review, here are some highlights:

The Biological Weapons Convention is the international body responsible for the continued prohibition of bioweapons, which on Toby’s estimate pose a greater existential risk by an order of magnitude than the combined risk from nuclear war, runaway climate change, asteroids, supervolcanoes, and naturally arising pandemics. Its annual budget is less than that of the average McDonald’s restaurant. (p.57)

Remember that quotation attributed to Einstein, that “If the bee disappeared off the surface of the globe then man would only have four years of life left”? Firstly, it’s not true - a recent review found that the loss of all pollinators would create a 3 to 8 percent reduction in global crop production. Secondly, Einstein never said it. (p.118)

Technological progress is really hard to predict - “One night in 1933, the world’s pre-eminent expert on atomic science, Ernest Rutherford, declared the idea of harnessing atomic energy to be ‘moonshine’. And the very next morning Leo Szilard discovered the idea of the chain reaction. In 1939, Enrico Fermi told Szilard the chain reaction was but a ‘remote possibility’, and four years later Fermi was personally overseeing the world’s first nuclear reactor.” Furthermore, at the start of the 20th century, many thought heavier-than-air human flight to be impossible. Wilbur Wright was somewhat more optimistic, guessing it to be at least 50 years away - two years before he invented it. (p.121)

The UK has four levels of ‘biosafety’. The highest level, ‘BSL-4’, is reserved for research involving the most dangerous and infectious pathogens. The 2001 outbreak of foot-and-mouth disease caused economic damages totaling £8 billion and the slaughter of some 6 million animals to halt its spread. Six years later, a lab was researching the disease under BSL-4 security. Another outbreak that year was traced back to a *leaky pipe* in the lab, spreading the disease into the groundwater. “After an investigation, the lab’s license was renewed—only for another leak to occur two weeks later.” (p.130)

Nuclear near misses have been *terrifyingly* frequent. The book lists more than ten examples. Here are a few:

1958. “A B-47 bomber accidentally dropped a nuclear bomb over South Carolina, landing in someone’s garden and destroying their house.” (the warhead remained in the plane)

27th October 1962. Four nuclear submarines had been sent by the Soviet Union to support their military operations in Cuba during the height of the Missile Crisis. A US warship detected one of these submarines and tried to force it to surface by using depth charges as ‘warning shots’. The submarine had been underwater for days and had lost radio contact for as long - and with it, information about the situation unfolding above. Moreover, being designed for the Arctic, the submarine was breaking down in the tropical waters. Temperatures ranged from 45°C to 60°C as carbon dioxide began to accumulate. Crew members were falling unconscious. The captain, Valentin Savitsky, guessed from the bombardment that war had broken out. He ordered his crew to prepare the submarine’s nuclear weapon. “On any of the other submarines, this would have sufficed to launch their nuclear weapon. But by the purest luck, submarine B-59 carried the commander of the entire flotilla… [who] refused to grant it. Instead, he talked Captain Savitsky down from his rage.” (p.4)

28th October 1962. The very next day, a US base on a US-occupied Japanese island received by radio an order to launch its nuclear arsenal. “All three parts of the coded order matched the base’s own codes, confirming that it was a genuine order to launch their nuclear weapons.” Captain William Bassett took command and became responsible for executing the order. But he grew suspicious - a pre-emptive strike should already have hit them, and the threat level was set to DEFCON 2 rather than the highest level of DEFCON 1. So he radioed the Missile Operations Centre to check, and received the very same order. A lieutenant in charge of a different launch site told Bassett he had no right to stop the launch given that the order had been repeated. “In response, Bassett ordered two airmen from an adjacent launch site to run through the underground tunnel to the site where the missiles were being launched, with orders to shoot the lieutenant if he continued without either Bassett’s agreement or a declaration of DEFCON 1”. Bassett then called the Missile Operations Centre again, which this time issued an order to stand down. This story is still disputed and was only made public in 2015.

26 September 1983. Just after midnight, the Soviet early-warning system designed to indicate nuclear launches from the United States showed five ICBMs heading towards Russia. The duty officer, Stanislav Petrov, was under orders to report such a warning to his superiors, who in turn were instructed to retaliate in kind with immediate effect. “For five tense minutes he considered the case, then despite his remaining uncertainty, reported it to his commanders as a false alarm.” (p.96)

Norman Borlaug was an American agronomist who developed new high-yield and disease-resistant varieties of wheat during the so-called ‘Green Revolution’. He is often credited with having saved more lives than any other person who ever lived. Estimates range from 260 million to over a billion lives saved. (p.97)

Finally, the total amount of money expended annually on researching and mitigating existential risks is dwarfed by the amount of money spent annually on ice-cream by about two orders of magnitude.

Again, strongly recommended; but not remotely comforting.

Alexander
68 reviews · 61 followers
October 21, 2021
A beautiful and profoundly inspiring book.

Toby Ord is a philosopher who completed his graduate studies at Oxford under the supervision of Derek Parfit. Parfit is widely considered one of the most important and influential moral philosophers of the late 20th and early 21st centuries. Parfit's philosophical interests primarily centred around identity, rationality, and morality. Ord was a founding figure in Effective Altruism and currently serves as a Senior Research Fellow at the Future of Humanity Institute.

Ord is interested in how we can do the most good. He started out being focused on alleviating poverty but then switched his focus to existential risk reduction out of a conviction that the latter is more impactful. He is nevertheless still involved with organisations working on poverty alleviation. In one of his interviews, Ord argued against "criticising each other for working on the second most important thing."

Existential risk (x-risk) sounds like a guilty pleasure for philosophers to contemplate as they spend their lives up in the clouds, detached from the everyday worries of ordinary folk. The lives of those suffering elsewhere in the world today seem to matter very little to most of us, let alone the lives of yet-to-be-born future generations. However, with the right rhetoric and argument, this book compels the reader to take the x-risks facing humanity seriously. Ord's beautiful writing meticulously argues for our duty towards our ancestors and our heirs, the many lives not with us today.

This book manages to achieve the goals it sets out for itself without resorting to fear-mongering, despite the nature of its subject matter. It is lucid and poetic. The Precipice continues to be formative in shaping my worldview.

Morality is uncertain not only because our knowledge about the world is probabilistic and incomplete but also because we are uncertain about what we ought, morally, to do given the diversity of moral theories. Even with complete epistemic and ontological certainty about the world, moral uncertainty would still remain.

Despite a high degree of moral uncertainty and a wide range of possible moral theories, there are still certain actions that seem highly valuable under any theory. X-risk reduction is one such action. X-risk reduction has the potential of becoming a uniting common cause. No matter what one values, one ought to want that value to continue being realised rather than come to a complete halt. (This argument doesn't work for everyone; e.g., an antinatalist would disagree.)

Toby Ord draws insights from various fields other than philosophy to construct his arguments - the sciences, computing, history, law. Ord argues that we live in an age of heightened x-risk due to such powerful technologies as artificial intelligence, biotechnology and biowarfare, and nuclear weapons, as well as climate change. These anthropogenic risks are on top of the natural risks that exist regardless of technology, such as naturally arising pandemics, supervolcanoes, asteroids and comets.

Ord assigns best guesses to the likelihood of certain risks resulting in an existential catastrophe in the next 100 years. He sets a probability of 1 in 10 for artificial intelligence, 1 in 30 for engineered pandemics, 1 in 1,000 for climate change, and 1 in 1,000 for nuclear war. Total x-risk is estimated at a probability of 1 in 6, which accounts for anthropogenic and natural risks as well as unknown unknowns.
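For a sense of how such per-risk numbers relate to a total, one can naively treat them as independent and combine them. A quick sketch in Python; the independence assumption is added here for illustration, and Ord's 1-in-6 total is a considered judgment (including unknown risks) rather than this exact product:

```python
# Naive combination of the per-risk estimates quoted above, treated as if
# independent. The independence assumption is for illustration only; Ord's
# own 1-in-6 total also reflects judgment about overlap and unknown risks.

risks = {
    "unaligned AI":         1 / 10,
    "engineered pandemics": 1 / 30,
    "climate change":       1 / 1000,
    "nuclear war":          1 / 1000,
}

p_no_catastrophe = 1.0
for p in risks.values():
    p_no_catastrophe *= 1 - p  # survive each risk independently

p_any = 1 - p_no_catastrophe
print(f"combined: ~1 in {1 / p_any:.0f}")  # ~1 in 8 from these four alone
```

The exercise mostly shows that the total is dominated by the largest single risk, AI, which is why the estimated total sits so close to the AI number.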

Despite these high probabilities, Ord is not alarmist in his writing. He is calm and reassuring. Ord spends a fair portion of the book refuting bad arguments for extinction. He argues that it would be difficult for complete extinction to occur because research scientists in Antarctica and military personnel aboard nuclear submarines will likely survive even in the most apocalyptic scenarios we can imagine.

International cooperation is a major theme in this book and I see it as a centrepiece of x-risk reduction. How will we get such culturally diverse groups of tribalistic mammals to listen to one another, seek to understand and agree on a common cause and work together towards its achievement?

Ord speaks of The Long Reflection. Once we’ve made it past The Precipice, where will we want humanity to go? What will we want to accomplish in the cosmos? How do we want to go about continuing the story?

Ord argues that if we wanted to work on reducing x-risks, then we shouldn't try to be too "meta" about it. He argues that it is more effective to work on x-risk reduction directly. If one wants to help but cannot work on x-risk reduction directly, then the next best alternative is to donate to organisations that are working on x-risk reduction. Note that Ord is not saying that you ought to work on x-risk reduction. He is simply offering his views on how to do it effectively if you do want to work on the problem.

The book has 480 pages, but the main text is only about 250 pages, and the rest is notes, references, and appendices. I appreciated this and found that this decision made the book easier to read. If I wanted to get into the gnarly logical arguments, then I could refer to the notes at the end of the book. Ord doesn't burden the reader with overwhelming detail.

I am hopeful. Life will do its very best to continue its existence in the universe.
Jan
3 reviews · 24 followers
March 5, 2020
The idea of the whole existence of humanity being threatened got so much attention in sci-fi that for many people it's somewhere in the vicinity of "aliens landing", if not "zombie apocalypse".

Toby Ord convincingly argues that this is not the case: the chances that humanity may go extinct in the near future are not that small. Actually, I think the likelihood of dying because of some global catastrophe is larger than, e.g., the risk of dying in a traffic accident. Taking this seriously could be quite a perspective-changing event.
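That comparison is easy to sanity-check on the back of an envelope. A rough sketch; the 80-year lifespan and the roughly 1-in-100 lifetime figure for US road deaths are coarse assumptions added here for illustration:

```python
# Rough check of the claim above. All inputs are coarse assumptions:
# Ord's 1-in-6 covers existential catastrophe over the next century, and
# ~1 in 100 is a commonly cited US lifetime road-death figure.

p_catastrophe_century = 1 / 6   # Ord's total existential-risk estimate
lifetime_years = 80             # assumed lifespan
# crude pro-rating of a per-century risk onto one lifetime:
p_catastrophe_lifetime = p_catastrophe_century * (lifetime_years / 100)

p_traffic_lifetime = 1 / 100    # rough lifetime odds of a fatal road accident

print(f"global catastrophe within a lifetime: ~{p_catastrophe_lifetime:.0%}")
print(f"traffic accident within a lifetime:   ~{p_traffic_lifetime:.0%}")
# ~13% vs ~1%: an order of magnitude apart, consistent with the claim.
```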

What's also striking is how tractable it is to do something about some of the risk scenarios.

Overall highly recommended.
Jake
239 reviews · 49 followers
April 26, 2020
The following excerpt from Toby Ord's book gives his estimated probabilities that within the next 100 years our entire species will go extinct, or our civilization will collapse to an irreparable degree:

“Existential catastrophe via:

Asteroid or comet impact ∼ 1 in 1,000,000
Supervolcanic eruption ∼ 1 in 10,000
Stellar explosion ∼ 1 in 1,000,000,000
Total natural risk ∼ 1 in 10,000
Nuclear war ∼ 1 in 1,000
Climate change ∼ 1 in 1,000
Other environmental damage ∼ 1 in 1,000
“Naturally” arising pandemics ∼ 1 in 10,000
Engineered pandemics ∼ 1 in 30
Unaligned artificial intelligence ∼ 1 in 10
Unforeseen anthropogenic risks ∼ 1 in 30
Other anthropogenic risks ∼ 1 in 50
Total anthropogenic risk ∼ 1 in 6
Total existential risk ∼ 1 in 6”

Toby Ord. “The Precipice.”


The appearance of covid-19 is the first time in my living memory that a single event has created massive seismic ripples across the world stage. It has stunted economies and led to massive political fights, misinformation campaigns, and a rise in appreciation of authoritarianism, and at the time of this writing it has taken nearly a quarter of a million lives. We are, as can be seen here, still vulnerable to pestilence. Of course, we must be aware that these vulnerabilities, this unprotected underbelly of our global societies, leave us susceptible to many more dangers than covid-19. The Precipice is a great study of these dangers.
This book continues the tradition of viewing humanity in the history of deep time - from its inception some 200,000 years ago in Africa, among arboreal apes and other hominoids, through to its future. It carries on the tradition popularly expressed in the books of Yuval Harari and others - arguably Sagan, Tyson, Asimov, and at times Smil - who take a grand cosmic view of our species. Ord describes a bit of our history and soon moves into an explanation of our potential futures. These futures range from the total annihilation of the species to one of his musings early in the book: “The future of a prosperous humanity is extraordinarily bright”. This schizoid perspective stems from his reflection on what he sees as our prospects. He joins the popular trend among futurists who hold that now is the most important time in history, because we are temporally close to a massive list of breakthrough innovations, from biotech in CRISPR, deep brain stimulation and prosthetics, to robotics and A.I. We are at the infancy of these technologies, and further, we are at the infancy of our wisdom in handling very advanced tech. He advances his thesis by discussing our ever-increasing ability to harness power, which naturally brings to mind Smil’s grand text: https://www.goodreads.com/book/show/3....
All in all the thesis stands: we are at a point in history that he refers to as the Precipice - a point where, given our technological progress, we can choose to prevent ourselves from falling into massive disaster, or we can trudge forward ignoring all possible signs of danger. If you look at the quotation from the book above, you will see the many possible ways in which we as a species could meet an existential risk, which can strictly be defined as either our extinction or a disaster which sends the few surviving members of our species into the technological equivalent of a dark age.
Now, one may say that is a bit melodramatic, but he makes the case powerfully. The moment our cosmologically young species reached this point in history came shortly after Rutherford called the liberation of energy from the atom moonshine, and after Einstein and Szilard sent their famous letter to Roosevelt. It was the first test of the atomic bomb. At that point in history it became clear that we as a species had given ourselves a distinct ability to bring about our own destruction. Or, to quote the oft-mentioned line mumbled by the late leader of the Manhattan Project, Oppenheimer, upon reflecting on the bomb’s creation: “Now I am become Death, the destroyer of worlds”. The proverbial doomsday clock moved closer to midnight at the moment of that test. And of course, beyond Hiroshima and Nagasaki, as nuclear information spread and the curbing of nuclear proliferation failed, certain moments made it clear that our species was in danger. First, the story the book opens with:
https://en.m.wikipedia.org/wiki/1983_...

And of course the Cuban Missile Crisis, which some have said we survived not by the genius of Kennedy, nor by the theory of MAD acting as a deterrent, but by dumb luck.
Sadly, it is not only nuclear missiles which could cause an issue but also, in his eyes, the possibility of an engineered pandemic (no, I’m not endorsing the idea that covid-19 was built in a lab), climate change, a great war between nations, and, possibly most frighteningly, A.I. alignment - in short, the idea that A.I. may be built with the wrong initial programming to function well with humanity. This is best expressed in the following book by Stuart Russell: https://www.goodreads.com/en/book/sho...
Overall, in his estimation, our greatest danger is ourselves.

I should end this review by admitting that despite the grim nature of the topic, Ord was actually quite optimistic. Like in the following two books written by Pinker:
https://www.goodreads.com/book/show/1...
https://www.goodreads.com/book/show/3...
…Ord discusses what appears to be almost a moral compass of humanity’s improvement, arguing that by most metrics humanity has improved. He then extends the same idea forward, saying that perhaps we can save ourselves before it is too late and come up with the social infrastructure to handle all risks to our species, natural and artificial, nuclear and asteroid, by simply working together and focusing on the right things - just as, in his eyes, we have worked together to abolish a great deal of violence and infectious disease, increase access to drinking water, decrease child mortality, etc.
He states that existential risk is a fairly new issue, and that for most of human history it was not something people had to worry about. In his eyes, its study began with early warnings such as the letter from Bertrand Russell and Einstein about nuclear weapons: https://en.wikipedia.org/wiki/Russell....
He hopes that at some point humanity may gain wisdom to match our great intellect, and that perhaps we can save ourselves from the oncoming peril.
I should add that this is quite a weird book, in that he does not leave things hypothetical but recommends specific websites to help humanity prepare for a possible hellish storm:
effectivealtruism.org and 80000hours.org
He also mentions possible solutions. Here are some:

Better communication between great nations may decrease the risk of nuclear war (like the red telephone in the Cold War)
Create a world government, or a third party to act as a watchdog for threats
Fix institutions related to risk
Fix the WHO and its ability to respond to pandemics (yes, he said this before covid)
Screen DNA synthesis for dangerous pathogens

He then goes on to mention jobs that the reader can pursue. If you’re into computer science, you can study A.I. to make sure things don’t go to shit, and if you are in bio/medicine you can switch over to studying pandemics (I know...).

Overall, this was a brilliantly researched book, and I highly recommend everyone read it.
Ryan Boissonneault
201 reviews · 2,151 followers
January 8, 2021
Humanity is currently living in its infancy; barely 200,000 years old, our brief span of time on earth pales in comparison both to the billions of years of the earth’s and universe’s history that came before us and to the hundreds of millions of years (or longer) that potentially lie ahead of us.

At the same time, while humanity is relatively young—with the potential for its greatest discoveries, inventions, and moral progress ahead of it—we have also reached a threshold where we have achieved the potential to both cause—and prevent—our own extinction.

In The Precipice, Oxford philosopher Toby Ord synthesizes over a decade of research to argue for the urgency of preparing for and reducing the risk of existential catastrophe, defined as any event—natural or human—that stands to wipe out humanity’s long-term future potential either through the extinction of the species or through unrecoverable civilizational collapse. These existential catastrophes include events such as asteroid/comet strikes, volcanic eruptions, climate change, nuclear war, pandemics, unaligned artificial intelligence (AI), and more.

We predictably underestimate these risks, because, by definition, they have never occurred and can never have occurred if we find ourselves in a position to be discussing them (survivorship bias). This creates the impression that they will never happen, or that the probability of them happening is close to zero. But according to Ord, not only are existential catastrophes less remote than we might think, we’ve actually been close to facing them in the recent past. The various close calls of either deliberate or accidental nuclear war—during the Cuban Missile Crisis, for instance—remind us just how close we actually came.

So these risks, while seemingly remote, are nevertheless real and deserve far more attention than they are given. As Ord points out, the world is currently so oblivious to these risks that it spends more money every year on things like ice cream than on the research that could help us understand the events that could end our very existence (e.g., the Biological Weapons Convention has four employees and an annual budget of $1.4 million, which is less than that of the average McDonald’s restaurant). This general underestimation of risk - along with the undervaluation of public goods by the market - could very well be our undoing.

Having outlined the risks and described what’s at stake, Ord proceeds to present the moral case for the prevention of existential catastrophe, which should be fairly clear - the prevention of billions of deaths and the protection of the future potential of humanity. Although this is difficult to argue against, it does raise the following question: Why should we care about future generations?

As Ord points out, we should care about those separated by time in the same way we care about those separated by space: just as geographical location does not make a human life more or less valuable, neither should location in time. Additionally, we have benefited in numerous ways from the work, dedication, and innovations of our ancestors and we stand in a position to pass this on to future generations without causing undue harm and destruction. This is our obligation to the continuation of the human project.

Of course, the philosophy of moral obligation is complex, but to me, the case for our obligation to protect the future potential of humanity is far stronger than the case against it, which ultimately betrays a fundamental lack of gratitude, basic decency, and respect for the overall human project. Since we have benefited in countless ways from the inherited culture and discoveries of past generations, we can fulfill our duty by “paying it forward” to future generations by using our scientific knowledge to protect them from existential risk.

The moral case is further solidified by its asymmetrical nature. As Ord wrote:

“The case for making existential risk a global priority does not require certainty, for the stakes aren’t balanced. If we make serious investments to protect humanity when we had no real duty to do so, we would err, wasting resources we could have spent on other noble causes. But if we neglect our future when we had a real duty to protect it, we would do something far worse—failing forever in what could well be our most important duty. So long as we find the case for safeguarding our future quite plausible, it would be extremely reckless to neglect it.”

Having established the moral case for the prevention of existential catastrophes, Ord proceeds to analyze the existential risks themselves, what the science tells us about them, the probability of each catastrophe actually occurring, and what our short- and long-term plan of action should be.

Admittedly, it’s in this part of the book that Ord seems (at times) to be arguing against his prior points by showing us how unlikely the risks really are and how improbable it would be for many catastrophes—even nuclear winter and extreme climate change—to actually wipe out all of humanity. (Even the Black Death, which killed half the population of Europe, didn’t wipe out humanity’s future potential.) And so you’re left with the impression that existential risk, while entirely possible and worthy of our attention, is in fact a fairly remote possibility.

That is, until Ord greatly exaggerates the dangers of AI, which he takes to represent the single greatest risk humanity faces. If this is true, it is actually good news, because the probability of general AI enslaving humanity is, in my estimation, far lower than that of an asteroid strike.

Let’s review some of the risks in a little more detail.

Starting with natural risks, Ord shows us that even though our scientific understanding is incomplete, our best science—along with our understanding of mass extinction events through an analysis of the fossil record—shows us that natural events that have the potential to cause mass extinction (asteroid/comet impacts and supervolcanic eruptions) occur only once every million or hundreds of millions of years. The probability of humanity facing an existential crisis of this sort over the next hundred years is therefore estimated to be maybe 1 in 10,000—in other words, very remote. This does not mean that unforeseen risks do not exist, or that we should stop studying volcanoes or asteroids; it only means that we can safely assume that humanity will not go extinct or suffer civilizational collapse from natural causes in the next 100 years.
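The arithmetic behind a figure like that is straightforward: an event that recurs on the order of once every million years has roughly a 100-in-1,000,000 chance of landing in any given century. A worked version, where the once-per-million-years recurrence is the rough order of magnitude under discussion rather than an exact input from the book:

```python
# Converting a long-run recurrence interval into a per-century probability.
# For rare events, the naive ratio and an exact Poisson model agree closely.

import math

recurrence_years = 1_000_000  # rough interval between extinction-level natural events
century = 100

rate = century / recurrence_years     # expected events per century
p_at_least_one = 1 - math.exp(-rate)  # Poisson: P(at least one event per century)

print(f"naive ratio:   1 in {recurrence_years / century:,.0f}")  # 1 in 10,000
print(f"Poisson model: 1 in {1 / p_at_least_one:,.0f}")          # essentially the same
```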

Ord next considers anthropogenic existential risks, including nuclear war, climate change, pandemics, and unaligned general AI. Again, it should be pointed out that Ord is considering only existential risk—the permanent wiping out of humanity or civilization. While the consequences of climate change are likely to be disastrous, Ord is only concerned with the question of whether it would end humanity’s future potential. In this respect, climate change likely falls short, as Ord himself details. While there is a possibility of a runaway greenhouse effect, the science seems to suggest that this effect—the only likely climate scenario to truly present an existential risk—is unlikely to happen.

This does make you wonder why Ord decided to take this particular perspective. By setting the bar at the end of humanity, very serious risks are seemingly downplayed. Wouldn’t it be preferable to view these risks as global risks, catastrophes that could affect the entire planet or very large segments of the human population? I’m not sure what you gain by setting the bar so high. We should be paying attention to all global problems, not just the ones that will permanently wipe us out. Nevertheless, Ord insists, for the purposes of this book, on viewing only the risks that could end humanity’s entire potential, and climate change and nuclear war are not likely candidates. But then again, neither are engineered pandemics or artificial intelligence, as far as I can tell.

Of course others will disagree, and the reader can make their own decisions. But when Ord starts describing AI systems that will use the internet to accumulate power and financial resources for the purposes of enslaving humanity, he has lost me. I’m not interested in reading about highly speculative risks, especially when it overshadows the discussion of more immediate risks like the elimination of jobs by AI and automation. AI, climate change, and nuclear war may not wipe out humanity, but there are plenty of other disastrous scenarios that fall short of this that are not discussed because they don’t meet Ord’s end-of-the-world criteria.

The other issue with the book is the assignment of probabilities. With natural risks, this is less of a problem. Since the world has experienced comet strikes, volcanic eruptions, and mass extinction events in the past, we can get a rough estimation of the frequency of such occurrences. And so Ord’s claim that there is a 1 in 10,000 chance of humanity suffering a natural existential catastrophe in the next hundred years is reasonable.

But what about events that have never occurred? Ord tells us that there is a 1 in 10 chance that unaligned or malicious AI wipes out humanity or causes civilizational collapse in the next century (this is a 10 percent chance!). But where is this number really coming from? While it’s important to be precise, you can’t achieve this level of precision simply by assigning a specific number to what you subjectively believe to be the case.

Since we are so far away from any scenario where AI takes control over humanity, it’s impossible to tell what the actual probability is: maybe it’s 1 in 10, maybe it’s 1 in 10,000, but there are many reasons to think the risk is lower than Ord is claiming. It’s highly likely that when we develop the capability to build a general AI system (if we ever can; see Steven Pinker’s analysis in his book Enlightenment Now), we will be prudent and capable enough to also build in the appropriate safeguards. Further, there is no more reason to think the AI systems we create will be malicious than there is to think they will be benevolent. After all, computer code does not have consciousness or inherent evil motivations, outside of the film industry. So there is very little reason for me to give this number any kind of credence at all, even if some AI researchers happen to agree with it.

But AI does not need to enslave humanity for us to be worried about it. And that’s what’s frustrating about this book. We should be discussing the more immediate effects of AI on things like the elimination of jobs, or the effects of climate change on the poorest regions of the world. By setting the bar at extinction-level events that are truly remote, we avoid the conversations about the things that matter now. We should instead re-frame these existential risks as global risks and set out to solve them using global solutions. As we learn more about these risks and solve the more immediate and localized problems, the existential risks associated with these areas should be automatically reduced.
Otto Lehto
453 reviews · 171 followers
May 13, 2020
From the opening words to the closing paragraphs, you can sense the urgency at the tip of the author's tongue. Mild-mannered though his philosophical style may be, there is a sense of apocalyptic poetry in the channeled desperation that drips from the pages like molten candle wax. Even in good times the feeling of existential anguish is no stranger to any sane person's sensibility. Bad times weigh all the more heavily on our hearts. The extra blanket of terror that has settled down on humanity as a side product of globalization and the nuclear age has permeated our awful nightmares. What will a human future look like? Do we even HAVE a future? Contemplating our demise, the eternal darkness of humanity, can be paralyzing, senseless, and necessary.

Toby Ord's greatest achievement in The Precipice is to play the "existential dread" card tactfully. You can see that the author is a philosopher who prefers reason to passions. He allows the feeling of anguish to momentarily tug on our terrified heart strings in order to motivate our passions sufficiently to get the reasoning going (following the sentimentalism of David Hume). But he then forces the passions to abscond while he rolls out the red carpet for calculating reason as soon as the analysis demands it. As a result, the analysis is mostly confined to a "safe space INSIDE terror," a space of statistics and numbers, a court of motivated reasoning surrounded by a sea of terror.

The book does not shy away from controversial but empirically supported positions. For example, Ord emphasizes that global warming is only one among many threats that we face and that it has only limited capacity to pose an existential threat to humanity (as opposed to the severe but nonlethal economic, social, and environmental threat it already clearly poses). He does not shy away from showcasing the irrationality of the opposition to civilian nuclear power while highlighting the pervasive and lingering threat of nuclear genocide through war. He uses credible statistics to show that an asteroid impact, although a real existential threat in the long run, is not an imminent one, so we should be more urgently worried about other threats. He ends up emphasizing the existential threat of A.I. and technological development as perhaps the most potent threat that humanity faces, at a time when few people seem to care about it much.

Powerfully, to motivate long term survival, the author sings sweet songs about the untapped potential of human evolution throughout the book, and especially at the final chapters. There is a surprising amount of visionary thinking going on, not only in terms of dystopian futures and species extinction, but also about potential lands of milk and honey at the end of history. This hammers home the real opportunity cost of extinction: the negation of future utopias. And I agree: unimaginable utopias have the potential of being realized if only we play our cards right. Preventing the extinction of human potential as the fountainhead of future development is a much greater reason to keep on the straight and narrow than the survival of the flawed present. This, however, is a realization that does not come naturally to most people without extensive motivation.

Following from the above, I have my disagreements with Ord's methodology and conclusions. For example, Ord attempts to show that we should care about future generations (almost) as much as we care about today's generations. I think that this is practically hard to motivate. Most people simply cannot be bothered to think about future generations or to know what to make of them. Furthermore, if we really took unborn lives to be worth so much, how could we justify abortion or even contraception? Even abstinence would be equivalent to a minor genocide, it seems. It is not obvious that we should care about potential people who do not exist and may never exist. He also takes some cheap shots at Epicurus's arguments about why we shouldn't worry about death, without bothering to argue against them. Since he is a philosopher himself, I would have expected more of him.

Secondly, the category of existential risk downplays all risks that fall short of it but still seem worth worrying about. This means that the category is perhaps too strict and not very illuminating. It says little about threats that allow humanity to survive but in a crippled or miserable state. Thinking about existential risk, which annihilates the very possibility of humanity, as a category separate from ordinary risks is certainly a useful distinction to make. But I feel that, for many people, there isn't much of a difference between, say, a nuclear winter that annihilates 95% of humanity and an existential catastrophe that wipes out 100% of us. If people could avert the latter by bringing about the former (especially if they themselves and all their loved ones would be wiped out in the process), I feel that it wouldn't be much of a consolation. So, I think that existential risks and near-apocalyptic catastrophes are pretty much in the same ballpark of "super awful". It is going to be hard to convince many people, myself included, otherwise.

All that said, Ord's book shines as a warning beacon AND a Promethean torch. The book's ambiguous message combines techno-utopian hope with techno-dystopian terror. It challenges our common assumptions about which particular categories of risk we should most worry about by laying down some facts and figures that are bound to be instructive. And it contains a criticism of our short-sighted institutions. Markets are driven by quarterly capitalism and politics is driven by the electoral cycle. Neither is capable of thinking decades, let alone centuries, ahead. But the lessons are crucial. Some of the math about expected costs and benefits is hard to compute, and we can debate the normative dimension of caring about future generations, but we should learn to recognize the category of existential risk as a separate policy task that demands care.
Vidur Kapur
129 reviews · 48 followers
May 16, 2020
This is a beautifully written work that calls on humanity to secure its longterm future, reflect on what it wants to achieve once secure, and then go on to fulfil its potential. It's rigorous and well-sourced, with a huge proportion of the book being taken up by appendices and endnotes. It contains mathematics, but it's accessible to an educated non-specialist.

Overall, it makes a strong case for the proposition that there are "possible heights of flourishing far beyond the status quo", and that "our descendants could have aeons to explore these heights". It is therefore urgent, Ord argues, that we direct more of our resources to tackling risks that could jeopardise this future. Many of these risks have only arisen relatively recently; as a species, we've acquired tremendous power, without commensurate wisdom.

My only quibble is that the book could have explored in more detail why it is that many think the future will be net-positive, and more explicitly answered some of the objections to focusing on existential risk reduction as opposed to, say, moral circle expansion. That is, should we focus on securing humanity's future, or on making it better conditional on it continuing?

Ord does make some arguments in this space, looking at historical trends which suggest that humanity has made moral progress; talking about the idea of preserving option value; distinguishing between broad interventions and narrow interventions; and making the case for a "Long Reflection" on our values and goals after we've achieved existential security.

For example, if humanity's longterm potential is destroyed, it's irrevocable. Surely it is better, Ord argues, to let our descendants make judgments that we're not in a position, epistemically, to make.

And to take the third example, Ord doesn't advocate solely focusing on "narrowly targeted interventions"; as he notes, existential risk "can also be reduced by broader interventions aimed at generally improving wisdom, decision-making or international cooperation", and that "it is an open question which of these approaches is more effective".

Indeed, because Ord doesn't narrowly focus on extinction risk, but rather on existential catastrophe as a whole, which includes scenarios involving an unrecoverable collapse or an unrecoverable dystopia, there may be more overlap between existential risk reduction and moral circle expansion than has sometimes been assumed.

Overall, this is a book packed with interesting information, with insights from physics, economics, and moral philosophy that the reader won't have encountered before. Highly recommended.
47 reviews
November 18, 2022
This book is generally optimistic about the future potential of humanity, and provides some useful historical context on past events that represented possible existential risks but (obviously) never fully materialized. If this weren't published and read at a time of global crisis, when institutions around the world, including those Ord places as central to the safeguarding of humanity's future, are collapsing or otherwise revealing that greed and political expediency outweigh any concern for knock-on effects, I might not be so down on it. The prescriptive roadmap he lays out is sensible but comes with the huge caveat of requiring international cooperation and goal alignment to a degree that appears more and more unattainable in the current political climate.

The writing is also very dry. This book was not an engaging read, and only a little more than half of the physical real estate in the book is actually "the book," with the remainder being appendices, notes, and citations.


November 2022 addendum: lol @ "effective altruism" and crypto freaks. Bumping this down to 1 star from 2 because the world is so, so much worse than it was in April 2020. I'm running for President on a platform to drone strike all philosophers on knife crime island.
Nilesh Jasani
1,051 reviews · 187 followers
May 22, 2020
The Precipice nicely catalogs existential risks. It does a commendable job of separating genuine extermination events from others that could be catastrophic but won't result in our complete annihilation.

That said, neither of the two things that the author attempts results in a lasting impression or change for ordinary readers:

The author lists about a dozen different things that could wipe out humanity. Expectedly, most readers are likely to be well aware of all these nightmares. The individual sections are brief, with hardly any new details compared to what one may know from general newspaper articles on those subjects or even from Hollywood movies (almost every one of those risks has had movies made about it).
The section quantifying the risk is not only extraordinarily subjective but nihilistically pointless to a degree. The author is right in explaining why the subjective nature of the quantification should not become a reason for not doing the exercise. Yet, this does not remove the fact that these probabilities are not of much use to ordinary readers. And, it also does not eliminate the need for the theoreticians to find a way to agree on the ranges rather than each one espousing his or her own set.

It is sensible for the author to appeal for far more resources to prepare us against the worst of these risks. Yet, the fight against these is unlikely to be top-down with some big global institution fighting all of them centrally. The work will be bottom-up, decentralized in various nations and regions, in different set-ups for different risks and continually evolving. For example, it is fanciful to assume that those protecting against volcanic threats should also work under the same umbrella as those highlighting the need for environmental clean-up or monitoring asteroids.

When one looks at the bottom-up work being done in the fields mentioned, the resources devoted are nowhere near as pitiful as the author makes them out to be. Undoubtedly, a lot more needs to be done on the larger existential threats - most of which are anthropogenic - such as climate change, pandemic threats, containing AI, and global disarmament.

Like every individual human, every life form or species - just like the stars and the galaxies, or even the universe - will end one day. We still need to ensure, as the author says, that most of our race's evolution lies in the future and not behind us. The book should be commended for picking up on this critical point.
Ben Chugg
11 reviews · 33 followers
August 17, 2020
It has taken me some time to sort through my feelings on this book. Toby Ord is smart, articulate, and makes a compelling case for longtermism: the thesis that future generations deserve moral patienthood, that they should be factored into our moral calculus, that we should take their welfare seriously, and that ensuring the future goes as well as possible should be one of our moral concerns (if not the largest). Clearly, the consequences of adopting this worldview could result in a reshuffling of our priorities. It is thus a thesis worth examining, and one of which everyone should be made aware.

Future generations are certainly neglected in many, perhaps most, of our institutions and decision making processes. While they inherit the status quo, they have no say in creating it. They have no representation in government, no power to protest, no ability to vote. While this situation is difficult to remedy, one way in which we can help future generations is to ensure their existence by not destroying humanity; i.e., we should mitigate existential risks. Cataloguing such risks and ordering them by their likelihood is the theme of the book.

However, some of the reasoning style and the underlying logic is … odd. The book is steeped in Bayesian epistemology (unapologetically, I believe, for this seems to be the implicit philosophy in effective altruism circles), and draws no distinction between objective and subjective probability. Ord begins by examining “natural existential risks” (e.g., super-volcanoes, asteroids). To get a handle on their likelihood, he uses data based on previous frequencies (how often volcanoes erupt, how many asteroids hit earth, etc.). Upon switching to “anthropogenic risks” he ditches frequentism (there are no data points for nuclear armageddon) and adopts subjectivism: he tries to quantify his belief that the world will end in one of several ways. This switch is subtle and is left unacknowledged. Indeed, both kinds of probabilities are expected to be treated in the same way by the reader (as exemplified by Table 6.1, in which all of these numbers are compared).
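To put the switch in toy form: the first kind of number can be computed from counts, while the second is only an asserted degree of belief. The counts and figures below are invented for illustration; the point is just that the two numbers have very different provenances even when they sit side by side in a table:

```python
# Toy contrast between the two kinds of probability described above.
# All inputs are invented for illustration.

# Frequency-based: estimate per-century risk from a (hypothetical) record.
events_in_record = 2          # e.g. extinction-level impacts observed
record_centuries = 20_000     # length of the (hypothetical) record
p_natural = events_in_record / record_centuries   # data-driven: 1 in 10,000

# Subjective: no observed events at all, only a stated credence.
p_anthropogenic = 1 / 30      # e.g. an expert's asserted degree of belief

# Listing both on one scale, as in a summary table, hides the difference
# in where each number came from:
for label, p in [("natural (from counts)", p_natural),
                 ("anthropogenic (stated prior)", p_anthropogenic)]:
    print(f"{label}: 1 in {1 / p:,.0f}")
```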

Bayesians will undoubtedly respond that there is no other way to reason about such one-off-events; subjective probability is all we have. I am becoming increasingly convinced that this is wrong, and moreover, quite dangerous. The alternative, I think, is to acknowledge that some future scenarios are steeped in so much uncertainty that attempting to generate the most accurate number to capture the future (i.e., a prior), is quite pointless. Numbers are not primitive; ideas and arguments are. Knowing Ord’s subjective belief that AI will take over the world is unhelpful; knowing his best argument is. At times the book is filled with such arguments; at others arguments are replaced by priors, a move I find unprofitable.

(I should say that I was once sympathetic with Bayesian reasoning, but have been slowly beaten over the head with alternatives. Many of the above criticisms come directly from those conversations.)

In sum, the book expounds an important idea, but analyzes it in worrying ways.
Tobias Leenaert
Author, 2 books · 149 followers
July 27, 2021
4 stars just for bringing this important topic to another level.
Writing style is clear and very accessible but at the same time kind of dry, though in the last part ("our potential") it gets a bit more lyrical and pleasant.

As someone who's also concerned about the plight of non-human animals on this planet, I found the book quite anthropocentric (there is a line here and there about taking care of nature and other animals, but you have to look hard). Maybe that's for pragmatic reasons.

Philosophically, after reading the book I still don't see the problem with non-existence (at least after a painless exit), and I found Ord's arguments against that view not very convincing. I share the idea quoted in the addendum on population ethics, from Jan Narveson: "We are in favor of making people happy, but neutral about making happy people." So I find the idea of an eternal dystopia more problematic than extinction.


Ryan
1,042 reviews
April 7, 2021
We talk a lot about the Anthropocene, but Toby Ord thinks this time might be remembered as "the Precipice," an age in which we suddenly began creating things that would allow us to off ourselves altogether. The watershed moment is nuclear weapons, but Ord considers a variety of threats to longterm human flourishing ranging from catastrophic climate change to unaligned AI to asteroid impact to a perpetual dystopia like 1984. Although I've read and watched my fair share of dystopian fiction, I was surprised by how little I knew about some of these scenarios, and especially the AI scenarios. (I guess we should take the idea of William Gibson's Neuromancer working behind the scenes a bit more seriously.) I see a few valuable ideas here. First, there is an obvious value in outlining these threats and risks for readers. Second, Ord also attempts to clarify what we should think about when we think about collapse or extinction. Both have become rhetorically useful in a way that is maybe not altogether helpful. Third, Ord nudges us to think about the longterm future. Finally, there are some policy recommendations to set up a broader policy and popular discourse around existential risk.

Recommended, especially since this one was released just as the coronavirus pandemic started and took up all of our collective attention span. The Precipice seems a bit under-read by a group that I think would otherwise have embraced the work more broadly.
Peter
6 reviews · 16 followers
April 4, 2020
This is one of my favorite books. Delightfully written, inspiring, and lots to learn even for folks well-read around the incredibly important issue of longtermism - how civilisation can not only make it through the 21st century but go on to flourish.
Dan Elton
36 reviews · 19 followers
January 2, 2021
I'm not sure what I can add beyond what many other reviewers have said - but I wanted to give this 5 stars and weigh in.

There are shockingly few books that tackle this important subject. My friend Phil Torres has a book (called "The End") but it's outdated, isn't as extensive, contains many tangents into religious eschatology, and doesn't lay out the moral argument as clearly as Ord.

The book is not as long as you might think (I've encountered people put off by its apparent length). The body is only 241 pages, followed by 180 pages of footnotes and 39 pages of references! As the author states, the footnotes are almost like a second book.

I'm happy to see how much traction this book is getting. Everyone I know who has read it got something out of it. I even know someone who changed their entire career path after reading it. This is a book I wish all of our political leaders and people in positions of power would read! If only we could get a few million dollars invested in x-risk mitigation technology (in particular for bio-risk and AI risk), we could really help safeguard the future of trillions of sentient beings! The expected ROI is huge, and x-risk mitigation tech is severely under-researched, both in terms of dollars and the number of people working on it. Having a hard time fully believing what I just said? Read the book!
Profile Image for TheBookWarren.
471 reviews124 followers
August 16, 2020
4.25 Stars - This really is a cracker of a book! From the moment I picked up Toby Ord's nonfiction on the state and future of humanity, I knew it would be something to sink my teeth into. But I was wrong, because it's more than that: it's something to behold, to ponder and cogitate with at night, and above all something that gives you the urge to share it with as many people as you possibly can. The more who read this modern stunner, the more who will see the same "Precipice", the same signs that we have indeed reached it. So what are we going to do about it? There are components of Ord's premise that I disagree with, no doubt, and I for one feel the world is in nowhere near as grim a state as many believe, but it's vital that a guiding coalition is built toward correcting a number of environmental and social challenges before it is too late!
Profile Image for Allison.
55 reviews9 followers
November 24, 2022
very dark. very pessimistic. had to read this for a class that i’m indifferent to. will not be reading again and am now worried about the end of humanity every day. would not recommend for someone with anxiety.
15 reviews1 follower
January 12, 2021
Lots of redundancy and repetition. Felt like it should have been a 10,000-word essay, not a book.
Profile Image for Oliver Kim.
176 reviews47 followers
Read
July 11, 2022
Toby Ord's The Precipice was recommended to me at a vegan Effective Altruist dinner in Berkeley, California -- several times, in fact, by different people, and always with a frothy quasi-religious fervor. More than a few men (there were only three women at the event) had changed their careers because of it.

I'd consider myself sympathetic to Effective Altruism -- I spend my career researching ways to end global poverty in part because of the strength of its arguments -- but I've always found the movement's recent turn to existential risk somewhat odd. Somewhere along the way it felt like a social project about ending malaria and global poverty had turned into a Silicon Valley offshoot where everyone was fretting about neural networks going Skynet. So I picked up this book to find out what the fuss was all about.

The Precipice is clearly argued and blessedly well-organized, and probably deserves more than the scattered reactions I'm about to offer. I appreciated its serious statistical attempt to quantify the different risks to humanity, while still acknowledging the uncertainties involved. In the final accounting, anthropogenic risks like a bioengineered pandemic (risk of existential catastrophe: 1/30) or nuclear war (1/1,000) far outstrip those of natural disasters, like an asteroid/comet impact (1/1,000,000) or a supervolcano eruption (1/10,000). The evidence Ord offers for these numbers struck me as fundamentally reasonable.

But the biggest risk of all remains artificial intelligence unaligned with human interests, with an existential risk of 1 in 10! Climate change clocks in at a mere 1/1000, Ord argues, since while it will disrupt our industrial way of life, it is unlikely to wipe civilization out entirely. It follows naturally that AI risk demands our fullest attention.

And yet I'm still not convinced. I have two main concerns.

The first is that our grasp of AI risk remains frustratingly vague. AI's risk of causing humanity's extinction seems a function of (1) its development of superior intelligence (so-called artificial general intelligence, or AGI), (2) misalignment of its objectives with human values, and (3) the successful use of its intelligence to end humanity. Given recent acceleration in AI development, (1) seems reasonable. Given our history with technology, (2) is also not much of a stretch. Since AI specialists predict that AGI will come about in the next century or so with 50% probability, Ord comes up with an existential risk of 1 in 10 (implicitly, something like a one-in-five chance of existential catastrophe conditional on AGI arriving).

But, crucially, (3) is basically assumed to be true. And yet every EA account of how AI might end humanity is fuzzy on the details -- Ord speculates that an AI could amass illicit wealth, then somehow bribe politicians into exterminating humanity with WMDs. The problem is that advanced intelligence is conflated with a kind of villainous omnipotence. If AI seeks our extinction, it will likely have to be intermediated through human hands -- and I think Ord drastically underestimates how hard it is to organize human beings to exterminate ourselves. We may be manipulable and often stupid, but we are also unpredictable, independent actors interested in our own survival. No amount of advanced intelligence can convince some people to act against what they believe to be their interests. (Just look at our current politics.) Countermeasures can be put into place to insulate our weapons and institutions from AI manipulation. No doubt evil AGI can make human life miserable, but ending the possibility of civilization when there are human beings who would like it to continue seems another task entirely. The thorniness of politics remains in my view a general blind spot for EA.

Which leads me to my second, meta-critique: are the "existential" priorities Ord lays out truly reflections of universal human interests, or are they the result of specific quirks of the EA community?

From his utilitarian perch, Ord treats humanity as a monolithic unit with united interests. But many of these existential risks will hit different parts of humanity quite differently. Consider climate change. Sure, warming of 2-3 °C may not exterminate humanity as a whole -- but it could very well render life in Sub-Saharan Africa, where much agriculture remains rain-fed, basically untenable. Are we willing to accept the extinction of the cultures and peoples who live there? And, for that matter, who is the "we" in that question? If my Berkeley EA dinner had a guest list more representative of humanity -- with at least a few more Africans and women, instead of its coterie of white and Asian tech-adjacent males -- would our decisions about what to prioritize change?

As an Asian, tech-adjacent male, I can't really answer that question. But I suspect that if the community of those who fear existential risk want to persuade others that their priorities are truly universal (and this is a critique less of Ord than EA more broadly), it should make more of an effort.
Profile Image for Wendelle.
1,715 reviews48 followers
Read
January 18, 2023
The author, an Oxford philosophy professor, proposes a shift in our mindset, our priorities, and our ethical frameworks toward maximizing the survival chances of humanity as a whole and thwarting the causes of our existential demise, which may arrive through complete extinction, unrecoverable collapse, or unrecoverable dystopia. The author enumerates some possible inflection points in our existence that may mark the point of no return: nuclear winter, climate change crisis, unbounded artificial intelligence, global totalitarian regimes that irrevocably crystallize their omnipotence through ever-present surveillance machines. He explains each existential risk in turn and tries to assign a ballpark figure to its likelihood. In Prof Ord's view, we are a species on the cusp: 200,000 years old, while typical mammalian species last 1 to 2 million years. Thus, he emphasizes what we stand to gain with a successful shepherding of our species through these critical junctures: an unbounded flourishing of our future generations, defying what we can currently imagine, in all the matters that are valuable to us: science, knowledge, art, culture, joy, prosperity, morality, quality of life.
Profile Image for Luke Freeman.
2 reviews26 followers
March 10, 2020
How will our species survive and thrive?

This is one of my favourite non-fiction reads to date. I love how Toby Ord leaves hype behind and uses sound reasoning to lay out the case for existential risk reduction. He has very strong arguments and an enjoyable writing style. He makes the case that existential risks are some of the most important and neglected problems we face.

He covers major risks from nuclear and biological weapons to climate change, pandemics, artificial intelligence, asteroid impacts, and more.
52 reviews2 followers
February 25, 2021
Steven Pinker quipped that when writing a book for non-experts, the tone should be what you would use explaining what you just learned to your college roommate, who went into a different field. Someone just as intellectually curious as you, but who just doesn't yet know about the things you are explaining. Popularizers like Neil deGrasse Tyson are criticized for oversimplifying or waving their hands, but that is inevitable in the sort of television-documentary-length pieces he does on broad topics. In a book-length treatment of a particular subject, an expert has plenty of space to lay out the subject correctly and to point out where she is simplifying, pointing to resources for the interested reader. Books like Dawkins' The Selfish Gene and Guns, Germs, and Steel come to mind as well-executed examples of this genre.

It is particularly important to get the facts straight if the author wants to convince the reader to make far-reaching changes in her intuitions and behavior. I was excited to start The Precipice by Toby Ord, as I knew it was about our obligations to future generations, and in particular about the probability that some catastrophe will wipe out humanity. This is a topic I knew very little about, and it might have convinced me that I should do more to help prevent such a catastrophe.

Unfortunately, the book convinced me of very little. The tone felt less like an honest survey of the issues surrounding existential catastrophe for the non-expert, and more like a rhetorical pamphlet trying to convince me to donate to a particular charity.

Future People

There are a few well-known issues related to thinking about future people. The trickiest involve thinking about people who may or may not come into existence. Suppose that a couple plan on having a baby, but then one of them loses their job, and they decide to wait for a year or two. Has the baby who would have existed been harmed? My intuition says no. To me, something is only bad if it affects a person that exists. From this perspective, the extinction of the human race isn’t so bad per se, because the people that would have otherwise existed are not harmed. If I had never been born, there would be no me to regret it.

This sounds like pie-in-the-sky stuff, but it is actually critical to the idea that we should sacrifice our well-being today for generations of distant descendants. If extinction is not much worse than mere one-off catastrophes, then the sort of calculations which go into the book's risk-reward scenarios are way off the mark. Ord does talk a bit about this issue in a page of the body and in an appendix, but he just waves his hands, says that he doesn't personally find the view I elaborated above plausible since it comes with other issues, and notes that he is anyway also worried about the permanent collapse of civilization, which we can all agree is bad.

This sort of thing happened again and again in the book. The interesting, deep issues are brushed aside in order to make a case for decisive action on small probability existential risk. Another example is Ord’s frequent use of the concept of the ‘life of humanity’, or ‘securing our future’. Since the future of the human race could be very long indeed, we may be at the beginning of humanity’s life. We should be prepared to make sacrifices for humanity, then, just as a young adult might make sacrifices to secure a comfortable middle age. But the difference is that we will not see that future. The life of humanity is not my life or your life. That we are morally required to sacrifice for our distant descendants is nowhere argued for in the book, but simply assumed.

What is to be done?

Even if we are convinced that we should indeed be doing much more to protect distant future generations of people, it isn’t clear how we would go about doing this. Most of our actions have unpredictable consequences in the medium-run. A butterfly flapping its wings in China may cause a hurricane in the Atlantic. It isn’t clear what actions we could take to make humanity more or less likely to exist in 10,000 years. The effects of our actions at that time scale are just too unpredictable. I was waiting for this question to be addressed in the book, but it never was.

In the 241-page body of the book, only 30 pages are devoted to practical ideas for reducing existential risk, and those ideas are fairly vague on details: more research should be done, and there should be more international cooperation. This is exactly the issue. Even if we believe that the risk of human extinction is high, that doesn't mean we should live our lives differently. It is only if it is high and we have some way of reducing it that we should take action.

Imagine we can do good in two ways. We can help someone who currently exists with certainty, or we can work on reducing existential risk, which potentially could benefit thousands of generations in the distant future. Within existential risk, of course, we would need to choose particular threats to focus on. Whatever set of threats we choose, whether it be AI alignment or preventing engineered pandemics, we cannot be certain that these will be relevant threats in the future. It may turn out that they are not as dangerous as we expected, or alternatively there may be an existential catastrophe which takes out the human race before the particular threat we focused on becomes important. Either way, by focusing on these risks we may devote our lives to solving an ex-post useless problem.

Pinning down existential risk

And how likely are each of the threats? In a chapter addressing this question, Ord chooses his own best guesses for the probabilities. He is careful to qualify them, emphasizing that the numbers are his professional opinion alone, and that they involve much uncertainty. He writes that the point is simply to get an idea of the order of magnitude. He comes up with the bottom line that there is around a one in six probability of an existential catastrophe in the next century.

I don't find this exercise very illuminating. Ord says his numbers could easily be off by a factor of three. That is, the probability of a humanity-ending disaster in the next century could easily be as low as one in eighteen or as high as one in two. Since Ord comes up with these numbers from simple professional judgment, I'm worried that there are too many 'researcher degrees of freedom'. One in six is in the convenient zone of alarming, but not too high to be rejected out of hand.

Killer computers

The risk that Ord judges to be the most likely to end humanity is unaligned artificial intelligence. If you haven't heard the argument, it is that it is hard to give artificial intelligence objectives which completely align with our own. You might ask an artificial intelligence to reduce carbon emissions, and it might accomplish the task by killing a large number of people. As others like Robin Hanson have pointed out, some writers, including Ord in The Precipice, characterize the AI of the future as having god-like powers to do whatever it wills. Of course we would not like our lives to be commanded by superhuman demons.

Ord writes ‘Sceptics of [unaligned AI risk] sometimes quip that it relies on an AI system that is smart enough to take control of the world, yet too stupid to recognise that this isn’t what we want. But that misunderstands the scenario.’ He then writes how it is natural to imagine AI with goals different than people. Ord here is missing the point.

Many philosophers believe in some idea of moral progress: that as time passes, people discover moral truths about the universe. Ord argues both that more people will realize the correctness of caring about the distant future as we discover more moral truths, and that an important step in our future will be the Long Reflection, when humanity spends time figuring out the moral truths of the universe. Imagining a god-like superintelligence, it isn't clear why such a being wouldn't be able to discover the moral truths of the universe better than we mere mortals. It is true that super-powerful artificial intelligence may not share our goals, but why should we assume it won't actually pursue even better ones? It can do everything else better.

Our potential

According to the final chapter, human potential is a lot more people over a long time, living lives much more pleasurable than our own, and exploring space. While none of these things are bad, they also don't inspire me much. The book asks me to make sacrifices for people who will have much richer lives than mine! The long future is nice, but its loss wouldn't be worrying if the people existing today didn't care about it. Ord deeply cares about the distant future of humanity. That is a great reason for him to write The Precipice. But I think it also reasonable for me to spend my charitable resources on reducing the suffering of people who actually exist.
Profile Image for Steven.
20 reviews
October 11, 2022
We are at a precipice: climate change, engineered pandemics, nuclear war, and AI risk are only the known threats to our potentially vast and flourishing future. We are an adolescent species, our physical capabilities outstripping our wisdom. The choices we make now will determine humanity's future.

Great companion piece to What We Owe the Future. They share lingo but focus on different points. I appreciated this book's focus on global governance and technological caution. The final chapters appealed to me more, as I'm a sucker for hard sci-fi, and the call to action calls out doomerism as death by defeatism and insists that the future can be brighter than we imagine.

240 pages of content, definitely recommend reading the footnotes!
Profile Image for Kyran.
21 reviews1 follower
April 3, 2020
Solid, scholarly work that provides a helpful framework for thinking about ‘existential risk’ – major and irreversible damage to humanity’s potential (ie extinction or permanent civilisational collapse). Covers topics like nuclear war, climate change, pandemics and unaligned artificial intelligence. The book’s coverage of the last two are particularly interesting, and Ord believes the latter to be quite significantly the most dangerous of the risks presented (to an extent which surprised but mostly convinced me). This is an emerging cross-disciplinary field and The Precipice is a great introduction to the issues at stake.
Profile Image for Max.
69 reviews14 followers
May 3, 2020

Interviewer: Suppose that your life's work ended up having a negative impact. What's the most likely scenario under which this might happen?

Ord: [...] I think people underestimate how easily this can happen. [...] The easiest way this could happen is your work crowding out something that is better. I thought a lot about that when writing this book. I really wanted to make sure that there wasn't anyone else who should write this book instead of me; I talked to a lot of people about it. Because even though you produce something really good, if you crowd out something extremely good, your overall impact could be extremely negative.

This should give you an idea of the level of care Toby Ord took with this phenomenal book. The only imaginable extremely good book that got crowded out by "The Precipice" is "The Precipice - Now with Improved Footnotes". There are so many footnotes. In the end I just read them all after finishing each chapter, so as to avoid the disappointment of another mere source citation or "I owe this point to Nick Bostrom".

I'm deeply impressed with Toby Ord and the research team behind him. This book does a great job of making the importance of our long-term future palpable and sketching out the risks that we have to overcome in the coming centuries. I think I'll leave it at that. Just one more quote that I "loved", emblematic of the book and of the grace with which humanity has handled existential risks so far.

A technician [at a bioweapons lab in one of the Soviet Union’s biggest cities, Sverdlovsk,] had removed a clogged air filter for cleaning. He left a note, but it didn’t get entered in the main logbook. So they turned the anthrax drying machines on at the start of the next shift and blew anthrax out over the city for several hours before someone noticed[, killing at least 66 citizens]. In a report on the accident, US microbiologist Raymond Zilinskas (1983) remarked: ‘No nation would be so stupid as to locate a biological warfare facility within an approachable distance from a major population center.’

Humanity today showcases too much "No civilized species would be so stupid as to". Toby Ord has done us a great service in helping us make some smart corrections to our path going forward.
Profile Image for Kolumbina.
838 reviews25 followers
May 13, 2020
A well-written book by Australian researcher Toby Ord (who lives and works in Oxford, England) about the potential existential risks humanity could face. Really interesting and rich, with heaps of references and appendices. An easy read, though I found it very disturbing. A useful book that everyone should read, especially now, after the coronavirus pandemic.
1 review2 followers
December 9, 2021
Most of this book is excellent – a lot of interesting analysis, a lot of inspirational value. But in some parts, the author tries to argue for the book's bottom line in ways that just aren't supported by the arguments. Don't fall for it.
Profile Image for Patrick Kelly.
284 reviews12 followers
April 13, 2020
The Precipice
By Toby Ord

- [ ] An effective altruism book
- [ ] We have time but it is running out
- [ ] Is there hope? Toby is optimistic but worried
- [ ] In all areas (literacy, life expectancy, quality of life, standard of living, etc.) there have been dramatic improvements across the history of the human species. Each revolution (agricultural, scientific, industrial, technological) has brought about these improvements. Before the industrial revolution, 19 out of 20 people lived on less than $2/day; now it is only 1 out of 10. Before the scientific revolution, few people knew how to read; now there are few that don't. These revolutions have dramatically improved human life. (One could also make the argument that capitalism, instead of communism/fascism, has been a great driver of this improvement.) We are destroying the planet and destroying ourselves, but our lives have gotten better.
- [ ] The world changed when the atomic bomb was created and dropped. We have these weapons and the longer we live, the more likely that they will be used again.
- [ ] I live in a post-Cold War world where nuclear weapons have never felt like a big threat; I always laughed at Iran/North Korea getting them. But the more I read Sagan/Effective Altruism/foreign policy stuff, the more seriously I take the issue of nuclear weapons and nuclear nonproliferation
- [ ] He thinks that in the 20th century there was a 1:20 chance of extinction. In the 21st century it is 1:6.
- [ ] A moral argument/awareness of future generations; an obligation to survive. There are far more people with the potential to live than there are people who have lived. By destroying ourselves now, we are destroying the future. It is the future that we must think of, and we owe it to our ancestors to pass it on
- [ ] It is very spiritual/aware/beautiful
- [ ] His parents tell him that he does not repay them, he just passes it on
- [ ] We have to think about the collective, of the whole, the group, and not the individual survival
- [ ] Existential event
- [ ] Civilization collapse event: an event where civilization collapses, similar to the events described by Graham Hancock. Ord states that even with Europe losing 25-50% of its population due to the Black Death, civilization did not collapse and Europe recovered. It’s unlikely that civilization could completely collapse and not recover, if it did then extinction would follow. Based on what I have read from Hancock, I don’t fully support Ord here
- [ ] Extinction event: humanity is lost and all or almost all humans die. This could come in a single event (nuclear war) or a combination of events (pandemic/economic collapse/climate change)
- [ ] Ord believes that we have the potential for millions of future generations and to live for billions of years
- [ ] Sagan and two others are mentioned for their work on global catastrophic risks/nuclear war/our cosmic significance. (I have to check who the other two people are)
- [ ] Arguments about why global catastrophic risks (gcr) are not funded more/given enough attention/etc. Relevance, personal impact, politics, etc.
- [ ] Now on to the issues
- [ ] Asteroid/comet threats are the best addressed and studied issue. They are well funded, well understood, and given the proper attention. We have actually identified 95% of the 1km-10km objects and almost all of the 10km objects.
- [ ] Super volcanos
- [ ] The precipice began at the trinity test
- [ ] Another big section on nuclear war/nuclear winter
- [ ] Climate change: he does not see climate change as a direct existential risk, because he believes that even in the worst case some humans would survive, but other factors arising from climate change could be a cause.
- [ ] There is a limit to the combination of humidity and temperature at which humans can still regulate body temperature through sweat. Past that point, certain parts of the world become uninhabitable without AC.
- [ ] There is the possibility of 9-13 degree rise by 2300. The climate models are highly unpredictable and there are many factors to consider.
- [ ] Runaway greenhouse effect: the permafrost will partially melt, and that will be a factor in climate change. By 2100 it could at least double the emissions we have already produced, with the possibility of producing 5-13 times more.
- [ ] Biodiversity loss
- [ ] Climate degradation
- [ ] The climate stats and factors are extreme. I disagree with him and I believe it is an existential risk.
- [ ] There is a big problem with climate change in that we could currently be locking ourselves into a situation we can't reverse, where the situation becomes existential and we are powerless to stop it

- [ ] Pandemics: the Black Death killed 25-50% of the European population. Europeans coming to the New World may have killed up to 90% of Native Americans and 10% of the world population.
- [ ] Bio weapons have been used throughout human history
- [ ] 15 countries have created biological weapons programs, with Russia's being the biggest, at one point employing over 9,000 scientists. Why the fuck is this not talked about more?!?!
- [ ] Biological weapons, trying to make diseases more deadly and easier spread
- [ ] Biosecurity is weak, and in the past 50 years there have been multiple instances of smallpox and anthrax getting out of the lab and infecting people. This is terrifying
- [ ] There is little investment and weak accountability in biosecurity. The Biological Weapons Convention is weakly enforced and not well funded. This book has made me realize how big of an issue biological weapons are and how underserved they are.
- [ ] Norman Borlaug and his wheat may have saved a billion lives
- [ ] AI: the people most worried about it are the AI experts. AI could effectively become immortal by hiding in computers around the world. It could access bank accounts, social media, government systems, and surveillance. It could then blackmail or pay off anyone in the world. This sounds like Ultron. Fuck, I don't understand AI, but every time I learn about it, it sounds terrifying.
- [ ] Most experts think it is decades away, not years. Many think there is a 5% chance of existential risk from AI. That is an insanely high number: a 5% chance that AI ultimately causes an existential catastrophe
- [ ] Anthropogenic caused risks
- [ ] Totalitarian dystopian regimes
- [ ] Unexpected risks caused by humans are a big threat. IE: the next nuclear bomb or a yet undeveloped technology

- [ ] Here are the numbers:
- [ ] The five biggest threats over the next century: nuclear war, climate change, environmental degradation, unforeseen human-created risks, AI
- [ ] He puts AI at 1:10
- [ ] Overall possibility of an existential risk event by the end of this century: 1:6, but it can rise to 1:3
- [ ] He does have hope and believes that humans can pull back from the brink but it will take effort. Currently we are playing a game of Russian Roulette
- [ ] There is some simple math that he is using but I don’t have the details for that

- [ ] Some effective altruism principles, mentions possible careers, and how to organize risk management/addressing risks

- [ ] Our current goal is to prevent an existential event; once we do that, we can secure our long-term future. But between them is something called 'the Long Reflection', a time when humanity can reflect and chart a collective path forward. Once we are secure, then we choose where we go.
- [ ] This seems like philosophical idealistic babble. It sounds like an ideal world where humanity can come together and take steps forward. I don’t believe this will ever happen. I don’t believe that we will ever get rid of existential risks and I don’t believe that we will ever come together and chart a collective path forward. I only believe this can happen after a massive cataclysmic event that forces the remaining bits of human civilization to reckon with what it has done and then in solace move forward as it is crippled. Basically when the United States decided to go on the offense in the Zombie War and eventually rallied most of the world to do the same.
- [ ] Tragedy of the commons and game theory. He believes that the pace of technology is outpacing our ability to slow it down; thus we must build altruistic/positive technologies that prevent existential risks, instead of ones that cause them
- [ ] Some 80,000 Hours career stuff
- [ ] We are a young species. The horseshoe crab has had an unbroken line for 400 million years. Blue-green algae have been around for over two billion years. Wait, when did life first appear and when did complex life appear?
- [ ] He starts to talk about the cosmos, exploring the universe, the massive potential long term survival of our species on hundreds of millions of years scale, he mentions Pioneer and Voyager, he sounds hopeful, he is writing like Sagan and even using similar phrases. This is the first time I have really seen Sagan’s writing/professional/non celebrity work directly influence another academic
- [ ] The average lifetime of a species is 1-10 million years. Ord is saying we have the potential to live billions of years, almost as if our species could become effectively immortal (if we prevent existential risk). I have never heard these ideas seriously discussed
- [ ] The observable universe extends about 46 billion light-years in every direction
- [ ] Explore our entire galaxy in 100 million years
- [ ] This is all fun to imagine/play with, but the reality is that we have a 1:6 chance of an existential event happening in the 21st century. Fuck the stars and exploring the universe; we have to make our stand now and fix/save/prevent existential risk here on earth

- [ ] He does end on a hopeful note. He is cautiously optimistic about our future. He loves humanity and loves our potential. He loves what we can do and could do. He speaks about the love he has for his daughter and all of the beautiful moments that life has. That there can be and will be more. That if we prevent existential risks, there should be nothing stopping us. He even talks about our evolution, the possibility of biotechnology, a cognitive evolution. He is very aware of animals, different experiences, things that we don’t know but could, he thinks of the collective. In many ways he is a ‘cosmic humanist,’ similar to Sagan. I am a fan

- [ ] I enjoyed this book and would recommend it to others. It is accessible, well written, direct, dire, clear, and hopeful. It is not arrogant, off-putting, or highly academic in the ways that other EA literature is. I would like to see the EA movement follow Toby's lead