
What We Owe the Future

An Oxford philosopher makes the case for "longtermism"—that positively influencing the long-term future is a key moral priority of our time.

The fate of the world is in our hands. Humanity’s written history spans only five thousand years. Our yet-unwritten future could last for millions more—or it could end tomorrow. Astonishing numbers of people could lead lives of great happiness or unimaginable suffering, or never live at all, depending on what we choose to do today.

In What We Owe the Future, philosopher William MacAskill argues for longtermism, the idea that positively influencing the distant future is a key moral priority of our time. From this perspective, it’s not enough to reverse climate change or avert the next pandemic. We must ensure that civilization would rebound if it collapsed; counter the end of moral progress; and prepare for a planet where the smartest beings are digital, not human.

If we make wise choices today, our grandchildren’s grandchildren will thrive, knowing we did everything we could to give them a world full of justice, hope and beauty.

335 pages, Hardcover

First published January 1, 2022


About the author

William MacAskill

8 books · 651 followers
I'm Will MacAskill, an Associate Professor in Philosophy at Lincoln College, Oxford, and author of Doing Good Better (Gotham Books, 2015). I've also cofounded two non-profits: 80,000 Hours, which provides research and advice on how you can best make a difference through your career, and Giving What We Can, which encourages people to commit to give at least 10% of their income to the most effective charities. These organisations helped to spark the effective altruism movement.


Community Reviews

5 stars: 1,485 (28%)
4 stars: 1,960 (37%)
3 stars: 1,263 (24%)
2 stars: 347 (6%)
1 star: 108 (2%)

Rick Wilson
805 reviews · 318 followers
September 16, 2022
This book is like marzipan. Interesting to look at, but the actual substance tastes kind of like Styrofoam.

This is a fun book in concept. I agree that we should think in longer-term increments. I’ve noticed as I’ve matured, my thinking from childhood has developed from days, to weeks as a teenager, to months and years as a younger adult. It makes sense if you follow a sort of linear progression: as people and civilizations mature, they should think in longer and longer timespans.

cool. Love to see it. Would have loved a good book on it.

Unfortunately, this book is more of a collection of vague notions, ideas, and half-baked philosophical musings. The core idea could and should have been condensed down into a blog post. And the rest of it is just regurgitated stuff about climate, AI, population growth and whatever else managed to fill some pages. I don’t wanna knock Oxford based upon two guys, but it honestly really struck me as having a lot of overlap with Nick Bostrom’s Superintelligence in being fundamentally out of touch with the world as I understand it. It’s this super artificial model that misses what I see as many of the important and salient details around how the world actually operates. Something about Ivory Towers, and maybe I’m the one actually missing something because of how far my head is stuck up my own ass.

Again, I like the idea. I don’t think anyone’s going to argue against the claim that we have a debt to future generations if we want to see the world change for the better, or that our changes today can echo forth and cause great effects. In that spirit, it would be nice to have some actual helpful guideposts on how to do that. So I guess that pivots to my criticism and why this book shouldn’t be that guide.

One huge issue with this book is the idea of using probabilistic thinking with these huge long-tail events. I’m not sure I’ve spent enough time really distilling my thoughts here, but Bayesian statistics don’t work for describing these events. Saying there’s a 10% chance of a catastrophic event happening doesn’t actually mean anything. You’re using the wrong tool to describe the event. To maybe bungle through a pseudo-explanation of my issue here: statistically it’s infinitesimally unlikely that you win the lottery, but on the off chance that you do, all prior statistical probability is moot. You shouldn’t play the lottery the same way you shouldn’t count on being bitten by a radioactive spider, but somebody does win. It’s that same sort of breakdown when we start talking about these huge future events. I don’t give a shit if there’s a 2X chance of the Yellowstone volcano blowing up over a meteorite impact. It’s like using college rankings to compare schools. It doesn’t really work and it’s probably harmful for maximizing utility.

Don’t get me wrong, we should have some sort of way to talk about these events. If they happen, it’s really bad. But statistical probability is not the right way to try to contextualize this.
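
To make that point concrete, here is a minimal sketch (the probability, payoff, and cost are invented, not from the book or the review) of how an expected value can look tidy on paper while saying almost nothing about any single draw of a rare, high-stakes event:

```python
import random

# Illustrative only: these numbers are made up to show the shape of the problem.
p_win = 1e-7          # chance of the rare outcome on any single draw
payoff_win = 5e7      # value if it happens
ticket_cost = 10      # value given up on every draw

expected_value = p_win * payoff_win - ticket_cost
print(f"Expected value per draw: {expected_value:.2f}")  # -5.00: slightly negative

# Now simulate one person's "lifetime" of draws. The rare outcome almost never appears,
# so the realized result looks nothing like the expected value for any single agent.
random.seed(0)
draws = 10_000
winnings = sum(payoff_win for _ in range(draws) if random.random() < p_win)
print(f"Realized total over {draws:,} draws: {winnings - ticket_cost * draws:,.2f}")
```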

A second issue I have with the general arc of this book is that it starts by talking about thousands of years and billions of people, but most of the stuff ends up being really mundane. Like, the meat of the book is about population growth and climate change. Other people have done it better.

I also think this book glosses over the fact that population decline is a much bigger issue than most people give credit to. It seems likely to me that the fact that most industrialized nations are essentially in a cycle of population decline will have much larger long-term effects than are accounted for here.

So we end up with the author starting to say “we have to account for future generations,“ but then it all of a sudden transitions to only thinking about the next 20 or 30 years. There’s no real framework that we can fit any of this into; it’s just kind of a rough tour of a variety of ideas loosely related to what the author thinks is going to happen.

Attempting to be constructive, and maybe my biases are leading me too much here, but I see a glaring hole: the lack of a framework and structure around decision-making and the decreasing levels of certainty for all of this stuff. Not to be too much of a Hari Seldon fanboy, but you could set up a log timeline around 10-, 50-, 100-, 500-, and 1,000-year bounds, account for your assumptions and the increasing chance of seeing them proved wrong, use your pseudo-mathematical mumbo-jumbo to create some bands of probability, and mix in some complexity theory. At least then everyone is talking the same language when discussing this type of stuff, and you end up with something that strikes closer to being practical than this book could ever hope to. I think that would’ve greatly improved this book.
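
One way to read that suggestion, as a rough sketch with made-up numbers rather than anything from the book: attach an assumed per-year chance that a core assumption breaks, and report how much confidence survives at each log-spaced checkpoint.

```python
# Toy version of the reviewer's suggested framework: log-spaced horizons with an
# explicit, widening band of uncertainty. The 2%-per-year "assumption failure" rate
# is invented purely for illustration.
horizons = [10, 50, 100, 500, 1000]   # years out
annual_failure_rate = 0.02            # assumed chance per year that a core assumption breaks

for years in horizons:
    p_hold = (1 - annual_failure_rate) ** years
    print(f"{years:>5}-year horizon: P(core assumptions still hold) ~ {p_hold:.3f}")
```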

Conceptually, and without those structures, you’re essentially left with what amounts to a narrative device for speculating wildly about how things will impact the future. I have a huge issue with the high-level positioning of this. Realistically, this book’s premise can be used to justify any sort of plan you have. It’s a fresh way to claim a moral high ground by saying, “Well, I may cause issues today, but in 1000 years it’ll result in global utility and happiness, so my plan is actually better.” It’s faux-utilitarianism with a gym membership. “We have to optimize for eventual future people and that justifies XXXX plan I have.” It’s just a slightly improved version of “the Keebler elves approve my message.” Without applications, it’s just philosophical masturbation.

Nobody knows. Right? We can try to guess but I think to really talk about this there’s a really high level of humility needed and that is not something I sensed at all in this book. Instead we get into sort of a pseudo-mathematical “utility of one life if they’re happy is +1 but -1 if they’re unhappy.” It doesn’t work. It’s a sort of philosophical navel gazing that makes me hate philosophers. And it’s the sort of seemingly rigorous framework without actually having any rigor behind it, a laundering of shitty ideas using mathematics that also makes me hate philosophers. MacAskill spent so much time on this and I thought it was the worst section of the book.

So what I’m left with is essentially an interesting thought experiment. And a bunch of really mediocre conclusions and implementations of it.

If we’re going to speculate, let’s speculate. Talk to me about alternative governments like David Graeber, utopian ideals like Bregman. If you’re a philosopher guy, stand on the shoulders of the giants in that field. Don’t be constrained by what we have here right now. I can’t predict the future any more than this guy can, but I do see a glaring need for a level of optimism and ideas about where we’re going as a society. This book could’ve been really impactful.

Dystopic AI overlords, increasing drumbeats in the media about civil war in America, the very real decrease of American hegemony worldwide, Mars and space. These are things that can be scary, and they probably should be viewed with some level of trepidation. They represent a radical change in the world and our lives. I think there is sort of a spiritual, divine need of purpose, an Octavia Butler Earthseed-esque-sized hole in the world.

This book positions itself to tilt at that windmill and then it decides to play in the mud, splashing at the reflection instead of galloping towards the rotating sails.

So ultimately the problem is this book doesn’t go far enough. There’s not enough detail at times and too much at others, on things that don’t matter. And I don’t think I make this leap unnecessarily, but this was an added brick in the wall of some of the frustration I have with EA. MacAskill is really involved in effective altruism, a movement I agree with at a high level, but I also struggle with the details, because for all of the wonderful ideals, it seems like the best practical implementation they have is buying mosquito nets. Big ideals, a lot of empty talk, and a niche practical application. It’s an apt analogy to this book.
Fin Moorhouse
74 reviews · 108 followers
December 18, 2022
Extremely ambitious, and extremely persuasive.

Future people count, there could be a lot of them, and we can make their lives better. What We Owe the Future is a book about these three ideas, which come together in longtermism: the view that we as a society should be doing far more to protect future generations.

When we take a million-year view on our place in history, what issues come most into focus? What matters most from this vantage point? And what can we do about it? MacAskill (plus a small army of researchers and fact-checkers) sets out to find answers, and the result is my favourite kind of book: sweeping, meticulous, sometimes delightfully counterintuitive.

One answer is that we should avoid completely destroying ourselves, to keep humanity's potential intact. MacAskill suggests that, in this way, humanity is like an imprudent teenager:

Most of a teenager’s life is still ahead of them, and their decisions can have lifelong impacts. In choosing how much to study, what career to pursue, or which risks are too risky, they should think not just about short-term thrills but also about the whole course of the life ahead of them.

Existential risk is the subject of The Precipice: Existential Risk and the Future of Humanity. But in considering what priorities longtermism suggests, What We Owe the Future goes beyond just focusing on mitigating existential risks. It also introduces the possibility of lock-in: the thought that entire value systems could become entrenched indefinitely far into the future. We know it's possible because it's already kind of happened — but there are also special reasons to worry about lock-in soon, since advanced AI could be used to very strongly enforce even values that nobody wants. Plus, the values that shape the future could be contingent: sensitive to choices we make right now, rather than just guaranteed and unchangeable — illustrated with the extraordinary story of how the Atlantic slave trade was finally abolished.

That all suggests another memorable analogy, which captures much of this book's message:

At present, society is still malleable and can be blown into many shapes. But at some point, the glass might cool, set, and become much harder to change. The resulting shape could be beautiful or deformed, or the glass could shatter altogether, depending on what happens while the glass is still hot.

Other highlights: the most readable intro to population ethics I've come across, an in-depth look at whether civilisation could recover from catastrophe (and why keeping coal in the ground could help), a defense of 'moral entrepreneurship', and facts about Glyptodons.


What makes this book so special is that it amounts to a call to action. We face future-defining problems, yes — but we can do things about them. I am so excited about the prospect that some people could start with a kind of vague feeling of doom about the future, read this book, and take it as inspiration to start working on an effort to put the entire future on a better course.
Daniel Hageman
339 reviews · 47 followers
August 28, 2022
It was admittedly a little tough to mark this as 3 stars, though I am rounding it down from 3.5 stars which I would give had I the option. This evaluation is less comparable to other books to which I've granted three stars, and is more an instantiation of the perhaps unjustifiably high bar to which I hold Will MacAskill.

All in all, I think WWOTF is an exceptionally well-written book, connecting the abstract to the personal in a manner necessary to make salient the narrative that Will is trying to convey to a broad audience, about a topic that is more or less alien outside of the Effective Altruism sphere. As someone sympathetic to longtermism in the general sense, there were a few chapters that I was notably disappointed in, albeit the higher standard I hold this book to (especially given the publicity it's received) is likely playing a role here.

Note: many favorable readers may find comfort in the fact that I am more convinced of a flavor of negative utilitarianism than of classical utilitarianism, CU being a position I held for many years, since I first remember taking an interest in ethics many moons ago. But I would contend that most of my critiques largely stand even from a CU POV.

The opening four chapters are decidedly excellent (though I'm still confused about his opening thought experiment and the equivalent amount of time one would live as a slaveholder vs. a slave; I might have missed a relevant variable here), and it's great to see the issue of value lock-in discussed in a full-fledged text. This is the sort of robust worry that I think warrants serious reflection by those who want to influence the 'longer term' future.

In Chapter 5, on extinction, Will makes a strong, largely classical-utilitarian case to prioritize mitigation of extinction risks (as distinguished from x-risks, or existential risks, more broadly). While I think there is a legitimate case for such prioritization regardless of one's normative ethical framework, the seeming omission of any serious discussion of s-risks presents itself as a major fault in this section. While various suffering-filled trajectories (especially from an anthropocentric POV) are covered, it seems that Will leaves a lot on the table when it comes to sincerely thinking through the threatening trajectories the universe could take (modulo some discussion about misaligned AI), arguably leaving many readers with a sense of optimism, and dare I say Pollyanna bias, very much intact. **Note, I grant that s-risks could be somewhat esoteric and have less appeal to a general audience, especially given their often morose nature, but I lean towards thinking a more in-depth discussion would nevertheless have been warranted, all things considered. Hopefully the pending work of Tobias Baumann, and his upcoming book on the topic, will help fill this gap :)

Part IV of the book, about assessing the end of the world, includes Chapters 8 and 9, covering population ethics and the expected net value/disvalue of the future. In Ch. 8, Will does an overall impressive job elucidating the area of population ethics for a broad audience, and clearly intends to give a fair and level-headed overview of the various theories. That said, I found it puzzling that he spent time, as expected, on the Repugnant Conclusion but fails to mention the 'Very Repugnant Conclusion' as part of the case against classically totalist views in population ethics. Given that Will identified the VRC as the strongest argument against his personal totalist view, back in August 2019 on the EA Forum, it seems odd that he would leave it out of his book even as he highlights the various counterintuitive conclusions that other theories must concede. **Note, he did not include negative utilitarianism, and related population-ethics-relevant theories, in this section. Will wrote this chapter quite well, but given his statements in the past, this sort of omission was quite surprising. Further, regarding the issue of ‘making happy people’, the lack of more in-depth discussion on the ‘procreation asymmetry’ leaves much of Will’s conclusion on shaky ground. [*Edit - a more thorough critique of this chapter can be found here, which makes my qualms seem borderline elementary :) https://forum.effectivealtruism.org/p... ]

In Chapter 9, I probably had my strongest disagreements with Will (if I keep calling him by his first name, it implies that he and I are best friends, right?), or at the very least found my disagreements most surprising, even given that I knew his preferred normative framework (classical utilitarianism, in the sense that increasing the pleasure of those already in a net-positive pleasurable state can outweigh the suffering, or increased suffering, of others in a net-negative state). In this chapter, Will's willingness to hypothetically live the consecutive lives of every human who has ever existed was quite shocking. Typically, when one considers future torture-level suffering, they refrain from thinking their fluctuating everyday life makes such torture worth enduring (I have met those who genuinely think such lives, with non-nominal amounts of torture [read those last four words again... wow], are justified, but there is a separate disagreement to be parsed through there, regarding the reliability of our evaluative judgements when outside the experiences themselves, the nuances around such considerations, etc.). Of course, living each life would entail, I presume, not knowing that you would live a next life with an arbitrary amount of suffering (it wouldn't be the same life otherwise), and this creates some noise in the evaluation of such a thought experiment (and also renders it perhaps not as useful as it may seem, prima facie).

While the disagreement about how to evaluate various durations of experienced human suffering would require an in-depth conversation about how human psychology relates to various valence states of suffering and pleasure, it might have been some of Will's comments about non-human animals that struck me as most surprising in this chapter. While he did support the notion that the net well-being in the lives of farmed animals is plausibly negative, the manner in which he juxtaposed this moral weight with the moral weight of humans was quite unexpected. His actual views in this section, where he frequently refers to non-human animal 'interests' instead of 'hedonic well-being', make it difficult for the reader to decipher where the author stands (cue 'moral uncertainty'). Nevertheless, even his reference to neuron count as a 'rough proxy' for moral weight seemed a bit misplaced as he laid out how we ought to consider whether living all such animal lives seemed reasonable. Listing one such heuristic, without mentioning others that push in the opposite direction (Darwinian reasons for less cognitively inclined animals to experience greater depths of suffering/pleasure), seems unreasonable in this instance.

And lastly, in Chapter 9, he comments on the issue of wild animals. And while Will does a laudable job of highlighting that the lives of wild animals are likely not the 'rosy picture' that so many instinctively layer onto their evaluations of such lives, his takeaway that wild animals' lives are 'at best highly unclear' with regard to net well-being is quite surprising to me, and lets 'at best' do some serious heavy lifting. Admittedly, this could be a result of my diverging intuitions about the relative badness of extreme suffering, and the fact that Will's concluding remarks in this section state that it's plausible, albeit uncertain, that wild animals have net-negative lives leaves me somewhat at peace with this section, given the target general audience.

Rounding things out in this chapter, Will's optimism about the likelihood of eutopia over anti-eutopia plausibly reflects a further degree of differing confidence in how humans will interact with each other, nature, and technology in the years to come, but nothing that I can adequately defend at the moment beyond pointing to resources about the seriousness of s-risks and the moral luck that has guided us to even this current point in history (a point which, upon reflection, is a far cry from warranting a moral pat on the back).

**Relevant quote from Magnus Vinding**

“I would argue that we are [living in a dystopian nightmare] (although I acknowledge it could get far worse). Factory farming (of land-living animals: >70 billion victims a year, has been growing exponentially; invertebrate farming is now starting to emerge), fish farming (>100 million victims annually), human fishing in the wild (1-3 trillion victims annually), wild-animal suffering (10^15 victims annually if we're just talking about vertebrates). From a non-anthropocentric perspective, this is worse than any dystopian nightmare I've seen described in fiction.”

All in all, I respect the hell out of Will MacAskill, and think that everyone should read and reflect on this work and his others. Attempting to critique the work of someone much smarter, kinder, and more altruistic than myself is never a fun thing (albeit intellectually stimulating, perhaps), but I do so knowing that he also has many other fans and followers who will equally be promoting the extent to which they agree with his way of thinking on these topics, so I find comfort outside any worry that I may be too harsh.
Tom Mooney
713 reviews · 229 followers
September 13, 2022
This comes across as deeply flawed and incredibly privileged, particularly being published at a time when many people even in rich western countries are wondering how they'll make it through winter.

MacAskill's big idea - that we should act and plan to benefit the potentially trillions of humans yet to be born - sounds interesting and potentially persuasive on paper. It's the idea that, were a building burning and you could save one person or ten, you'd save the ten.

Problem is, Will old pal, if the one was my kid, I'd let a billion people burn rather than them, let alone ten. And that's a human instinct you'll never change. (In fairness, he does address this impulse, but in my view is pretty dismissive of it).

MacAskill's blueprint is also rife with things I disagree with - he seems to advocate for a sort of kinder capitalism as a solution. Donate some of your wages to progressive charities, get educated in important sectors, pick a career in a progressive future-proof industry... Etc etc. I mean, come ooooon. How out of touch is that? Higher education is a privilege the vast majority of the world doesn't have access to, for a start. Great for Will, he earns a bunch of cash working at Oxford University, he can donate his cash to funky charities. But for most of us, that isn't even remotely an option. I'd prefer we burned capitalism to the ground in favour of something better.

Out of touch, privileged, and wrong.
Matt Lillywhite
175 reviews · 71 followers
December 8, 2023
“This book is about longtermism: the idea that positively influencing the longterm future is a key moral priority of our time."

I love the front cover of this book and it was recommended by Ali Abdaal. So I was excited to read What We Owe The Future.

The author, William MacAskill, says there could be a lot of future people. Billions. Way more than exist right now and have ever existed in the past. So, when making decisions about our life and planet, we should keep future generations in mind. After all, they will experience the positive (or negative) impact of our decisions.

5 stars.
Michael Chen
7 reviews · 5 followers
January 27, 2022
I’ve read a draft of the introduction and first chapter and wow, MacAskill's writing here is incredibly beautiful and compelling. Can’t wait to read the rest of the book when it comes out in August.
Tobias Leenaert
Author · 2 books · 149 followers
August 30, 2022
Quite a good read, though I'm not sure why Toby Ord and Will MacAskill didn't combine forces to write a single book that merges The Precipice and this one. Reading both feels a bit redundant in parts.
Still, well written and researched, thought-provoking, and positive.

Some points from it:

* We need to keep in mind that the future will probably be vastly bigger than the past: soooo many more people (and other living beings) might live there compared to the beings that are presently living or have lived. Therefore caring about future beings is of incredible moral importance

* We might be living in an era of great "plasticity", where we can still change this future. Later, like molten glass, it might set and be far less changeable

* We should pay particular attention to new challenges as they arise, before they are out of our control (e.g. AI)

* Some events might be highly contingent, which means that they were not inevitable. The abolition of slavery, for instance, might not have happened at all.

* Britain abolished slavery and enforced abolition worldwide at significant cost. This campaign has been called "the most expensive international moral effort in modern history"

* It's totally fine to have weird (i.e. revolutionary) ideas, but that doesn't mean we should engage in weird actions that alienate others.

* Particular values and ideologies can get "locked in" for hundreds or thousands of years (e.g. Confucianism). It is possible that in the future we could have very harmful ideologies locked in for a long time, e.g. with the help of AI.

* (Technological) progress depends on how many scientists and other people we have working in particular fields. A smaller population means fewer experts, and potentially slower progress.

* It's possible that progress is slowing down and that good ideas are harder to find. The difference between 1920 and 1970 in terms of progress was a lot bigger than that between 1970 and 2020 (e.g. time to get around the world). Stagnation is possible

* Once we started burning fossil fuels, further tech progress was the only hope for giving us a shot at averting a climate catastrophe without falling back to pre-industrial levels of material hardship

* People have different intuitions on whether it is good or neutral to add more (happy) people to the world. Personally I am still indifferent between a billion and ten billion happy people; MacAskill initially seemed to think so too, but he now thinks that more is better. This is a question within the domain of "population ethics", a notoriously difficult field in philosophy, and it's a question that has not been solved.

* It's hard to assess if we're getting happier overall, but MacAskill thinks so, based on the available research

* Re. non-human animals, the lives of farmed animals have been getting worse. The wellbeing of wild animals is hard to assess for now.

* Putting a child into the world is a good thing, according to the author.

* We should try to take actions we can be comparatively confident are good; increase the number of options open to us; and always try to learn more

* When we want to change things, focusing on our own or others' personal behavior is "a major strategic blunder", though some consumption choices (like going vegetarian or vegan) have a bigger impact than others (like trying to reduce plastic). Donating and political activism can be much more impactful ways to make the world a better place

* The biggest positive impact we can have is with our careers.

Jacob Williams
512 reviews · 11 followers
September 10, 2022
What We Owe the Future changed my thinking on two things.

1. The contingency of moral values. Previously, I believed the most important driver of moral progress in society was economic and technological progress: people become more willing to do the right thing as the sacrifices they have to make to do so become smaller. MacAskill looks at the abolition of slavery in the British empire as a case study for this claim, and notes it's not the dominant view among historians. I was surprised to learn of the economic cost Britain accepted:

In the years leading up to abolition, British colonies produced more sugar than the rest of the world combined, and Britain consumed the most sugar of any country. When slavery was abolished, the shelf price of sugar increased by about 50 percent, costing the British public £21 million over seven years--about 5 percent of British expenditure at the time.


and:

The British government paid off British slave owners in order to pass the 1833 Slavery Abolition Act, which gradually freed the enslaved across most of the British Empire. This cost the British government £20 million, amounting to 40 percent of the Treasury's annual expenditure at the time.


MacAskill presents the case that abolition depended on a drastic change in moral beliefs, which in turn depended on both the long-term efforts of an initially small group of activists and a great deal of good luck. Relatively small deviations from our actual history could have resulted in slavery continuing to be viewed as morally acceptable now and indefinitely into the future. (And even if abolition were more or less inevitable, how long it took to happen might have been very contingent - perhaps slavery could have persisted for centuries longer.)

So, explicitly trying to change society's values can have enormous payoffs. Such efforts may also be less susceptible to one of the main objections that has generally made me skeptical of attempts to improve the long-term future - the unpredictability of long chains of cause and effect:

...from a longtermist perspective, [values changes] are particularly significant compared to other sorts of changes we might make because their effects are unusually predictable.

If you promote a particular means of achieving your goals, like a particular policy, you run the risk that the policy might not be very good at achieving your goal in the future, especially if the world in the future is very different from today, with a very different political, cultural, and technological environment. You might also lose out on the knowledge that we will gain in the future, which might change whether we even think that this policy is a good idea. In contrast, if you can ensure that people in the future adopt a particular goal, then you can trust them to pursue whatever strategies make the most sense, in whatever environment they are in and with whatever additional information they have.


2. The risk of technological stagnation. This is a concern I hadn't really been exposed to before:

...as we make technological progress, we pick the low-hanging fruit, and further progress inherently becomes harder and harder. So far, we've dealt with that by throwing more and more people at the problem. Compared to a few centuries ago, there are many, many, many more researchers, engineers, and inventors. But this trend is set to end: we simply can't keep increasing the share of the labour force put towards research and development, and the size of the global labour force is projected to peak and then start exponentially declining by the end of this century. In this situation, our best models of economic growth predict the pace of innovation will fall to zero and the level of technological advancement will plateau.


Based on a 2020 University of Washington study, MacAskill thinks population will actually decline, and notes that "[f]or twenty-three countries, including Thailand, Spain, and Japan, populations are projected to more than halve by 2100; China's population is projected to decline to 730 million over that time, down from over 1.4 billion currently." This seems pretty scary to me: intuitively, if there are fewer people to divide humanity's labor among, then each person has to take on more work and/or lower-priority work just won't get done. I find it easy to imagine society's willingness to fund speculative research declining in such a scenario.
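
A toy sketch of the growth logic the quoted passage gestures at, assuming a generic "ideas get harder to find" production function with invented parameters (this is my own illustration, not the book's model): once research effort stops growing and starts shrinking, the flow of new ideas drifts toward zero.

```python
# Toy semi-endogenous growth sketch: new ideas come from researchers, but each new
# idea is harder to find as the knowledge stock A grows. Every number is invented.
A = 100.0            # technology / ideas stock
researchers = 100.0  # research labor
beta = 0.5           # how quickly ideas get harder to find
shrink = 0.99        # research labor falls 1% per period once population peaks

for period in range(1, 301):
    growth = researchers / A ** (1 + beta)  # ideas produced this period
    A += growth
    if period > 100:                        # labor force peaks, then declines
        researchers *= shrink
    if period % 100 == 0:
        print(f"period {period:>3}: A = {A:7.1f}, ideas this period = {growth:.4f}")
```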

Why should we care? Well, I like technology. But MacAskill gives some less biased reasons to worry, including the possibility that we are nearing an especially dangerous time to stagnate:

We are becoming capable of bioengineering pathogens, and in the worst case engineered pandemics could wipe us all out. And over the next century, in which technological progress will likely still continue, there's a good chance we will develop further, extremely potent means of destruction.

If we stagnate and stay stuck at an unsustainable level of technological advancement, we would remain in a risky period. Every year, we'd roll the dice on whether an engineered pandemic or some other cataclysm would occur, causing catastrophe or extinction. Sooner or later, one would. To safeguard civilisation, we need to get beyond this unsustainable state and develop technologies to defend against these risks.


Toward the end of the book, MacAskill suggests that one good way to help the future is simply to have children. It seems to me that if his argument is correct, trying to increase fertility rates around the world could be an important line of research for longtermists to pursue. Speaking personally, the expense and (especially for young children) work effort involved in parenting scares me away from it - it feels like I'd have to accept both a dramatically higher stress level for many years and a drastic reduction in the time I invest in other pursuits that I care about. If this is a common attitude, but declining fertility rates are a threat to society as a whole, then we should be looking for social innovations that make parenting significantly less burdensome.
Frantisek Spinka
19 reviews · 3 followers
November 1, 2022
I do not usually rate books, but this is an exception, and I am doing it simply to say the following: don't read this book. It's a bunch of conjectures that hardly amount to any kind of serious argument. It seems that the main reason we are given to believe them is the incredible number of researchers MacAskill consulted while writing the book.

For example: how the hell is the stagnation of the human race and its well-being measured primarily in technological progress? Why does MacAskill dispose so quickly of the notion of partiality? Additionally, measuring everything by expected value just seems too simplistic for the complexity of the decision-making humans do, and MacAskill never acknowledges this. Take his discussion of a nuclear holocaust that would wipe out 99 percent of humanity. Expected-value theory seems to say that what rules our decisions in this case is simply the value of human civilization surviving. The issue with this, of course, is that it completely does away with the value that already exists, that is "out there" in the world. The value of those who lost their lives is, on the expected-value theory, basically irrelevant.
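
For readers who have not met the argument, this is roughly the arithmetic being objected to, with entirely made-up figures: once potential future people are counted, the gap between "99 percent die" and "everyone dies" dwarfs the gap between "no catastrophe" and "99 percent die".

```python
# Illustrative only: the population figures are invented to show the shape of the
# expected-value comparison the review objects to; nothing here is from the book.
current_people = 8e9
potential_future_people = 1e15   # assumed number of people who could ever live

value_no_catastrophe = current_people + potential_future_people
value_99_pct_die = 0.01 * current_people + potential_future_people  # survivors rebuild
value_extinction = 0.0                                              # no future at all

print(f"Loss from 99% dying:        {value_no_catastrophe - value_99_pct_die:.2e}")
print(f"Extra loss from 100% dying: {value_99_pct_die - value_extinction:.2e}")
# The second gap dwarfs the first: the people who actually die barely register next to
# the hypothetical future, which is exactly the move the reviewer finds objectionable.
```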

The discussion of the so-called 'fragility of identity' is also puzzling. MacAskill basically makes the banal observation that identity changes with every kind of decision you and other people make. But he reads this in a very strong way, failing to differentiate between more and less stable aspects and features of identity. Of course decisions change identities, but simply saying that my not taking the bus to work completely changes who I am as a person and what human civilization amounts to (yes, that is his example) is just plain nonsense.

Lastly, measuring future well-being is notoriously tricky. Well, not for MacAskill, who (as a vegetarian) considers the suffering of animals to be bad, but then quickly adds an argument that is supposed to establish that it does not really matter that much, because the quantity of animal suffering is irrelevant in comparison with the quantity of potential human suffering. And why is that? Well, human well-being simply counts more because - brace yourselves - humans have more neurons and thus more receptors of pleasure and pain. Huh. I am quoting:

„If, however, we allow neuron count as a rough proxy, we get the conclusion that the total weighted interests of farm land animals are fairly small compared to that of humans, though their wellbeing is decisively negative.“

I am sure that measuring suffering by the number of neurons one has is something a bunch of people would not subscribe to - among them doctors, psychotherapists, animal rights activists, etc. And I think there are a bunch of good reasons for being skeptical of it (the importance of qualia, subjective receptivity, the supposed commensurability of the quantity of neurons and the quantity of felt pleasure and pain, etc.)
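
To see what the "rough proxy" amounts to in practice, here is a back-of-the-envelope version using approximate neuron counts and round population figures that I am supplying myself; they are not the book's numbers.

```python
# Back-of-the-envelope neuron-count weighting, the 'rough proxy' the review criticizes.
# All figures are rough estimates supplied for illustration only.
human_neurons = 86e9              # ~86 billion neurons per human (rough estimate)
chicken_neurons = 0.22e9          # ~220 million neurons per chicken (rough estimate)
humans_alive = 8e9
chickens_farmed_per_year = 70e9   # order of magnitude for farmed land animals

weight_per_chicken = chicken_neurons / human_neurons
weighted_interests = chickens_farmed_per_year * weight_per_chicken

print(f"Weight per chicken relative to a human: {weight_per_chicken:.4f}")
print(f"Neuron-weighted chickens per year: {weighted_interests:.2e} human-equivalents "
      f"(vs {humans_alive:.0e} humans alive)")
# ~70 billion chickens come out to roughly 0.18 billion 'human-equivalents', which is
# how the proxy makes farmed-animal welfare look 'fairly small' next to human welfare.
```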

I sincerely discourage you from reading this book and encourage you to read Parfit, who apparently heavily inspired this book, but who at least offers intriguing arguments.
Max
70 reviews · 14 followers
August 28, 2022
Smooth, inspiring, very little effort to read through. Would tell my past self to get the audiobook. Overall I learned less than I hoped (but note that I'm already relatively familiar with the literature around longtermism) and e.g. found the treatment of the contingency of moral values overall a bit unsatisfying. But I think it's a great introduction to the topic of longtermism. Would probably recommend Toby Ord's The Precipice: Existential Risk and the Future of Humanity for people who are interested in a more penetrating treatment of risks to our future.
Matt
Author · 13 books · 45 followers
September 8, 2022
I read this book shortly after finishing MacAskill's earlier book, Doing Good Better. That earlier book focused on the idea of "effective altruism" - how to do the most good possible with one's time, talent and treasure, by focusing on issues that are important, tractable, and relatively neglected. So, for instance, even if cancer is a major problem, we might do better by giving money to fight malaria since the fight against the latter disease receives far less financial support than the former. And rather than volunteering for Habitat for Humanity, you might put that degree in finance you earned to good use, get a job at a hedge fund, and simply donate the gobs of money you earn to effectively run charities.

In some ways, this book is an extension of MacAskill's earlier work on effective altruism. In What We Owe to the Future, MacAskill argues for a view he calls "longtermism" - which is the view that we should take very seriously how our present action or inaction is going to affect future people, and try to ensure that we act so as to make the future as good as possible. Longtermism seems to follow from effective altruism fairly straightforwardly. If you want to do the most good possible, you should (all else being equal) focus on those activities that benefit the largest amount of people. But there are, at least potentially, a very, very large number of people who will exist in the future. Many more than exist right now. So rather than worrying about anti-malarial bednets, perhaps we should be worried about all of humanity ceasing to exist, forever, because of a catastrophic pandemic or a nasty strand of artificial intelligence.

There's an interesting question here about trade-offs, which MacAskill never really faces directly. In some cases, helping people now and helping people in the future might call for the same action. Developing cleaner forms of energy might be such an instance. But surely in some cases these two ideals come into conflict. What if helping the poor now means slowing down economic growth, which means that future people are less well-off than they could be? If the future is large enough, then a strict utilitarian calculus would seem to suggest that even *extreme* measures of present sacrifice are justified if they yield even a slight improvement in average human well-being. Should we devote our entire lives to making the lives of future people, yet unborn, as happy as possible?
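
To make the trade-off concrete, a quick sketch of that strict-utilitarian arithmetic, with every number invented for illustration:

```python
# Invented numbers showing how a large enough future swamps present sacrifice under a
# strict total-utility calculus. Nothing here comes from the book; it only illustrates
# the trade-off the paragraph describes.
present_people = 8e9
future_people = 1e14                  # assumed number of future people

sacrifice_per_present_person = 10.0   # sizeable welfare loss for everyone alive today
gain_per_future_person = 0.001        # tiny welfare gain for each future person

net_change = future_people * gain_per_future_person - present_people * sacrifice_per_present_person
print(f"Net change in total welfare: {net_change:.2e}")
# Positive (1e11 gained vs 8e10 lost), so the calculus endorses the "extreme" present
# sacrifice for a sliver of average future improvement.
```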

Other writers - like Tyler Cowen in his Stubborn Attachments - come pretty close to saying that we should. MacAskill seems to dodge the question. Perhaps, I suspect, because this book is as much a work of proselytization as it is philosophy. His goal is to win people over to longtermism, not necessarily to get bogged down in academic debates. But I view this as a missed opportunity.

I'm also a little skeptical of the way in which he avoids the overriding emphasis on economic growth that characterizes Cowen's position. MacAskill argues that economic growth cannot possibly continue indefinitely at its current rate. But his argument seems to me to rely on an overly physicalist understanding of economic growth. If economic growth simply meant making more stuff, then of course economic growth would be limited by the number of atoms in the universe. But that's not what growth is. Growth is about making things useful, not about making *things*. And, understood in that way, it's not clear that there are any inherent limits to growth. There might still be good reasons for not striving to maximize growth. But those reasons will have to be that it is not desirable, not that it is impossible.

Much of this book overlaps with earlier work such as Toby Ord's The Precipice. And in some cases - such as the discussion of the threat posed by Artificial General Intelligence - I found Ord's presentation more convincing. The worry is that AGI's values might not be properly *aligned* with those of humans. It's less clear to me that "lock-in" is a genuine concern. So while this might be the most popular "longtermist" manifesto, it is reflective of a general trend within the EA movement. I'm skeptical that this trend is a good one. We have reasons to care about the future, of course, but I suspect that there is something special about the kind of moral consideration we owe to people who actually exist. It will be interesting to watch the debate that will undoubtedly unfold in the wake of this book. I am certain it will be productive, interesting, and good for our species as a whole.

Петър Стойков
Author · 2 books · 299 followers
March 12, 2024
Utilitarianism is a somewhat strange philosophy, and even stranger are the people who adopt it in its extreme form. What those people are like, and what that extreme form looks like, you can see in What We Owe the Future.

Creating the maximum good for the maximum number of people may sound fine in theory, but things start to leave the bounds of the reasonable when you begin reasoning about the billions of future, unborn people; about how not being born is supposedly worse than being born; and therefore, to create the maximum good, we must create the maximum number of people and settle them across the entire galaxy.

This may sound like a science fiction plot, until you see that the author quite seriously proposes that we think about the future in this way and try to influence it with our actions in order to achieve the result described above.

And that result, according to the author, is to be achieved, broadly speaking, through more centralized control over what we do - of course, in order to "care for" the coming generations. That is what the book boils down to.

I have so much to say on this subject, about how much I disagree with what is written in it both morally and practically, that there is really no point in trying. The author's sheer hubris - both about his own conviction in the moral rightness and practical applicability of his ideas, and about the ability of humanity and of every individual person to make plans for the future that will survive collision with what will no doubt be a black-swan future - is simply blinding.

The lousy part is that many ultra-rich and powerful people see the world in a similar way, and give their money and exercise their power in that direction...
Carolina Zheng
79 reviews · 3 followers
November 15, 2022
This book is an ambitious undertaking in subject matter, but falls short in actually developing its arguments. Some of MacAskill's claims are hard to believe: the whole section on AGI, for example, feels like speculation, and his only sources are surveys of AI researchers on when they think AGI will be developed. His theory of cultural development as evolutionary fitness felt unbelievable to me and also had no sources. The philosophical sections on population ethics and wellbeing were interesting but left open-ended, with a statement along the lines of "such moral questions are hard and inconclusive, but we should still be longtermists." Overall, while this book provides some food for thought, I failed to be convinced of much of anything beyond the general view that we should care about future generations, which I already held.
Dhruv Anand
13 reviews · 1 follower
December 13, 2022
This is what happens when you reach the very top of Maslow's pyramid of needs.
Some pretty unconvincing arguments for Effective Altruism.
Harry
168 reviews · 19 followers
January 8, 2023
In Bavaria and Hesse in Germany there is a mountain forest called the Spessart which has, since the sixteenth century, been managed using a style of forestry called—in rather clinical modern terms—worst-first single-tree selection. That is, cutting trees surgically, one by one, so that the forest ecosystem remains strong, and always selecting the least-healthy trees so that the forest’s health improves, or at least remains steady, over time. As a result of this careful, frugal management the Spessart produces the finest oak timber in the world. On the north slope of the same mountains is a pine forest that has been clear-cut, ploughed, grazed, and used in various other ways for the same period of time. Now forestry has returned and it produces, in Aldo Leopold’s expert assessment, stands of “indifferent” Scots pine. The people who have managed the Spessart oak forest for the last five hundred years (or more—it was a Carolingian royal forest and a Frankish freemen’s hunting ground as far back as the seventh century) have done so carefully, conscientiously and intelligently, with an eye to the long term value of the forest.

And they haven’t needed a philosopher to tell them how or why. They just did.

There’s something intensely modern about a book like this, which sets out to explain to us exactly what didn’t need explaining to those past Franks, Hessians and Bavarians. It’s a similar bent of mind to the one which advances the inescapably incoherent argument that sixteen-year-olds should have voting rights because they have to live in the world being voted for. That is to say, a bent of mind in thrall to Berry’s “abstracted and oblique political compassion”; one that brooks no allocation of responsibility beyond the individual: sixteen-year-olds have to be able to vote, because our Sacred and Inviolable Individual Rights mean that no one else can be expected to look out for them. Modern readers, MacAskill seems to believe, need to be lectured on thinking long term because no one—in his view—actually has any responsibility for anything, by default, and so we need browbeating into pretending like we do.

This is, of course, plainly absurd. If sixteen-year-olds need to vote in order to get their interests looked out for, their parents have failed appallingly, both as parents and as participants in the human community. If “they’re going to live in the world being voted on” is sufficient justification for sixteen-year-olds to have the vote, how can we possibly argue against extending suffrage to fifteen-year-olds? Or ten-year-olds? Or five-year-olds? Or people-who-are-going-to-be-born-while-this-candidate-is-in-office? If MacAskill spent less time philosophising and more time doing things and having problems and actually taking responsibility in the world, I suggest, he might have realised this himself.

Beyond the foundational assumption that responsibility exists only to the extent that it can be foisted upon people, the philosophy expounded in What We Owe The Future relies for much of its force on a clumsy sort of moral equivocation typical of a certain kind of thinker. MacAskill asserts early in the book that people in the past “had less”, therefore were less happy. This forms the kernel of his argument throughout, suggesting that since we’re (by his reckoning) happier now than past people were, future people can be expected to be even happier than us. Indeed, he asks us to imagine “the very best moments in your life” in order to demonstrate that “we know life can be at least as good as it is then”: not only were people in the past miserable, people in the future are going to be deliriously happy—and get more deliriously happy over time. It is, presumably, pure coincidence (and nothing to do with, say, hedonic adaptation) that we live at a moment in time when most people trend toward a happiness equilibrium.

This despite some of the world’s most beautiful and joyous art and thought dating from the Renaissance, from Ancient Rome, or from Tang China. If MacAskill spent a little less time reading about AI and a little more with the last stanza of Dover Beach, I can’t help thinking...

Most strikingly, the whole book from concept to argumentation to conclusion rests on MacAskill’s terror of things ending. Humanity has been coming to terms with death for millennia. I struggle to interpret MacAskill’s fascination with the “deep future” as anything but a juvenile fear of mortality. Just as this whole book could be defused with a dose of some actual things worth worrying about, it could have been similarly obviated by a little maturity (which might, conveniently, have arisen from having some actual things worth worrying about).

MacAskill’s bizarre assumptions about how people approach responsibility, his presumption that wealth equates to happiness and therefore that past people were inescapably miserable and future people will be infinitely more joyous than we are, and his infantile terror of the possibility of endings make it impossible to take his book seriously and argue for his total inadequacy as the kind of person—thoughtless, unimaginative, unselfconscious, blandly vile—we want guiding our societies. At one point he sketches the terrible future awaiting us if humanity ignores his warnings: we might “remain in hunter-gatherer and farming societies for millions of years, until some natural disaster like an asteroid strike killed us all off”.

That humans have for thousands of years lived and continue to live full, satisfying lives in “hunter-gatherer and farming societies”—and that many of our glowing artistic, scientific, philosophical and moral achievements came out of precisely those “farming societies”—appears to pass him by. The easiest conclusion to draw is that MacAskill is unrelentingly miserable and purposeless in the modern world and believes everyone else must be too (his chapter on “wellbeing”, in which he lays out a good deal of global data showing that 90–95% of people alive today report being happy and then announces that he simply doesn’t believe it, suggests this might be the case). How else could he justify the suggestion that, without things getting infinitely better into a future of delirious euphoria, humanity as a whole is effectively twiddling our collective thumbs and waiting for the asteroid?

Or—and this is a very real possibility—MacAskill isn’t all that miserable. He’s just a perfect embodiment of the reason why people dismiss philosophy: someone with all his concerns taken care of, with no great pressures or responsibilities in his life, with no demands from an outside culture fixated on individual freedom and therefore with nothing else to do but emerge now and then from navel-gazing to expound his sophistry, posture as a serious contributor to our civilisation, and compete for funding from Silicon Valley magnates fascinated with space travel and eternal life. At one point MacAskill reassures the reader that even if some disaster reduced the global population by 99% to eighty million people we needn’t worry because “it is extremely likely that enough survivors would have knowledge of agriculture” for us to get by. This self-appointed wayfinder of our future, it appears, is so responsibility- and things-other-than-“philosophy”-free that he thinks the “knowledge of agriculture” encompassed in ‘seeds grow into plants; food plants are the ones you want’ is some kind of specialist insight.

Counterpoints and refutations to MacAskill’s ideas, ideology and fear of ending have been articulated since at least the invention of writing. The oldest written narrative we have tells us as much—could almost be written in response to his trembling:
“Gilgamesh, where are you hurrying to? You will never find that eternal life for which you are looking. When the gods created man they allotted to him death. [But] fill your belly with good things, dance and be merry, feast and rejoice. Let your clothes be fresh… cherish the little child that holds your hand, and make your wife happy in your embrace, for this too is the lot of man.”

Homer, too, tells us “…like the generations of leaves, the lives of mortal men. Now the wind scatters the old leaves across the earth, now the living timber bursts with the new buds and spring comes round again… as one generation comes to life, another dies away.”

This, quite apart from the doctrinal presuppositions that anchor What We Owe The Future, is by far the most damning element of the book. Even if his faulty premises were sound and his absurd conclusions unassailable, there is a profound unwisdom in the terror of mortality that underlies MacAskill’s work. Had he spent more time finding out about seeds growing into plants, or rolling over the swell on a surfboard while the sun goes down, or learning to play the guitar rather than hunched over this poor excuse for philosophy in an office somewhere, he might better understand—and better reconcile himself to—the possibility that things can end, but be beautiful all the same. He might have the maturity, not to say the sense, to understand that the Augenblick isn’t going to verweilen. That he was lucky to get it in the first place. To, as Doerr puts it, “agree to live now, live as sweetly as I can, to fill my clothes with wind and my eyes with lights, but understand I’ll have to leave in the end”.

The silence is all there is, all the ages of humanity shout back at MacAskill and his childish nail-biting, it is the alpha and the omega. And everywhere we look, we see something holy. Go listen to the tui. Go learn the sounds of the sea in its infinite moods among the faulted rocks. Go stand in the cathedral of kauri boughs and feel the air vibrate with cicadasong. Feast and rejoice. Cherish the little child that holds your hand. Go see the Spessart, that people have loved and maintained for a thousand years without a single philosopher. Turn in the turning lights. Be true to one another. Pray without ceasing.

Come dust.
Ankita Shukla
5 reviews · 4 followers
October 13, 2023
The book consciously delves into the concept of longtermism in varied ways. It begins by establishing a historical context, offering insights into the evolution of our trajectory through the lenses of learning and value lock-in systems. It meticulously pieces together parts in history, such as instances of brutality, the era of slavery, the industrial revolution, and the challenges posed by the ever-growing global population. Intertwined with these events is a touch of philosophical inquiry, adding depth to the narrative.

Moving forward, the book extrapolates current global trajectories, exploring themes like climate change, technological advancements, and broader aspects that shape the future of human civilization. It paints a comprehensive picture of where our world is heading, providing readers with a thought-provoking analysis.

In its concluding chapters, the book underscores the importance of individual actions in shaping a sustainable future. It passionately advocates for personal transformation and discusses various outlooks that society as a whole can embrace moving forward.

While there might be parts of the book that readers may not entirely agree with, it serves as a catalyst for introspection, encouraging readers to engage in deep philosophical contemplation.

Overall, the book gently persuades its readers towards a more conscious and sustainable future.
Profile Image for Brett.
168 reviews
April 26, 2023
In 2021, a news story broke that over one hundred laborers from Mexico and Central America had been held at a fenced-in labor camp in southern Georgia for years. The workers were barred from leaving the camp, held at gunpoint, and forced to pick onions barehanded for 20 cents a bucket. I bring this up because--for many news readers--it was the first account of modern U.S. slavery we'd heard about. According to the International Labour Organization, over fifty million people worldwide are living in modern slavery.

I bring up these examples because William MacAskill spends multiple chapters talking about the worldwide abolition of slavery being one of the great achievements of our society, a showcase that humanity will pass down the correct values to our descendants. He argues abolition would not have occurred due to economic reasons, but instead took place due to the actions of "a remarkably small number of people", namely Great Britain shifting away from slavery in the 19th century some two decades before the United States did. But as MacAskill celebrates Britain as an example of societal change, he fails to mention the numerous colonies that Britain maintained for another century, looting 9 trillion dollars from India and maintaining an oppressive sugar trade in Jamaica.

This is one of my major problems with What We Owe the Future: MacAskill spends the entire book cherry-picking examples of problems the Western world has "solved" and extends that thinking towards the future. He consistently fails to examine history as a whole and the modern inequalities that developing nations struggle with. He envisions a world where we might "extract all fossil fuels" and not leave enough for our descendants (I was rolling my eyes whenever he brought this up, as a world where we have extracted all fossil fuels would be one where humans wouldn't be able to survive). MacAskill consistently falls into capitalist traps, such as treating economic degrowth and population stagnation as bad, while Earth Overshoot Day happens earlier each year. If you're a person who cares at all about the climate, this book will serve as white noise for you.

I had been apprehensive of this book's "longtermism" theory, as the only experience I had with the phrase was Silicon Valley freaks like the FTX collapse guy and the weird eugenics couple that keeps getting news stories. Unfortunately, actually reading this book showed why these types gravitate towards "longtermism" thinking: it's an easy excuse to keep passing down Western social norms while ignoring developing-world inequalities for generations to come.

P.S. I almost forgot to talk about his lessons in the final chapter, which are quite literally: "take good actions", "build up options", and "learn more". I don't know who could possibly gain something from that besides a tech-bro who's excused himself from every social studies/ethics class in school via doctor's note because "this class doesn't help me get a job". An embarrassing, elementary school level conclusion for a book severely lacking in perspective.
Profile Image for Amber Lea.
741 reviews · 133 followers
March 4, 2023
This book is dumb.

The top review by Rick Wilson says "this book is more of a collection of vague notions, ideas and half baked philosophical musing" and I completely agree.

The author spends the first THREE chapters trying to convince you that you should care about future generations, which I imagine if you're picking up this book you already do. And his arguments for why you should care are really stupid. Or rather, he has one or two good arguments, and then he just keeps going until his examples get so convoluted you're like, "Why are you still talking? You really should have stopped several paragraphs ago." My biggest complaint was that instead of straightforwardly talking about issues, he felt the need to get really metaphorical, often coming up with two metaphors to explain the same concept. I'm fine with the use of metaphor, but it was serious overkill.

But once he stopped trying to convince you that longtermism is something you should care about, he just rambled for the rest of the book. I don't even feel the need to address anything he said. This guy has zero credibility as far as I'm concerned. But I gave this two stars because I think we do owe the future something, and if this convinces people to care, I guess it's not a total waste of paper.
Profile Image for Grace.
46 reviews
January 10, 2023
Painfully ignorant. It feels like religion for non-religious people. It proposes you should care about people (you don’t know, in the future) as if that’s a new concept. Clearly written by a man for men, the ideas are not remotely revolutionary. You should not need a philosopher or religious leader to tell you to care about the future and its people.

Even if you do need someone to tell you, this guy ain’t it. The idea’s specifics are incredibly capitalist. How can a monetary system that exploits present and past people be a good thing for the future? Relying on a system that actively prevents people from having their needs met (also because they don’t have enough of a completely made-up currency) will never produce a better future.

This idea of caring for future people has literally been the goal for mothers everywhere, especially those outside of colonialism who understand the importance of intergenerational connections. I recommend reading books written by BIPOC caregivers.
2 reviews · 10 followers
August 18, 2022
This may be the most thought-provoking and well-researched book I've ever read. Whether you're brand new to the ideas of longtermism or have been reading about it for years, you'll learn lots of new things.
Profile Image for Elizabeth.
47 reviews · 13 followers
August 24, 2022
He made some good and interesting points early on, but lost me completely at population ethics. That chapter was utter drivel and I almost hurled the book across the room. In the end, I’m not convinced by the argument that the more people, the better.

Basically, live a good and happy life and it will have a positive effect on the future. Feel free to skip this book and do something more productive or pleasant with your time instead. Future generations may or may not thank you.
Profile Image for Otto Lehto.
457 reviews · 173 followers
January 13, 2023
Nonpathological people care about the future. But there is a difference between caring what one is going to have for dinner tonight, caring about one's retirement, caring about one's children's future, and, in the extreme, caring about future generations hundreds of years from now. How should we navigate, and choose between, these time horizons? How should thinking about the very long future - including human civilizations that are completely unknown to us - affect our individual and collective decision-making? Or should we simply discount the far away future altogether and focus on more immediate concerns? The intriguing, shocking, and "bullet-biting" conclusion of the "long-termist" ethical position (advocated, among others, by William MacAskill, Derek Parfit, and Tyler Cowen) is that we should massively care about future generations. Indeed, if this perspective holds, we should probably direct the majority of our social and economic resources to ensuring the survival and flourishing of humanity thousands of years into the future.

It goes to the credit of William MacAskill, as an academic popularizer, that he manages to popularize the ivory-tower philosophy of Derek Parfit (who really kickstarted this whole branch of long-termist ethics) and make it immediately relatable and plausible. But perhaps this is a skill that MacAskill picks up from his other main influence, the utilitarian philosopher Peter Singer, whose career has always managed to bridge the gap between academia and the popular press. And MacAskill obviously has been doing public outreach for a while. His book Doing Good Better: How Effective Altruism Can Help You Make a Difference is already a modern classic in the field of charitable giving and the ethics of redistribution. The Effective Altruist (EA) movement, despite the bad press given to it by the Sam Bankman-Fried fiasco (which I will not tackle here), is not going anywhere. Its basic principle is sound and attractive, namely, that we should use cutting-edge social science, and prudent pragmatic decision-making, to strive to maximize the amount of good we can do in the world, instead of relying on empty words, promises, and emotions.

But how does the "long-termist" project fare in comparison to the more modest claims of the original Effective Altruist movement? This can be separated into two questions: 1) Will ordinary people and politicians be sufficiently convinced by the long-termist philosophy to turn it into a concrete policy agenda? 2) Will philosophers, social scientists, and thought leaders be attracted to the idea? I believe that the latter course is much more plausible than the former. Influencing public opinion directly might be a tall order since long-termism is heavily counterintuitive to many people. But influencing the intellectual atmosphere through subterranean means, such as converting philosophers, artists, scientists, bureaucrats, and politicians to the cause, is much more plausible. This way, popular opinion may be indirectly shifted in the EA direction.

I salute MacAskill for bringing these issues to popular consciousness, and for providing an admirably clear summary of this branch of cutting-edge practical utilitarianism. The book successfully shows, I believe, that caring about the SURVIVAL of the human species (and of conscious life in general), should be a top priority for any ethical worldview that cares about human flourishing beyond the present moment. Even if we believe that future generations have significantly less value than the present generation (for whatever reason), we should probably ensure that life as we know it goes on - and not merely goes on, but goes on in an upward, flourishing trajectory. This means giving more attention to uncomfortable but timely questions such as the management of existential and catastrophic risks (e.g., nuclear holocaust, A.I. driven human extinction, asteroid impact, pandemics). This is the only way to fulfil our duty to ensure that consciousness and happiness have a future. MacAskill rightfully points out that, while having 99% of humanity killed would indeed be an epic tragedy, having 100% killed would be infinitely worse, since this would end our civilization. The real tragedy, in the full apocalypse, would be to prevent countless generations of future human flourishing from ever coming into being. This follows from the utilitarian recognition that happy sentience, now or later, is valuable.

Despite my praise for MacAskill as a writer and for the book's utilitarian underpinnings, I have two major disagreements (and worries) about the project. The first one is an economic one. It seems to me that MacAskill underemphasizes both the normative importance and the empirical possibility of everlasting economic, technological, and scientific growth/development (understood as the continuous generation of improvements, innovations, and adaptations). On the contrary, it seems to me that Tyler Cowen is right, in his Stubborn Attachments: A Vision for a Society of Free, Prosperous, and Responsible Individuals, that continuous socioeconomic growth - or rather, our social evolvability and innovation capacity - is the most plausible candidate for an institutional framework that acts as the main driver of humanity's long-term wellbeing. My second criticism is even more serious. Indeed, it seems to me that unless this criticism is taken seriously, the whole enterprise might collapse. There is an element of massive hubris, or control mania, in MacAskill's philosophy - and, indeed, in the Oxford-driven EA movement in general. It was only a century ago that the British Empire ruled the world. It is no coincidence that the hubristic arch-imperialist Cecil Rhodes has a scholarship named after him... at Oxford. And although it would be foolish to charge the EA movement with carrying the torch of British imperialism (since their imperialism is more of the cosmopolitan variety), there is still a strong element of elitist hubris, and technocratic fervour, in its universalistic and cocksure pronouncements. And although the youngest generation of EA scholars are humbler than the know-it-all philosopher kings of the past, they could benefit from integrating much more systemic humility, uncertainty, and democratic participation into their models of the world. This is an extremely important caveat since caring about the long term might be used as an excuse for implementing radical policy change even against the democratic wishes of the population. It might become the latest justification for increased social control, "nudging," and "end-justifies-the-means" authoritarianism. As it stands, the new genre of "population ethics" could become a reprisal of progressive eugenics. To prevent this, long-termist scholars need to embrace some Humean skepticism and humility regardless of where they land on the spectrum of progressivism, liberalism, socialism, or conservatism.

Despite my concerns about the whole enterprise, I do believe that MacAskill is raising important - indeed vital - questions that need to be debated philosophically and democratically. Fusing the insights of Peter Singer, Derek Parfit, and contemporary social sciences is a valuable task in itself, even though the concrete details of MacAskill's proposal have many holes in them. Nothing about the present book should be taken as gospel. It is best taken as much-needed "Viagra of the mind": something to invigorate our thinking, take our mind in new directions, and, ultimately, to reevaluate our norms and institutions in a new light. While the critics of the long-termist project are surely partially right that we cannot ever control or predict the future, or even be sure that our present ethical principles are up to the task of effectively caring about future generations, MacAskill's book is a useful reminder that we need a) to insure our institutions against civilizational ruin and b) to pay more attention to all those beautiful people, and forms of life, that yet remain unborn. This view of the world, if stripped of its hubris, can become equally useful and inspirational - and extremely valuable in both what it gets right and what it gets wrong.
Profile Image for Ryan.
1,048 reviews
October 5, 2022
There are many things that can make a book “good,” but I think my favourite quality in a book is its potentially transformative power. Transformative books challenge us and leave us thinking about them for a long time, ideally yielding real changes in how we live our lives or allocate our resources.

Transformative books do not need to be well written or engrossing, though they can be, and I note that many are not very highly regarded. Paul Krugman, for example, read Isaac Asimov’s Foundation books and it was one motivation for him to become an economist, but when I re-read those novels as an adult I rarely think of them as well written or even particularly readable. Steve Jobs read Stewart Brand’s Whole Earth Catalogue and he credits one of its aphorisms, “stay hungry, stay foolish” as guiding his business ethos. Neither Krugman nor Jobs was unique in their response to these texts. Many religious texts, meanwhile, have motivated powerful changes in their readers, though they have also often been controversial or banned. And I will note that while the Bible, for example, has many poetic passages, it also has many long sections that are neither fun nor interesting to read.

To what extent do we still produce these transformative books, and who writes them? Science fiction seems less transformative to me today than it was in the past, though it does seem to be written better and more palatably along a number of lines now. It would be tricky to name a magazine as interesting as the Whole Earth Catalogue was, and even the internet seems to struggle to produce this sort of content. (Perhaps we see a transformative quality in the internet’s politically radicalizing effect.) For a while, I found I could easily happen upon new and interesting ideas on reddit, but the front page now rarely produces anything I find transformative, let alone interesting, sorry. As for religious texts, they seem just as popular now as they’ve ever been, but MacAskill notes here that there are more religious people in the world simply because atheism is correlated with a low birthrate and religiosity is correlated with a higher birthrate. The texts don't seem to have much transformative power for nonbelievers.

Is it possible that philosophers, led by William MacAskill and other Effective Altruists, have inherited the legacy of these transformative writers and thinkers? In twenty years, will we see them mentioned in the interviews scholars and leaders give? These books (and online forums) motivate people to take giving pledges, to change their diet, and even to donate their kidneys to strangers. And this work, What We Owe the Future, was one of the most impressive books I’ve read in some time. The ideas are often mind-blowing and the content is often fascinating even if separated from the larger argument.

This is a remarkable work.

*

In writing this post, I decided to create a Goodreads shelf called Transformative Reads.
69 reviews
June 19, 2023
If you have read Doing Good Better by William and The Precipice by Toby Ord, then you have read this book. William cites The Precipice in his book and follows the format almost exactly. He adds a little insight from his book Doing Good Better in explaining the need for long-term thinking, which is well done. He also includes easy, practical and seemingly unrelated (until you read the book) ways in which we can actively participate in fighting against humanity’s demise. These include having children, helping people be happy, donating to fight poverty, limiting global warming with personal choices or, better yet, donating to the philanthropic organizations that are leading the charge, being involved politically (vote), helping develop the right morals in society (i.e., stopping slavery, supporting democracy and women’s equal rights, etc.), choosing your career wisely, and more.

William also highlights stagnation as a real threat to our progress and delves into the circuitous philosophical question of what it means to have the “best world”: a smaller number of people being super happy or a larger number of people being just happy. (That was a merry-go-round.)

Another point that differs from The Precipice is an important one: the danger of ideas becoming solidified and unchangeable. This is particularly threatening right now as we enter the world of AI. If morals and ideas are programmed into AI, it is likely that they will become semi-permanent ideals that could persist for centuries. That is scary. Likewise, if we colonize the galaxy, the same would be true. While William’s point here is a good one, his conclusion of waiting until we get it right is off. We will never get there, so we can’t wait. But we must work to build a world that allows for some plasticity in the future for new thought and ideas to germinate.


Profile Image for Eitan.
3 reviews
January 2, 2023
Really great book! MacAskill tackles some of the most important issues of our time - and presents a cogent argument for why the era in which we live may be critical to the future of humanity. The book moves from ambitious, grand analyses of humanity's progress over the millennia to esoteric ethical and philosophical discussions and concludes with practical advice for readers interested in helping humanity build a better world - and avoid a much worse one.

The weakest element of the book is that it sometimes gets lost in abstruse philosophical details (a flaw the author notes himself before embarking on his discussion of population ethics).

This book broadened my perspective on the history of civilization and technological progress and introduced me to new concepts, such as population ethics. What We Owe the Future is a very mind-opening read. Perhaps most importantly, it is the type of book that would make the world a better place if everyone was interested in reading it.

Among many other elements of the book, he discusses the growing risks to humanity from Artificial General Intelligence (AGI), engineered pandemics, and great power conflict and offers an interesting perspective on how we, together, can protect humanity from these growing threats. His analysis balances clear-eyed assessments of these risks with real cause for optimism and practical advice for contributing to the long-term goals of protecting humanity from its own destruction and building a better future.
Profile Image for William Aicher.
Author 23 books · 326 followers
October 23, 2022
A bit more in the vein of "WHY we owe the future" than "what" ... but when looked at from this perspective, it makes some very compelling arguments.

Personally, I preferred Toby Ord's 'The Precipice' to this. They cover a lot of the same general topics, which makes sense since he and MacAskill are friends/colleagues. But my preference may simply be because I read 'The Precipice' first.

Regardless, I recommend everyone with any interest in ethics and the future of civilization read this book. Especially as he makes strong arguments that some of the conventional thinking of ways to make a meaningful difference may be erroneous.
Profile Image for Daniel Taylor.
Author 4 books · 86 followers
September 26, 2022
This book caught my eye in a bookshop. My mentor Dr John F. Demartini teaches students to think 1,000 years into the future. Brian Tracy and Jack Canfield teach that the longer the time perspective you can bring to your decisions, the better they will be. Until I found What We Owe the Future, I'd never considered taking a million-year view.

MacAskill is a clear writer who gives excellent explanations of the topics he explores. Those topics range from AI to war. He argues for readers to embrace longtermism, a philosophy that considers how the actions we take today will affect countless future generations.

I think high schools and universities should immediately place it on their required reading lists. This is a book that will help you think through the issues we can expect to face in the future, and what you can do when considering the decisions you make.
Profile Image for Christopher Hudson Jr..
80 reviews · 23 followers
October 25, 2022
I came into this very sympathetic to longtermism and related views like EA and consequentialism. The book is good, but I was honestly expecting more. If you’re already convinced about existential risks and that future people may matter, there isn’t much more substance. I think most people exposed to longtermism will be sympathetic to the weak version, but what’s contentious is how we weigh moral obligations to future people against other moral obligations. Again, the book is good and may be enjoyable to people new to these ideas, but I wish the entire book was as rich as the chapter on population ethics.
Profile Image for Maher Razouk.
718 reviews · 210 followers
September 13, 2023
Changes in values matter because they have large effects on the lives of people and other beings. But from a long-term perspective they are also especially significant compared with other kinds of changes we might make, because their effects are unusually predictable.

If you promote a particular means to your ends, such as a specific policy, you run the risk that the policy may not be very good at achieving your goal in the future, especially if the future world is very different from today's, with an entirely different political, cultural, and technological environment. You may also miss out on knowledge that will be gained in the future, which might change your belief that the policy was a good idea in the first place.

By contrast, if you can make sure that people in the future adopt a particular goal, you can trust them to follow whatever strategies make the most sense, in whatever environment they find themselves and with whatever additional information they have. So you can be fairly confident that you have made achieving that goal more likely, even if you have no idea at all what the future world will look like.

The "dead hand problem"¹ in charity illustrates the importance of promoting goals rather than means. A charity's founders often lay down a constitution that guides the charity's future behaviour in ways that become absurd over time. One example is ScotsCare - "the charity for Scots in London" - which is devoted to improving the lives of Scots in London. That specific goal made sense when the charity was founded in 1611. At the time, Scotland and England had only recently come under the rule of the same king; Scots in London were immigrants, and some of them were unusually deprived and unable to receive support from their local parish, the equivalent of social security at the time.

But that goal makes far less sense four hundred years later. London is the wealthiest city in the United Kingdom, and as far as I can tell, Scots face no particular problems there nowadays. By contrast, many areas within Scotland suffer much greater deprivation. Presumably the charity's founders did not care about Scots in London for their own sake alone; they cared about Scots in general. They would have achieved their aims better had they directed the charity to pursue the goal they cared about in the first place - "do whatever improves the lives of Scots" - rather than imposing a very particular way of reaching that goal.

For these reasons, changes in values are especially important from a long-term perspective. Looking at the past, we see that such changes have occurred and have had an enormous impact on the lives of billions of people. Looking to the future, if we can improve the values that guide the behaviour of coming generations, we can be quite confident that they will take better actions, even if they live in a world entirely different from ours, one whose nature we cannot predict.


¹ (The "dead hand" principle refers to controlling how property, power, or projects are managed after death ... Translator)
.
William MacAskill
What We Owe The Future
Translated By #Maher_Razouk