
Moral Uncertainty

This is an open access title available under the terms of a CC BY-NC-ND 4.0 International licence. It is free to read at Oxford Scholarship Online and offered as a free PDF download from OUP and selected open access locations.

Very often we are uncertain about what we ought, morally, to do. We do not know how to weigh the interests of animals against humans, how strong our duties are to improve the lives of distant strangers, or how to think about the ethics of bringing new people into existence. But we still need to act. So how should we make decisions in the face of such uncertainty? 

Though economists and philosophers have extensively studied the issue of decision-making in the face of uncertainty about matters of fact, the question of decision-making given fundamental moral uncertainty has been neglected. In Moral Uncertainty, philosophers William MacAskill, Krister Bykvist, and Toby Ord try to fill this gap. They argue that there are distinctive norms that govern how one ought to make decisions and defend an information-sensitive account of how to make such decisions. They do so by developing an analogy between moral uncertainty and social choice, noting that different moral views provide different amounts of information regarding our reasons for action, and arguing that the correct account of decision-making under moral uncertainty must be sensitive to that. Moral Uncertainty also tackles the problem of how to make intertheoretic comparisons, and addresses the implications of the authors' view for metaethics and practical ethics.

238 pages, Hardcover

Published October 18, 2020


About the author

William MacAskill

8 books · 652 followers
I'm Will MacAskill, an Associate Professor in Philosophy at Lincoln College, Oxford, and author of Doing Good Better (Gotham Books, 2015). I've also cofounded two non-profits: 80,000 Hours, which provides research and advice on how you can best make a difference through your career, and Giving What We Can, which encourages people to commit to give at least 10% of their income to the most effective charities. These organisations helped to spark the effective altruism movement.

Ratings & Reviews


Community Reviews

5 stars: 27 (40%)
4 stars: 20 (30%)
3 stars: 15 (22%)
2 stars: 4 (6%)
1 star: 0 (0%)
Fin Moorhouse · 74 reviews · 110 followers
September 18, 2020
Excellent, scrupulous dive into an exciting topic. Not a beach read, but probably hard to improve on as both an overview of this nascent field and a key contribution in its own right.

Often, we know all the relevant empirical facts, but we are left unsure about what (morally) we should do. This isn't a philosophical fancy: it should be familiar to basically anyone who cares about acting ethically, but takes seriously the fact that ethics is hard.

The book's first task is to explain why moral uncertainty poses a real and significant issue at all. Doesn't this question 'fetishise' rightness and wrongness as distinct from the things that make actions right or wrong? And don't we have some kind of regress on our hands as soon as we abstract away from the first-order theories? Headlines from this chapter: this is a real problem, those objections aren't so strong, and furthermore you should be morally uncertain to some degree. Look: the greatest moral philosophers of the last century were morally uncertain — what's your excuse?

The bulk of the book consists in figuring out how to decide under moral uncertainty given various levels of comparability between theories, and measurability within them. There's a nice table in the introduction laying out the possible 'informational situations' (combinations of comparability and measurability conditions), with a tick indicating the ones the book considers.

When all the moral theories you're considering are intertheoretically comparable and interval or ratio-scale measurable (the ideal cases), we should maximise expected 'choiceworthiness' (the more choiceworthy an option, the stronger the reasons for doing it). This is just like how (on most views) we should maximise expected utility when we're faced with epistemic uncertainty.
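
To make this concrete, here's a minimal Python sketch of the rule, with invented credences and choiceworthiness scores (and assuming the theories really are comparable on a common scale):

```python
# Maximising expected choiceworthiness (MEC): weight each theory's
# choiceworthiness scores by your credence in that theory, then pick
# the option with the highest credence-weighted sum. All numbers are
# made up for illustration.

credences = {"utilitarianism": 0.6, "deontology": 0.4}

# choiceworthiness[theory][option]: strength of reasons for the option
choiceworthiness = {
    "utilitarianism": {"speed": 10, "slow_down": 8},
    "deontology": {"speed": -100, "slow_down": 5},
}

def expected_choiceworthiness(option):
    return sum(c * choiceworthiness[t][option] for t, c in credences.items())

best = max(["speed", "slow_down"], key=expected_choiceworthiness)
print(best)  # slow_down: 0.6*8 + 0.4*5 = 6.8 vs 0.6*10 + 0.4*(-100) = -34.0
```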

Why not just act according to your favourite theory, the one you have most credence in? Suppose you're driving fast along a quiet road and approach a blind corner with a pedestrian crossing on the other side. In all likelihood, there's nobody around the corner and you can afford to speed on through. But there is a chance somebody will be crossing. That chance is large enough, the consequences of hitting someone terrible enough, and the cost of slowing down small enough, that it's obviously best to slow down, even when you think it's very likely slowing down will turn out to have been pointless. So it goes with moral uncertainty: as long as you have some credence in an ethical theory which tells you that taking this option would amount to an awful mistake, even if it's not your preferred theory, then you have a reason not to take this option.

One interesting objection here is that maximising expected choiceworthiness (MEC) would be way too demanding in practice. Maybe you've read Peter Singer's arguments that the relatively well-off have an obligation to donate a significant amount of money to effective charities. Even on a fairly low (≈ 10%) credence in his view, MEC would still recommend donating: on Singer's view, choosing not to donate £3000 or thereabouts is morally equivalent to standing by as a stranger drowns in front of you. Just like slowing down the car, you'd better donate that money even if you think, on balance, that Singer is wrong. The authors are happy to bite the bullet here: where this reasoning is watertight (i.e. the only alternative is to spend the money on hot tubs and fancy watches), conclusions like these really do drop out of taking moral uncertainty seriously.

Things get interesting when the theories we're uncertain about are less than fully structured. Some might only be able to rank options on an ordinal, rather than cardinal, scale. Others might retain some interval-scale measurability, but lose any direct comparability with other theories. What to do here?

In the case of merely ordinal theories, there's a fruitful analogy to be drawn with problems in social choice: given a bunch of preference orderings over options from a bunch of people, how do we pick the best option? This is a deceptively tricky question with no unequivocally good answers. After discussing some alternatives, the authors ultimately come down in favour of the 'Borda rule'.
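
For illustration, here's one way the credence-weighted Borda count could look in Python. The scoring convention (an option scores the number of options the theory ranks below it) is one common choice, and the details may differ from the book's exact formulation:

```python
# Credence-weighted Borda count for merely ordinal theories: each theory
# contributes Borda scores, weighted by your credence in that theory.

credences = {"T1": 0.5, "T2": 0.3, "T3": 0.2}
rankings = {                  # each theory's ranking, best to worst
    "T1": ["A", "B", "C"],
    "T2": ["B", "C", "A"],
    "T3": ["C", "B", "A"],
}

def borda_winner(credences, rankings):
    scores = {}
    for theory, ranking in rankings.items():
        n = len(ranking)
        for position, option in enumerate(ranking):
            # (n - 1 - position) options are ranked below this one
            scores[option] = scores.get(option, 0.0) \
                + credences[theory] * (n - 1 - position)
    return max(scores, key=scores.get)

print(borda_winner(credences, rankings))  # B: 0.5*1 + 0.3*2 + 0.2*1 = 1.3
```

Note that B wins here even though the theory with the most credence (T1) ranks A first: middling-but-never-terrible options do well under Borda.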

What about when the theories we're considering are able to measure options on some cardinal scale, but there's no obvious way to compare between them? Suppose your credences are split 50-50 between ethical theories A and B, and you're choosing between options X and Y. A assigns 10 'A points' to X and 100 to Y; B assigns 50 'B points' to X and 5 to Y. Which option is best depends on the exchange rate between 'A points' and 'B points'. How do we figure this out? The solution put forward is called 'variance voting', which normalises each theory so that the variance (the average of the squared differences from the mean) comes out the same across all theories. Again, this is a bit like a form of actual voting where voters get to express the strength of their preferences between options.
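
Here's a toy version of that normalisation for the A-points/B-points example above, standardising each theory's scores to mean 0 and variance 1 (treating the options as equally weighted, which is an assumption; the book is more careful about how the variance is taken):

```python
# Variance voting, toy version: rescale each theory's scores so all
# theories share the same mean (0) and variance (1), then take the
# credence-weighted sum as usual.
from statistics import mean, pvariance

credences = {"A": 0.5, "B": 0.5}
scores = {
    "A": {"X": 10, "Y": 100},   # 'A points'
    "B": {"X": 50, "Y": 5},     # 'B points'
}

def normalise(theory_scores):
    values = list(theory_scores.values())
    mu, sd = mean(values), pvariance(values) ** 0.5
    return {opt: (s - mu) / sd for opt, s in theory_scores.items()}

normalised = {t: normalise(s) for t, s in scores.items()}
for option in ("X", "Y"):
    ec = sum(credences[t] * normalised[t][option] for t in credences)
    print(option, ec)  # both 0.0: an exact tie (see note below)
```

(With only two options, every theory normalises to scores of -1 and +1, so a 50-50 credence split ties exactly and any imbalance simply hands the decision to the favoured theory; the method becomes more interesting with three or more options.)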

Once the authors have proposed some formal methods for comparing choiceworthiness across theories, they turn to a more general question. How often are theories in fact comparable? Aren't these solutions over-optimistic hacks trying to bridge an unbridgeable gap? Was this whole project really feasible after all? The authors (conveniently but convincingly) argue that intertheoretic comparisons really are possible here.

The last few chapters address the broader implications of the approach to moral uncertainty that's just been outlined. The first consequence is metaethical: taking moral uncertainty seriously seems to favour cognitivism, the view that moral judgements are beliefs that have truth conditions, rather than something like expressions of desire or approval.

Now we get to think about the practical implications of taking moral uncertainty seriously, and in these chapters it becomes really clear that, in addition to being intellectually interesting, this topic is often surprisingly decision-relevant. The first point the authors want to make is that incorporating moral uncertainty is a more complicated deal than the literature wants to assume. Some have argued that maximising expected choiceworthiness leads to such obvious recommendations in some cases (e.g. in deciding whether to eat meat) that first-order theorising becomes all but irrelevant. To these people: don't be so certain!

The final chapter asks about the value, on the foregoing framework, of 'moral information'. Note that we do and should value information which helps resolve more familiar kinds of uncertainty. Suppose you're buying a second-hand car, but you're unsure whether it's a dud, a 'lemon'. Presumably there is some amount of money you should be willing to pay to find out. Well, suppose you're thinking about giving your time (career) or resources (donations) to some cause, but you're uncertain about which cause is morally best. How much should you be willing to pay, in time or money, to resolve that uncertainty? A lot, according to these authors! In one plausible example, we see that a philanthropic organisation should be willing to pay $2.7 million of their $10 million pot for reliable moral information which determines how to spend the remaining $7.3 million (!). In one (highly idealised but amusingly counterintuitive) example, somebody considering how to spend a 40-year career might reasonably be willing to spend anything up to 16 years of her life studying ethics in order to figure out how to spend the remaining 24! Bottom line: moral information, even when it's less than fully reliable, can be hugely valuable.
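
The structure of the philanthropy calculation is easy to reproduce in a toy model. The numbers below are invented (this is not the book's actual model), but they land in the same ballpark:

```python
# Toy value of perfect moral information. Two causes; which one matters
# depends on which of two moral theories is true, and doing good is
# assumed to scale linearly with money spent.

budget = 10_000_000
p = 0.5               # credence in theory T1 (T2 gets 1 - p)
v1, v2 = 3.0, 1.0     # value per dollar of causes 1 and 2, on the theory
                      # that cares about each (0 per dollar otherwise)

# Without moral information: give everything to the cause with the
# highest expected value per dollar.
per_dollar_without = max(p * v1, (1 - p) * v2)

# With perfect moral information: learn the true theory first, then give
# whatever remains to the right cause.
per_dollar_with = p * v1 + (1 - p) * v2

# Willingness to pay: the cost c at which (budget - c) * per_dollar_with
# equals budget * per_dollar_without.
wtp = budget * (1 - per_dollar_without / per_dollar_with)
print(f"${wtp:,.0f}")  # $2,500,000: a quarter of the pot, the same order
                       # of magnitude as the book's $2.7m example
```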

It's worth noting that a lot of this book is fairly technical. Not off-puttingly so, but it would probably help to have some familiarity with moral philosophy and maybe a bit of decision theory. I recently read Peterson's An Introduction to Decision Theory, which came in handy while reading this; maybe worth checking out.

Overall, you get the impression that MacAskill, Ord, and Bykvist have spent a huge amount of time deliberating over objections and objections to objections, and have followed any number of promising alternatives before reaching the solutions they put forward. And clearly this is a book many years in the making: MacAskill's BPhil thesis, for instance, was about moral uncertainty, back in 2010! I also appreciated how many questions were left open or only partially resolved: you get a sense that this is a young and exciting field for research. The case has been opened, but it won't be shut for a while. Recommended!

You can access the free PDF version of this book here.
Artūrs Kaņepājs · 51 reviews · 8 followers
October 13, 2020
I could split my experience with the book into 3 parts:
Chapters 1 to 4 I enjoyed a lot; because of my own academic background, it was interesting to see that voting theory and statistical concepts can be applied like this.
Chapters 5 to 7 seemed increasingly technical, but widened my perspective a lot.
Chapters 8 to 9 were a treat everyone should read.

Ethics is hard, smart people disagree on ethics, and we tend to be overconfident: these are all reasons against being too certain. But I'm not entirely convinced by the analogy with empirical uncertainty, because the moral values of outcomes are not observable in the way that, say, quantities in the sciences or in finance are. I.e. it's not as straightforward to use empirical observations or no-arbitrage conditions to reject one theory in favor of another. Also, a more technical point: it was not clear to me whether much thought was given to the choice of variance over mean absolute deviation (MAD), which is a more intuitive and more robust measure (e.g. the MAD may exist when the variance does not, for some distributions of outcomes).
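
The contrast the reviewer has in mind is easy to see numerically. (MAD-based normalisation is the reviewer's suggestion here, not a method from the book.)

```python
# A single extreme score drags the standard deviation (the scale that
# variance-based normalisation uses) further than it drags the mean
# absolute deviation.
from statistics import mean

def std(values):
    mu = mean(values)
    return mean((v - mu) ** 2 for v in values) ** 0.5

def mad(values):
    mu = mean(values)
    return mean(abs(v - mu) for v in values)

scores = [0, 0, 0, 100]          # a theory that cares about one rare option
print(std(scores), mad(scores))  # 43.30... vs 37.5
```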

Even so, a key takeaway, as so wonderfully spelled out in the book, is: society and most individuals under-invest in deciding what matters by orders of magnitude. I hope and expect this book will help to correct it.
James · 100 reviews
March 25, 2022
This book considers some really important and interesting ideas. It's remarkable that this concept doesn't have more literature on it - although to be fair, it is a seriously messy can of worms that I can totally understand being reluctant to open. Maybe I'll reread this later, since my current formulation of moral anti-realism basically says that moral realism fundamentally doesn't make sense, which makes it a bit hard to put on a moral realist hat and stay reasonable and consistent enough to do sophisticated thinking with the hat on.

Notes:
• When unsure how to do good, there are two fundamental sources of uncertainty. The first, uncertainty about the best method to maximize a given utility function, is familiar: that's the domain of effective altruism. The second is uncertainty about which utility function we should be maximizing in the first place
○ Assumes the existence of a One True Right, and that it's knowable
○ Does the consequence of anti-realism in these calculations go to zero?
• It doesn't look like we're gonna solve morality any time soon, any more than we're going to solve optimization. So we need to know how to make decisions under moral uncertainty
• Regress objection: uncertainty about how to deal with uncertainty, ad infinitum
○ Objection clearly vacuous, would have to invalidate decision-making under conventional empirical uncertainty too
○ Very similar to brain-in-a-jar pseudo-paradox
○ Regress no more invalidates 1st order uncertainty than it invalidates 0th order (expected value within a given theory). Regardless of how we're supposed to aggregate at higher levels (if at all), we still need theories to aggregate from
• Do moral beliefs exculpate? Can we blame someone for doing what they think is right, even if they're wrong?
○ Authors say not always. Hitler example. Authors say Hitler did not take in all available information and reach a responsible conclusion, which makes him still blameworthy
○ This seems dubious. Is anything short of a perfect intelligence blameworthy then? No one can perfectly integrate and account for moral uncertainty. It seems unfair to blame people for insufficient knowledge of obscure corners of metaethics. There doesn't seem to be a sensible definition of a reasonable amount of applied effort either, especially for consequentialists.
○ Ockham's razor - why should moral uncertainty be treated different than empirical uncertainty?
• Most of these objections seem pretty immaterial, but the authors' responses seem kind of insubstantial too for some reason. Is something weird with my head here, or is it actually the text?
• Apparently people object to maximizing expected choiceworthiness on the basis that it's too demanding? Since when does morality care about how demanding it is?
• Problems of ordinality and inter-theoretic comparison are important because they apply when credence in such theories is nonzero - aka, always.
• Moral uncertainty precisely analogous to social choice for a population with all possible moral theories represented in different proportions
○ Voting theory is just social choice theory where all preferences are merely ordinal
○ Moral uncertainty seems like it may be the same problem as a fully general theory of social choice, for all conceivable individual preferences
• Voting theory is so non-intuitive. Apparently all Condorcet extensions violate updating consistency?
○ Only scoring rules, which derive cardinal scores from ordinal rankings, can be update-consistent. Neat!
• Authors agree moral indifference has no effect on decisions
• Similar uncertainty arguments can be made for vegetarianism (why risk a grave wrong for slight personal gain) and abortion (why risk a grave wrong to avert a moderate personal cost)
○ This is where a coherent way of aggregating expectations would be important. What's the ratio between one's credence that killing animals is wrong and one's credence that killing fetuses is wrong, and how does it compare to the ratio of the costs? The arguments are in the same family structure-wise, sure, but that doesn't necessarily mean they're both true or both false...
• Moral uncertainty really smooths over a lot of the jarring stuff about picking a specific set of moral axioms, doesn't it? It makes an actor that's mostly confident in strict hedonistic utilitarianism still assign some credence and hence some expected value to justice, beauty, family, etc.
• Moral uncertainty arguments are almost never straightforward enough to be useful. Very few examples where all credible viewpoints fall on the same side.
○ Strictly speaking, we never have 0 credence in any theory, so we should really be integrating across all possible moral theories.
○ Perfect bayesianism like this is obviously intractable tho, in moral uncertainty as in empirical.
• Moral information value is way under-considered, just like moral uncertainty in general. Big Long Reflection hours
• Infinitely stacked integrals over uncertainty are everywhere. There are infinite possibilities to calculate expected value over, infinite ways to do that calculation, "and infinite ways to do that calculation"^{\infty}, for both epistemic and moral uncertainty.
○ How do you evaluate an infinitely deep stack of integrals? How would a meta-integral like that even work? How do we prevent our uncertainty about the method from breaking the whole thing?
3 reviews · 5 followers
January 6, 2020
Great overview of the state of the art in the research field of moral uncertainty, and many interesting new ideas.
36 reviews
November 30, 2021
One of the things that puts a lot of folks off of utilitarianism is that it asks quite a lot of them, and it sometimes seems to imagine its adherents have the time to actually work out all the implications of their actions (something that seems at best implausible). One of the attractions of utilitarianism as a moral philosophy, for me, however, is that I believe it forces us to be quite humble in our moral proclamations and cautious in our actions. It's very difficult to know, for sure, whether your preferred political proclivities will lead to the most good in the future, so I think it asks us to be more understanding and curious in the face of our epistemic rivals. To the extent that I try to pursue a vegetarian diet, it was almost completely through this sort of logic: I'm not certain that animals have moral weight (or at least moral weight that can scale up to something I should worry about), but it seemed very plausible to me that they might. Moreover, when thinking about what sort of faults our descendants will find in our behavior many centuries from now, the conditions of factory farming seem like a front runner for the sort of thing we'd be harshly judged for.

This book was quite attractive as it offered a book length treatment on just that sort of logic from folks who have thought a lot about practical ethics; Ord and MacAskill have a good claim to be the founders of the effective altruism (EA) movement. I’ve read books by each of them and thought they were compelling and so I was excited to see what they had to say here.

The most obvious difference between the books I'd read by them in the past and this one was the density. I wouldn't necessarily recommend this book to most folks; it's quite thick with technical philosophical jargon that, even for this pretty avid philosophy hobbyist, was pretty far beyond me at times. For the most part, though, when skimming the Stanford Encyclopedia of Philosophy didn't get me what I needed, I happily noted it down in the hope that someone from the book club would know what on earth they were saying, and moved on.

What if morality’s a math problem
I think the central conceit of effective altruism is that morality can ultimately be broken down into a math problem. This is met with abject horror in some circles, and often with simple dismissal, but I'm pretty on board.

The naïve approach to moral uncertainty is simply to posit that you grant different credences to the different moral intuitions you have. An example they use is as follows:

Jane is at dinner, and she can either choose the foie gras, or the vegetarian risotto. Jane would find either meal equally enjoyable, so she has no prudential reason for preferring one over the other. Let’s suppose that Jane finds most plausible the view that animal welfare is not of moral value so there is no moral reason for choosing one meal over another. But she also finds plausible the view that animal welfare is of moral value, according to which the risotto is the more choiceworthy option.


Here Jane isn’t really sure if animal lives matter but, since she does ascribe some plausibility to animal welfare being important, and would find either meal equally enjoyable, she may as well choose the veggie option. This result is pretty obvious since it has a “may as well” quality to it where Jane doesn’t really face any tradeoffs in her decision. The authors of course go on to complicate matters.

Full review at: https://timhannifin.substack.com/p/bo...
Vidur Kapur · 131 reviews · 49 followers
September 24, 2022
An excellent examination of moral uncertainty. The basic idea has been around for quite a while (Bostrom discussed the idea of a moral parliament back in 2006, for instance), but it’s clear that interest in the area has exploded in the last decade or so, and MacAskill, Bykvist and Ord do a great job of introducing, analysing and evaluating much of this recent work.

The authors first show, quite plausibly, that moral uncertainty is a valid concept, using some simple but effective examples.

They then go on to develop frameworks for how to make decisions under moral uncertainty, depending on whether the moral views in question are interval-scale measurable and comparable (in which case they endorse maximising expected choiceworthiness), ordinal and non-comparable (the Borda Rule is endorsed here, drawing on insights from voting theory), or interval-scale measurable but non-comparable (in which case variance voting is endorsed). They attempt to combine these views to give a general account of moral uncertainty, which effectively brings back the idea of maximising expected choiceworthiness (but this time, over all moral theories).

Later in the book, they discuss some of the implications of taking moral uncertainty seriously (both meta-ethical and practical). The section on meta-ethics consists of an assault on non-cognitivism, the main variety of moral anti-realism. It is a tour de force, and should lower our credence in that view and raise our credence in moral realism.

Finally, in the chapter on practical ethics, they caution that moral uncertainty has the potential to be applied naively and carelessly and that, far from creating less work for moral philosophers, it’s likely to do the opposite. Things can get very messy, very quickly, when one considers how different moral views interact under conditions of uncertainty. It’s quite telling that, when considering these messy examples, the solution that screams out is to simply apply good old classical utilitarianism.

Ultimately, because of thorny practical issues like interaction effects and the choice of intertheoretic comparison, how we apply moral uncertainty to decision-making, they argue, will still depend in large part on the credences we place on different moral theories. I think this is correct, and people often carelessly apply moral uncertainty by saying that they, for instance, place "some" weight on things other than pleasure, like justice or truth, without specifying their credences.

For my part, I continue to see little reason to care about anything other than pleasure or suffering (nihilism or error theory seems to be the other most plausible view); the intuitions that underpin so-called common-sense morality are often arbitrary, contradictory and highly vulnerable to evolutionary debunking arguments, and deontological and virtue-ethical views seem to have been invented out of thin air. The fact that lots of people appear to “believe” in these views (though it’s doubtful that many of them do in a realist sense, in which case their views shouldn’t even be factored in) is irrelevant; a large majority of people on the planet believe in deities, but the credence I have in deities existing is very low.
Robin · 5 reviews
April 21, 2024
Ethics is hard. You can see that clearly when even smart people who have devoted their entire lives to studying ethics still end up strongly disagreeing about even the most basic principles of right and wrong. Given such disagreement, it would be overconfident to believe that you, of all people, have found the final truth; you should be open to the possibility that you are wrong. But how, then, should we act in light of this moral uncertainty? This book tries to answer that question.

The authors argue that moral uncertainty should be treated in the same way as empirical uncertainty. Under empirical uncertainty, the standard model of rational choice is to maximise expected utility, meaning you should choose the option with the highest expected utility. The analogue for moral uncertainty is to maximise expected choiceworthiness, where choiceworthiness is the strength of the moral reasons for choosing an option. What is essential about these approaches to uncertainty is that all available information is taken into account, including information from empirical or moral possibilities that are regarded as less likely.

An example of a choice under empirical uncertainty:
Julia is considering whether to drive fast around a blind corner. She thinks it is quite unlikely that anyone is crossing the road just around the corner, but she isn't sure. If she drives fast and hits someone, she will certainly injure them severely. If she drives slowly, she will certainly injure no one, but she will arrive at work a little later than she would have if she had driven fast.

In this case it seems obvious that the right choice is to drive slowly around the corner. Even if you are very confident that no one is crossing the road on the other side of the corner, the consequences of being wrong are so great that it is not worth the risk.

An example of a choice under moral uncertainty:
Harry is considering whether to have meat or a vegetarian option for dinner. He thinks it is quite unlikely that animals have moral status, but he isn't sure. If he eats meat and animals do in fact have moral status, he commits a grave wrong. If he eats the vegetarian option, he will certainly not commit a grave wrong, although he will enjoy the meal less than he would have if he had eaten meat.

The circumstances of this decision are analogous to those in the previous example. Even if Harry is very confident that animals lack moral status, his doubt creates a significant risk of doing something seriously wrong, which outweighs the likelihood of missing out on a mild benefit. If we think Julia should not drive fast in the previous example, then we should also think the vegetarian meal is the appropriate choice for Harry.

Recognising moral uncertainty can thus have drastic consequences for how we ought to act. Personally, this concept has meant a significant update in how I think about ethical questions. I have always been pretty sold on utilitarianism, but now I try to take other views into account as well.
12 reviews · 3 followers
March 15, 2021
A great review of the literature on aggregating over many conflicting moral theories, one which also proposes ideas of its own, with a focus on tractability and practical use. Helpful for moving beyond a naïve 'my favourite theory' strategy (picking the moral theory you give the most credence as capturing what you value, and acting according to it), or simple unreflectiveness, toward reasoning from a position of uncertainty about what you value.

It identifies the problems encountered in this process, like establishing a basis of comparability between theories, or handling small probabilities in theories which identify things of very high magnitude to value, and discusses various approaches to handling them, so you can decide what assumptions you want to make and be aware of their weaknesses and strengths.

I came out of reading this book wanting to construct a model of the value of various actions, and feeling equipped to know what assumptions I was making, and their consequences, in doing so.

I would also note this book works for non-moral-objectivists; there's a chapter discussing the philosophical problems with considering morality both uncertain and not objective, and while the authors conclude that there's much to be done to build a satisfactory account of that position, it doesn't stop the discussed methods for aggregating over uncertainty from working if, like me, that is in fact the position you find yourself in.
Joshua Stein · 213 reviews · 154 followers
October 2, 2023
MacAskill's more technical work on moral uncertainty is worth reading, for those who are interested. I think the issue of moral uncertainty is an increasingly pressing problem that has ramifications across ethics, politics, and law; MacAskill's contribution here is worthwhile within that technical literature. If you're not in it for the technical issues, then it's probably not for you, but if you are, it's worth a look.

You don't really need to read the whole thing cover-to-cover, in my opinion. If you're already familiar with the background theory, I'd recommend focusing on chapters 2 and 5-8. Chapter 8 is especially important for understanding the broader context of MacAskill and Ord. If (like me) you're interested in how they understand social choice, then you probably should read the whole book, as chapters 3 and 4 lay out how things fit into a broader social choice framework (though... I find those moves less compelling than other parts of the book).
Jacob Williams · 512 reviews · 11 followers
June 12, 2021
"Every generation in the past has committed tremendous moral wrongs on the basis of false moral views. ... Given this dismal track record, it would be extremely surprising if we were the first generation in human history to have even broadly the correct moral worldview."

Usually, being 100% confident of something is a sign of either ignorance or stupidity. Especially on issues that people have failed to develop a consensus on despite millennia of debate - which unfortunately includes everything about the foundations of morality. Sometimes, I try to hedge my positions on moral questions to account for the risk of being wrong about fundamental principles, but not consistently or in a principled way. This book left me feeling that I should, but that even deciding on a good way to do so is complicated and will require a lot of follow-up thought and reading.
654 reviews
July 3, 2022
Extremely important topic, though realistically not of much practical relevance for the large majority of people. While I skimmed the more technical chapters, I found chapters 1, 8, and 9 particularly interesting and helpful (on why we should take moral uncertainty seriously, the practical implications of the authors' proposed approach, and the surprisingly high value of obtaining new moral information, respectively).
Quinn Dougherty · 56 reviews · 9 followers
September 17, 2020
Really glad I read it. The standout parts were the application of voting theory and the moral information chapter. Also cool to be the second-ever reviewer on Goodreads.
