It's December, so I suppose it's time for @threadapalooza again.

This threadapalooza I will be doing a thread of thoughts on ethics.
1. I read a moderate amount of philosophy of ethics, and my considered opinion is that the "big three" (utilitarianism, deontology, virtue ethics) are all bad. All of them other than virtue ethics are *very* bad.
2. The reason Virtue Ethics is better than the others is that it at least admits to the complexity of the problem. Ethics is among the most complicated and poorly understood domains we have, and even far simpler and clearer domains cannot be reduced to simple decision rules.
3. Virtue ethics at least starts from the premise "What if we treat ethical behaviour as if it were like any other skilled practice, and look at how people learn and develop such practices?", which is the only starting point that can possibly work.
4. I think virtue ethics falls down in identifying that skilled practice (telos is fake and eudaimonia is just a thing you made up to bundle a bunch of stuff you thought was good together, sorry Aristotle), and also is often weak in its understanding of how skill develops.
5. In general I think one of the greatest weaknesses of philosophy of ethics is that it usually starts from very abstract positions, and very basic understanding of how humans are. You can't really separate ethics from practical reasoning, social behaviour, etc.
6. You also can't separate your theory of ethics from your theory of emotions, and of personal change and development. A lot of ethics writing (philosophical and otherwise) assumes you can just tell people to do the right thing and they will. That's not how any of this works.
7. So, with that ground cleared, here are some of my basic thoughts on ethics:

a) Ethical pluralism is empirically correct, with a bunch of caveats. People operate on different explicit and implicit assumptions about what's ethical, and you have no way to change this.
8. I often refer to this as relativism-in-practice - it's possible that ~everybody is wrong and moral realism is true, but practical engagement with ethics and the world requires a certain amount of acceptance of ethical theories wildly different than your own.
9. b) This doesn't mean that every ethical theory is equally good. Some ethical theories make their adherents miserable. Some ethical theories fail to achieve their own goals. Some ethical theories are too complicated to follow.
10. But in turn that doesn't mean that for any two ethical theories, one is better than the other - some will be better at achieving one sort of aim, some another, based on different trade offs and things they prioritise.
11. The point is that ethical theories are practical social technologies that can be judged on their merits as such, and on how they interact with empirical facts about the world and humanity.
12. c) This also doesn't mean that if someone's ethical theory is perfectly self-consistent and effective you have to accept it. Someone can have an ethical justification, perfectly good on its own terms, that they should murder you. You should fight them.
13. One of the consequences of pluralism is that sometimes you're just actually in conflict with other people. You can try to defuse that conflict, but this isn't always going to work. You can try to persuade them of your view, but that usually won't work.
14. Sometimes you just have to fight people. This is (ideally) not much fun, but it can be worth doing. Sometimes you can just agree to disagree, or compartmentalise the disagreement, but that only works until you have to share resources. Then you have to fight or compromise.
15. Anyway, those are some of my foundational assumptions. The questions that feel very live for me are:

1) What do ethical theories do for us?
2) How can we build and internalise better ones?
3) What does a healthy pluralism look like?

I'll talk about those later.
16. One of the key features of your ethical theories is that they structure your emotions (credit to Thomas F. Green for highlighting this for me). A lot of our most important emotions have a strong ethical component to them - guilt and shame, but also pride and satisfaction.
17. These moral emotions are basically how you know what sort of person you want to be. You feel guilt when your actions don't live up to your standards, you feel pride when you achieve something you endorse as meaningful.
18. I think the positive moral emotions don't get enough attention and a lot of what people talk about as the "meaning crisis" really means a lack of positive moral emotions around how you spend your life. If you can only feel bad about your actions, not good, that sucks.
19. Green's argument is that we learn these emotions through a process of "normation" - internalising the standards of those around us, especially those who we regard as moral authorities (family, but also teachers, mentors at work, religious figures, etc)
20. Importantly in Green's view, this results in a plurality of "voices of conscience" - you learn different ethical theories from your boss at work than you do from your parents than you do from your peers. These are often in tension. Ethical plurality even within one person.
21. I think learning moral emotions from communities is a key part of it certainly. I'm not sure it's all of it (and I'm not sure Green means it to be all of it). I think there's a lot of scope for individual refinement, especially via reconciling those tensions.
22. I think internalising bad or contradictory ethical theories from others is the easiest way to end up in a state of what I call "moral disorder", which is when your moral emotions actively work against you in a way that doesn't even succeed on their own terms.
23. The easiest (and most common) example of this is when people are paralysed by guilt - either because they experience it too strongly, or because they have too many conflicting impulses.
24. I think helping people become less morally disordered is a key missing piece of what we need, combining a mix of therapy and ethical theory. I think the therapists lack the ethical theory to do this, and the philosophers lack the therapy skills to do it.
25. I'm still figuring out exactly what works for this, but some combination of internal family systems / coherence therapy plus actually figuring out what sort of person you can and want to be is pretty good at smoothing out the edges of the problem.
26. A note on how I'm using "ethical theory" - I mean this in a fairly broad sense, to include both things like e.g. utilitarianism, or deontology, or specific instances of this, and also the "implicit ethical theories" we carry around with us as individuals.
27. Think of this as something like a skill - there's an individual instantiation of it, a broad community of practice, a discourse, etc. that are all largely covered under the same heading.

It's also similar to e.g. a genre or a shared taste.
28. Roughly by "ethical theory" I mean any way of judging things as good or bad (or more nuanced evaluative terms) plus a way of relating to those judgements. This can be purely implicit (what you feel guilt, pride, disgust, respect, etc. about) or discursive (explicit rules etc)
29. You might reasonably object that the things I'm calling an "ethical theory" are neither theoretical nor about ethics and yeah fair enough, but terms of art are weird like that, and it's the best one I've got.
30. Thomas Green talks a lot about the role of "conscience" - which he defines as "reflexive judgement about things that matter" - and our disparate "voices of conscience" that we learn from our different contexts and communities.
31. These voices of conscience are one example of the sort of thing I'm talking about as "implicit ethical theories", but I think we've also got at least two other types of implicit ethical theories:

a) Our actions implicitly model what we consider good behaviour.
32. b) What we reward or punish in others also defines a class of good behaviour.

We also have more *explicit* ethical theories, which are how we talk about ethics - what's good, what's bad, etc. - that overlap with all of these implicit theories.
33. I don't think it's possible or even desirable to have all of these different theories be perfectly coherent with each other, but I do think that when you notice a significant inconsistency between them that's often a sign of moral disorder that would be helpful to repair.
34. Often the bottleneck preventing you from doing this is an excess of guilt. If noticing that your actions don't live up to your standards is intensely painful, you'll get very good at not noticing that, or at justifying how no this is fine actually.
35. As a result one of the most important ethical skills is the ability to feel a bit guilty about something and use that to guide you to do better in future. People fail in this by having either no guilt or overwhelming guilt, neither of which are helpfully action guiding.
36. I think there's a feedback loop encouraging this - we expect people to feel this way, so our ethical reassurance reinforces this binary. You're much more likely to tell a friend they did nothing wrong than that they did something wrong but are miscalibrated on how wrong.
37. Similarly when we dislike someone or they've wronged someone we like, we tend to dial up the judgement to maximum and paint them as irredeemable, literally the worst, etc.
38. The result is a public ethical discourse of heroes and villains entirely unsuited to the reality of a bunch of flawed humans mostly trying to do their best but often failing.
39. That's not to say there are no heroes or no villains, but most of us do not inhabit those extremes. Even the heroes and villains don't really - most heroes have flaws, most villains have redeeming characteristics - but the vast majority of us are a messy mix of good and bad.
40. And it's important to acknowledge this, and have an ethical discourse that acknowledges it, because without facing the bad bits we can't do better, and without acknowledging and embracing the good bits we won't want to.
41. I think a good starting point for repairing this is that, when talking to trusted friends, we should be more prepared to confront them with their failings, and simultaneously encourage them to have perspective on those failings.
42. e.g. a friend was recently feeling overwhelmingly guilty because they had let a problem get much worse than it needed to before addressing it. My response was to tell them that yes, they had fucked up, yes this was their fault, and also seriously cut yourself some slack.
43. I think if you strike the right tone this is actually often more reassuring than "you've done nothing wrong", because they *know* they've done something wrong, so this just sounds like a lie. "You've done something wrong but are being too harsh" is more believable.
44. Similarly, I've given work advice before to people mentoring - when the people you're mentoring are beating themselves up over real failings, "You're fine" isn't reassuring. "Yes this is a problem, but it's not that bad, I'll help you fix it" is.
45. I think almost everyone needs more conversations like this in their lives. Casper Ter Kuile's book "The Power of Ritual" talks about getting together with friends in "confession groups" to basically talk about your day to day moral experience, and this sounds great to me.
46. (I've not tried this for a bunch of practical reasons, but it's on my todo list).

One interesting difference between this and actual Christian confession is that it lacks the power of forgiveness. You know your peers accept you, but they don't necessarily have authority.
47. Having an authority who can do forgiveness and redemption seems a really powerful ethical technology that we mostly lack these days. Nominally this authority is god, but really it's your local priest and community (being delegated to by god, if you prefer).
48. But we've literally outgrown this sort of centralised source of forgiveness - there are too many of us, and we are too different - and we've not really found a workable substitute for it yet.
49. One of the reasons I'm particularly keen on it being possible to do bad things without being overwhelmed by guilt is that if you can't do this, you cannot make moral progress, because if you refine your ethical theories you will necessarily realise you previously acted badly.
50. If the knowledge that you have acted badly in the past comes with intense guilt, this means that every time you become a better person you will be wracked with guilt. This is extremely counterproductive!
51. Another reason is that I think there's an intrinsic trade off in moral failings. You can fail in two ways: You can do bad things that you could have avoided, or you can not do good things that you could have done.
52. It's important to try to avoid both sorts of failing, but there will always be some of each, and trying to minimise one tends to increase the other - if you try to do good things, sometimes you'll fail in culpable ways, if you try to avoid bad things, you'll be overcautious.
53. A tendency to experience a lot of guilt will usually push you towards not doing bad things rather than drive you to do good things, and the result is a very timid ethical engagement with the world, more focused on not being blameworthy than on doing good.
54. One of the things that I think virtue ethics does well and many other theories fail on is realising that there are no one off ethical decisions. Every action you take builds on what came before and informs what comes after.
55. Ethics is not just about what you do, but about who you should be.

You cannot wall off the former from the latter, because everything you do is training for what sort of person you are.
56. A concept that is not intrinsically virtue ethical but that I mostly see in virtue ethics contexts and fits naturally there is the idea of a "moral remainder", which is that sometimes you have to do the least bad thing and feel guilty about that. People dislike this idea.
57. Let's take the trolley problem. Regardless of which option you choose, there is a moral remainder. Pull the lever, you killed someone, you should feel guilty. Don't, you let five people die, you should feel guilty.
58. Why should you feel guilty?

Well, first of all, because you do not want to be the sort of person who lets people die or kills people without guilt. This is not a good path to be on.
59. But secondly, because trolley problems do not happen in a vacuum. Something led you to this point and this lever. It may not be your personal choices, but even if it's in the world, you can still impact this. Guilt is how you know that your future actions must be better.
60. And that guilt is not there to stop you pulling the lever, or to make you pull it, it's there to drive you to find that bastard philosopher who keeps strapping innocent victims to rails and make them pay.
61. Some people's instinctive reaction to this is that it's unfair that they as an innocent bystander can be made to feel guilty. It is! It's also unfair to be nonconsensually strapped to rails. Bad things happen, and our responses to them are there to make them happen less.
62. Another type of example of moral remainders is when your decisions have led you to a point with no good options. E.g. you made a promise, and now you have to break that promise to do something higher priority (save a life, help a friend in crisis, etc).
63. Some people will say that you should keep your promises no matter what.

Those people are idiots who haven't thought this through, but they're not a million miles off.
64. You do not want to be the sort of person who breaks promises casually, but you do sometimes need to break them - you'll make mistakes, or just experience the vast uncertainty of the world.

You can't solve this by not making promises, promises are casual and ubiquitous.
65. "I'll see you at 6 for dinner" is a promise. Not a very strong one usually, but if you're constantly breaking it people will be pissed at you, and right to be so.
66. Guilt when you break a promise is how you become the sort of person whose word can be trusted. You do not want to give it up.

Sometimes you'll still break promises. That's fine. But if you're doing this on the regular, the guilt should drive you to break fewer.
67. Sometimes this will look like keeping more promises, sometimes this will look like making fewer. Both may be fine.
68. This is another area where it's important to be able to feel moderate guilt. Overwhelming guilt will make you avoidant on the subject and determined to insist that you've done nothing wrong, and really it's the other party who is at fault for being upset.
69. Anyway enough about guilt for now. One of the important questions to me is how does moral progress happen. How do we develop better ethical theories, both explicit and implicit?

I think the most common way is the one Green highlights: communities.
70. A community punishes and rewards certain behaviours. If you want to be a part of a community, you learn to take on its norms. You internalise them, and make them your own. This adds to your ethical theories and shapes your actions.
71. Another very common way is elaboration: given that this matters to me, what follows? What else should matter to me?
72. An ethical theory comes with normative guidance - what you should and shouldn't do. As you act on that guidance, you learn to do it better. This teaches you new things that matter. For example, not causing harm may matter. In order to achieve this you learn habits.
73. E.g. In a professional context you learn the skills of doing your job well. A programmer might value shipping working code, but they learn to value things like modularity, testing, etc. in aid of this. The value has a justification, but it is also held on its own.
74. Some values have particular properties designed for this sort of elaboration. One type, called maieutic (I believe I have this term from Harry Frankfurt) are values that exist primarily to help birth other values (maieutic means "related to midwifery")
75. E.g. you value having something to do with your life, so you choose to become a doctor, and learn to value being a doctor for itself. If you stopped valuing it, you would no longer satisfy the original maieutic value, but you must value it for itself to do so.
76. Unless you perceive doctoring as intrinsically valuable (there's those positive moral emotions again) it cannot satisfy that maieutic value (having something to do with your life).
77. Another type of value that has a similar but importantly different structure is what Agnes Callard calls a "proleptic" value - something you choose to take on as a value specifically with the goal of elaborating it into valuing what you actually want to value.
78. Learning the norms of a community is a good example of this. You might, e.g., choose to join a volunteer community to learn to value what they do. This won't always work, but it has a decent shot at doing so.
79. The proleptic value there being that we tend to value the people around us and their good opinion of us, so by choosing to surround ourselves with people who value what we want to value, we proleptically ("proleptic" roughly means "anticipatory") take on their opinion as valuable.
80. Another way we enlarge our values is that we encounter the world and discover that things we never knew about matter to us. You learn to value the environment by encountering nature, the problems of a group by knowing members of that group, etc.
81. But I think the really important ethical progress often doesn't come from enlarging our values at all, but instead refining them. Integrating them with each other, seeing how they all fit together, finding and resolving tensions.
82. As part of this, sometimes you are going to realise that some things you valued are in fact bad. You've internalised a bad vision of how to be in the world that does not fit with your broader ethics, and you have to let it go.
83. How? I think this is another bit that looks more like therapy than philosophy. You examine the belief, you find where it came from and what it's done and doing for you. You show it how you've changed, thank it for its service, and let it go.
84. This is hard and you'll probably have to keep doing it until it sticks.
85. Another crucial point (thanks to @selentelechia for highlighting it) is that you have to actually know what your implicit ethical theories are in order to work with them - either to execute them better, or to change them.


86. What you think you value and what you actually value based on your feelings and actions can be wildly different things. It's generally good to bring these more into line, but it's not always clear which should change, and figuring that out starts with knowing thyself.
87. One of the things that I think blocks ethical growth is that there's too much to care about. This is also why I think we need pluralism. I gave the example of professional skills earlier - every skill you acquire expands what you value, in ways that require that skill to reach.
88. As a result, in order to have a full and complete view of the moral world, and a fully realised ethical theory, you have to be an expert on everything.

Hate to break it to you, but that's not an option. This frustrates me too.
89. The result is that even in a world in which moral realism is true, you end up with each person having different pieces of the puzzle. There's a sort of ethical compatibilism where any two people's ethics can be unified, but without that unification they seem to conflict.
90. e.g. consider tension between environmentalism and human welfare. In a moral realism universe, there's some ideal balancing answer to questions of this sort which a sufficiently intelligent being could figure out. But we don't have access to such a being or theory.
91. So what happens is that people with different expertise and priorities have to argue it out, without ever acquiring enough of the other's knowledge that they can hold the whole picture in their head.
92. Back to the fear of moral progress: I think every time we encounter a new consideration there's a little bit of "Oh god I have to care about this too?"

You don't. Ethical perfection is unattainable for humans, and this is liberating, because you get to choose when to stop.
93. The downside of liberation is that now you have to deal with freedom. If you get to choose when to stop, you have to choose when to stop.
94. It's not that nobody can tell you how to make this decision - they absolutely can, and will, tell you what you should care about. They will do so loudly and often, inconsistently and without care for your capabilities and wellbeing.
95. Because for everything that you might care about, there will be someone telling you that yes you do have to care about this too.

They're not wrong but their claim is normative not factual. What they mean is that if you don't demonstrate caring, you are in conflict with them.
96. This is, I think, another failing of our public ethical discourse: There is not enough space to simply respond "Yes what you are describing is important, and I am glad you care about it, but I'm not going to."
97. What happens instead is that people ineffectually pretend to care about too many things (because pretending is enough to avoid punishment), rendering them basically unable to act on any of these things.
98. We can all be better than we are, but we can't be infinitely good. You are a finite being, in both resources and time, and deserve an ethics that is well suited to that, that helps you choose who to be in a way that lives up to your standards and respects your finitude.
99. This choice is not free, or solely personal, we do it in collaboration with all of those around us. In the best case scenario we are helped by them, in the worst we are hindered, but they are always a part of it.
100. But it is nevertheless our responsibility to choose who we want to be, and to act on that choice.

And now I leave you with this: Go forth, decide who you want to be, over and over again for the rest of your life. You'll mess up. That's OK. Do better next time. Good luck.
BIBLIOGRAPHY

Voices: The Educational Formation of Conscience - Thomas F. Green

Aspiration - Agnes Callard

Practical Induction - Elijah Millgram

On Virtue Ethics - Rosalind Hursthouse
Also various papers by Millgram, esp. some in "Ethics done Right" and "Varieties of Practical Reasoning" (I lose track of which), and various papers by Harry Frankfurt (esp "On the usefulness of finite ends").
There's plenty more about ethics I've read that probably indirectly informed a lot of this, and it's all a great big melting pot synthesis, but I think those are probably the closest I've got to actual direct inspirations for most of this line of thinking.