I've got a few things to say about all this SBF/FTX/EA mishegas, and I guess I might as well say them before this website dies. I want to pivot off this characteristically excellent piece from @EricLevitz. nymag.com/intelligencer/2022/11/effective-altruism-sam-bankman-fried-sbf-ftx-crypto.html
Levitz's piece gets at the essential question (why not do what SBF did?) but then dances around & never quite lands on what I think is the most important answer to it.

So. Start w/ the fact that EA is basically an attempt to put utilitarianism to use in philanthropy.
The idea is to do the most good per dollar spent. As many people (including @mattyglesias) have argued, this basic heuristic is sensible and badly needed in the philanthropic sector, where there's a *lot* of feel-good spending w/ mediocre results. So far so good.
The tricky part comes when you treat utilitarianism less as a kind of rough heuristic and more as a kind of logical, quasi-mathematical rule. As many many philosophers have pointed out over the years, that leads to some highly counter-intuitive results.
Some folks point to counter-intuitive results as a reason to reject utilitarianism; others point to those results & say "tough shit, logic is logic." So you have to accept, eg, that you can't buy your kids nice things for xmas b/c the money would save more lives in Africa.
EA has recently run into a version of this dilemma. "Longtermism" is the idea that there will be *many more future lives* than there are lives today, so action to prevent, say, a small risk of extinction trumps action to address even large current problems.
Oh, I meant to say, even prior to longtermism, you have the more prosaic result that, on a pure utility basis, it's better to make a shitload of money & donate it to EA than it is to go work for some charity. Thus "earn to give."
So SBF was convinced that becoming a hedge fund dude & contributing to efforts to forestall the AI apocalypse was how he could maximize the utility he produced. He may even have convinced himself that being a Ponzi scammer & giving the money to EA maximized his impact.
Now (sorry, all that was prelude), I would put it to you: IF he was correct that saving countless future people swamps other considerations, AND that AI is the biggest threat to the species, THEN he was also correct that literally anything he did to make money for EA was permissible.
IOW, if his expected-value calculations were correct, then SBF was perfectly justified in doing what he did. After all, what is a few billion in losses for some billionaires relative to the possible existence & flourishing of 10s of billions of future humans? Nothing!
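Just to make the shape of that math concrete, here's a toy version of the calculation with numbers I made up entirely (nothing below comes from SBF, EA, or anyone's actual estimates):

```python
# Toy expected-value comparison. Every number here is invented purely
# for illustration -- that's the point.

future_people = 50e9           # hypothetical count of future humans at stake
risk_reduction = 1e-4          # hypothetical cut in extinction risk bought by the donations

expected_lives_saved = future_people * risk_reduction
# = 5,000,000 "expected lives"

harm_from_fraud = 10_000       # hypothetical harm of the fraud, in life-equivalents

print(expected_lives_saved / harm_from_fraud)  # 500.0 -- the fraud "pays for itself" 500x over
```

The multiplication is trivial; 100% of the action is in those invented inputs, which is exactly where the rest of this thread is headed.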
Now, most everybody wants to avoid this conclusion, including folks in EA. So MacAskill et al. spend a lot of time in Levitz's piece basically trying to refute SBF's expected-value calculations. "What if you're caught & your crime casts doubt on all of EA?" etc. etc.
That's the part I found frustrating, because to me the problem with SBF's calculations was less moral than *epistemic*. It's less about the substance of this particular equation than about the general practice of trying to reason through large, complex systems over long time periods.
Two things:
1. Humans have radically limited information & have consistently proven AWFUL at predicting the future.
2. Humans are very, very, very, very good at bullshitting themselves, ie, "motivated reasoning" that leads to conclusions congenial to one's priors.
It follows that the bigger & more complex the systems you're reasoning about, and the farther out into the future your reasoning extends, the more likely you are to be wrong, & not just wrong, but wrong in ways that flatter your priors & identity.
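A back-of-the-envelope way to see how fast that decay bites (the 90% figure is invented for illustration):

```python
# If each link in a long chain of reasoning about a big, complex system
# holds up 90% of the time (a made-up figure), the odds that the whole
# chain survives shrink geometrically with its length.

per_step_reliability = 0.9

for steps in (1, 5, 10, 20):
    print(steps, "steps:", round(per_step_reliability ** steps, 3))
# 1 steps: 0.9
# 5 steps: 0.59
# 10 steps: 0.349
# 20 steps: 0.122
```

And that's with *random* error. Motivated reasoning means the errors aren't even random -- they lean, link by link, toward the conclusion you wanted all along.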
I always feel like this fundamental fact gets underplayed in discussions of EA or various other "rationalist" communities. The tendency to bullshit oneself is basically ... undefeated. It gets everyone eventually, even the most self-disciplined of thinkers.
If we humans overcome this at all, it is not through individuals Reasoning Harder or learning lists of common logical fallacies or whatever. If we achieve reason at all (which is rarely), we do so *socially*, together, as communities of inquiry.
We grope toward reason & truth together, knowing that no individual is free of various epistemic weaknesses, but perhaps together, reviewing one another's work, pressing & challenging one another, adhering to shared epistemic standards, we can stumble a little closer.
That's what science is, insofar as it works -- not some isolated genius thinking really hard, but a *structured community of inquiry* that collectively zigs & zags its way in the right direction. Any one of us will almost certainly succumb to self-BSing. Together? Sometimes not.
The best an individual can do in this circumstance is struggle to maintain intellectual humility & "negative capability" (the ability to sit in uncertainty w/out itching after resolution), as described in this lovely @willwilkinson post. modelcitizen.substack.com/p/before-truth-curiosity-negative-capability
Intellectual humility would suggest that, if our reasoning leads us to a place where we've justified ourselves in acting in ways that violate most people's intuitions & produce proximate suffering, we should be *very, very suspicious that we have bullshitted ourselves*.
We might not be able to identify the flaw in our reasoning -- you can't see the back of your head -- but in general, the more the conclusions posit future, abstract benefits to justify proximate cruelty & suffering, the more suspect they are.
So SBF might not be able to find any flaw in his longtermist reasoning, but he should have taken much much more seriously that a) the mid-to-distant future is wildly unpredictable & the idea that AI is or will be the top threat is just this side of a guess, and ...
b) when your reasoning has put you in a Bahamas villa, out of your head on drugs, scamming people out of money ... you should take it as a near-certainty that you have bullshitted yourself somewhere along the way. "Rationality demands I indulge my basest desires." Probably not!
This is why I have such an allergic reaction to people who claim to be Reasonable (while you, of course, are being Emotional). It's not that there's anything wrong with reason. It's that the people who fashion themselves reasonable are some of the most prone ...
... to blind spots, unexamined cultural prejudices, and just general self-bullshitting. Nothing will mislead you faster than brash epistemic self-confidence. Few people have done more damage than those convinced they are acting purely on Data & Reason.
Anyway (sorry this turned out so f'ing long), this is how I've always resolved the "repugnant conclusions" of utilitarianism taken too far. Even as a utilitarian, if you properly understand the depth & deviousness of the human tendency toward self-bullshitting ...
... you will be suspicious of sweeping conclusions that countenance dickheadedness & cruelty in supposed pursuit of "larger" long-term utility. You will know that you, like everyone, are probably wrong about a bunch of shit, even if you don't know what.
So you will probably end up with some version of: "utilitarianism is a good general heuristic, but until we know lots more & are able to accurately predict lots more, we should probably default to treating one another deontologically."
In other words, thanks to our epistemic limitations, a "dumb" heuristic that just says "when in doubt, be decent" will probably generate more long-term utility than a bunch of fancy math-like expected-value calculations. We want *resilient* ethics, not *optimized* ethics.
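For the quantitatively inclined, here's a toy simulation I cooked up (not from the EA literature; every parameter is invented) showing one mechanism for why that can be true: when your EV estimates are noisy, always taking the action with the highest *estimated* value systematically selects for estimation error -- the "optimizer's curse" -- while a boring "just be decent" rule sidesteps the worst of it.

```python
import random

# Toy model, all numbers invented. Each round offers three well-understood
# "decent" actions with modest payoffs, plus one galaxy-brained bet whose true
# value is usually slightly negative but whose *estimate* is extremely noisy.

random.seed(0)

def simulate(rounds=100_000):
    optimizer_total = 0.0   # always takes the highest estimated EV
    decent_total = 0.0      # refuses the exotic bet on principle
    for _ in range(rounds):
        decent_true = [random.gauss(1.0, 0.5) for _ in range(3)]
        galaxy_true = random.gauss(-0.5, 1.0)
        # Estimates: the decent options are easy to estimate, the exotic one isn't.
        decent_est = [v + random.gauss(0.0, 0.5) for v in decent_true]
        galaxy_est = galaxy_true + random.gauss(0.0, 10.0)

        best_decent = decent_true[decent_est.index(max(decent_est))]
        if galaxy_est > max(decent_est):
            optimizer_total += galaxy_true   # the bet looked great; it usually isn't
        else:
            optimizer_total += best_decent
        decent_total += best_decent
    return optimizer_total / rounds, decent_total / rounds

opt_avg, decent_avg = simulate()
print(f"optimizer: {opt_avg:.2f}  decent heuristic: {decent_avg:.2f}")
# With these made-up numbers the "optimizer" averages noticeably less than
# the rule that simply refuses the exotic bet.
```

None of this proves deontology; it just shows how easily "maximize expected value" turns into "maximize your own estimation error" when the estimates are bad.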
This is what unnerves me about SBF, EA, the "rationalist" community, & all similar efforts. "Reason is good." Yes! "My friends & I are the most reasonable." Probably not! You're probably blowing smoke up each other's asses, justifying each other's priors.
An epistemically modest EA would say something like: "let's shift some funding away from obviously ineffective programs to programs with proven records of effectiveness. Let's do more of what works." Who could argue? Not me!
It's when you get out beyond that, into the ether where eg "the value of all future humans" & "compound interest over centuries" become variables in your equation, that your epistemic-humility alarm should start clanging.
You -- a small flesh sack in a vast universe -- are unlikely to be perfectly Rational or to spend your money in a way that is perfectly Utilitarian. The best you can do -- what SBF should have aimed for -- is to do a little better than what came before. That's enough. </fin>