How effective altruists ignored risk

To rebuild the movement after the fall of Sam Bankman-Fried, EAs will need to embrace a humbler, more decentralized approach.

In May of this past year, I proclaimed on a podcast that “effective altruism (EA) has a great hunger for and blindness to power. That is a dangerous combination. Power is assumed, acquired, and exercised, but rarely examined.”

Little did I know at the time that Sam Bankman-Fried, a prodigy and major funder of the EA community who claimed he wanted to donate billions a year, was engaged in making extraordinarily risky trading bets on behalf of others with an astonishing and potentially criminal lack of corporate controls. It seems that EAs, who (at least according to ChatGPT) aim “to do the most good possible, based on a careful analysis of the evidence,” are also comfortable with a kind of recklessness and willful blindness that made my pompous claims seem more fitting than I had wished them to be.

By that autumn, investigations revealed that Bankman-Fried’s company assets, his trustworthiness, and his skills had all been wildly overestimated, as his trading firms filed for bankruptcy and he was arrested on criminal charges. His empire, now alleged to have been built on money laundering and securities fraud, had allowed him to become one of the top players in philanthropic and political donations. The disappearance of his funds and his fall from grace leave behind a gaping hole in the budget and brand of EA. (Disclosure: In August 2022, SBF’s philanthropic family foundation, Building a Stronger Future, awarded Vox’s Future Perfect a grant for a 2023 reporting project. That project is now on pause.)

People joked online that my warnings had “aged like fine wine,” and that my tweets about EA were akin to the visions of a 16th-century saint. Less flattering comments pointed out that my assessment was not specific enough to pass as divine prophecy. I agree. Anyone watching EA become corporatized over the last few years (the Washington Post fittingly called it “Altruism, Inc.”) would have noticed the movement growing increasingly insular, confident, and ignorant. Anyone would expect doom to lurk in the shadows when institutions turn stale.

On Halloween this past year, I was hanging out with a few EAs. Half in jest, someone declared that the best EA Halloween costume would clearly be a crypto-crash — and everyone laughed wholeheartedly. Most of them didn’t know what they were dealing with or what was coming. I often call this epistemic risk: the risk that stems from ignorance and obliviousness, the catastrophe that could have been avoided, the damage that could have been abated, simply by knowing more. Epistemic risks pervade our lives: We risk missing the bus if we don’t know the time, we risk infecting granny if we don’t know we carry a virus. Epistemic risk is why we fight coordinated disinformation campaigns, and why countries spy on each other.

Still, it is a bit ironic for EAs to have chosen ignorance over due diligence. Here are people who (smugly at times) advocated for precaution and preparedness, who made it their obsession to think about tail risks, and who doggedly tried to predict the future with mathematical precision. And yet, here they were, sharing a bed with a gambler about whom allegations of shady conduct were apparently easy to find. The affiliation was a gamble that ended up putting their beloved brand and philosophy at risk of extinction.

How exactly did well-intentioned, studious young people once more set out to fix the world only to come back with dirty hands? Unlike others, I do not believe that longtermism — the EA label for caring about the future, which particularly drove Bankman-Fried’s donations — or a too-vigorous attachment to utilitarianism is the root of their miscalculations. A postmortem of the marriage between crypto and EA holds more generalizable lessons and solutions. For one, the approach of doing good by relying on individuals with good intentions — a key pillar of EA — appears ever more flawed. The collapse of FTX is a vindication of the view that institutions, not individuals, must shoulder the job of keeping excessive risk-taking at bay. Institutional designs must shepherd safe collective risk-taking and help navigate decision-making under uncertainty.

The epistemics of risk-taking

The signature logo of EA is a bleedingly clichéd heart in a lightbulb. Their brand portrays their unique selling point: knowing how to take risks and do good. Risk mitigation is indeed partly a matter of knowledge. Understanding which catastrophes might occur is half the battle. Doing Good Better — the 2015 book on the movement by Will MacAskill, one of EA’s founding figures — wasn’t only about doing more. It was about knowing how to do it, and therefore how to squeeze more good from every unit of effort.

The public image of EA is that of a deeply intellectual movement, attached to the University of Oxford brand. But internally, a sense of epistemic decline became palpable over recent years. Personal connections and a growing cohesion around an EA party line had begun to shape the marketplace of ideas.

Pointing this out seemed, paradoxically, to be met with praise, agreement, and a refusal to do much about it. Their ideas, good and bad, continued to be distributed, advertised, and acted upon. EA donors, such as Open Philanthropy and Bankman-Fried, funded organizations and members in academia, like the Global Priorities Institute or the Future of Humanity Institute; they funded think tanks, such as the Center for Security and Emerging Technology or the Centre for Long-Term Resilience; and journalistic outlets such as Asterisk, Vox Future Perfect, and, ironically, the Law & Justice Journalism Project. It is surely effective to pass EA ideas across those institutional barriers, which are usually intended to restrain favors and biases. Yet such approaches sooner or later claim intellectual rigor and fairness as collateral damage.

Disagreeing with some of EA’s core assumptions became rather exhausting. By 2021, my co-author Luke Kemp of the Centre for the Study of Existential Risk at the University of Cambridge and I thought that much of the methodology used in the field of existential risk — a field funded, populated, and driven by EAs — made no sense. So we attempted to publish an article titled “Democratising Risk,” hoping that criticism would give breathing space to alternative approaches. We argued that the idea of a good future as envisioned in Silicon Valley might not be shared across the globe and across time, and that risk had a political dimension. People reasonably disagree on what risks are worth taking, and these political differences should be captured by a fair decision process.

The paper proved to be divisive: Some EAs urged us not to publish, because they thought the academic institutions we were affiliated with might vanish and that our paper could prevent vital EA donations. We spent months defending our claims against surprisingly emotional reactions from EAs, who complained about our use of the term “elitist” or that our paper wasn’t “loving enough.” More concerningly, I received a dozen private messages from EAs thanking me for speaking up publicly or admitting, as one put it: “I was too cowardly to post on the issue publicly for fear that I will get ‘canceled.’”

Maybe I should not have been surprised about the pushback from EAs. One private message to me read: “I’m really disillusioned with EA. There are about 10 people who control nearly all the ‘EA resources.’ However, no one seems to know or talk about this. It’s just so weird. It’s not a disaster waiting to happen, it’s already happened. It increasingly looks like a weird ideological cartel where, if you don’t agree with the power holders, you’re wasting your time trying to get anything done.”

I would have expected a better response to critique from a community that, as one EA aptly put it to me, “incessantly pays epistemic lip service.” EAs talk of themselves in the third person, run forecasting platforms, and say they “update” rather than “change” their opinions. While superficially obsessed with epistemic standards and intelligence (an interest that can take ugly forms), real expertise is rare among this group of smart but inexperienced young people who have only just entered the labor force. For reasons of “epistemic modesty” or a fear of sounding stupid, they often defer to high-ranking EAs as authorities. Doubts might reveal that they just didn’t understand the ingenious argumentation for a fate determined by technology. Surely, EAs must have thought, the leading brains of the movement will have thought through all the details?

Last February, I proposed to MacAskill — who also works as an associate professor at Oxford, where I’m a student — a list of measures that I thought could minimize risky and unaccountable decision-making by leadership and philanthropists. Hundreds of students across the world associate themselves with the EA brand, but consequential and risky actions taken under its banner — such as the well-resourced campaign behind MacAskill’s book What We Owe the Future, attempts to help Musk buy Twitter, or funding US political campaigns — are decided upon by the few. This sits well neither with the pretense of being a community nor with healthy risk management.

Another person on the EA forum messaged me saying: “It is not acceptable to directly criticize the system, or point out problems. I tried and someone decided I was a troublemaker that should not be funded. [...] I don’t know how to have an open discussion about this without powerful people getting defensive and punishing everyone involved. [...] We are not a community, and anyone who makes the mistake of thinking that we are, will get hurt.”

My suggestions to MacAskill ranged from modest calls to incentivize disagreement with leaders like him, to conflict of interest reporting and portfolio diversification away from EA donors. They included incentives for whistleblowing and democratically controlled grant-making, both of which likely would have reduced EA’s disastrous risk exposure to Bankman-Fried’s bets. People should have been incentivized to warn others. Enforcing transparency would have ensured that more people could have known about the red flags that were signposted around his philanthropic outlet.

These are standard measures against misconduct. Fraud is uncovered when regulatory and competitive incentives (be they rivalry, short-selling, or political assertiveness) are tuned to search for it. Transparency benefits risk management, and whistleblowing has played an essential role in historic discoveries of misconduct by big bureaucratic entities.

Institutional incentive-setting is basic homework for growing organizations, and yet, the apparent intelligentsia of altruism seems to have forgotten about it. Maybe some EAs, who fancied themselves “experts in good intention,” thought such measures should not apply to them.

We also know that standard measures are not sufficient. Enron’s conflict of interest reporting, for instance, was thorough and thoroughly evaded. They would certainly not be sufficient for the longtermist project, which, if taken seriously, would mean EAs trying to shoulder risk management for all of us and our descendants. We should not be happy to give them this job as long as their risk estimates are done in insular institutions with epistemic infrastructures that are already beginning to crumble. My proposals and research papers broadly argued that increasing the number of people making important decisions will on average reduce risk, both to the institution of EA and to those affected by EA policy. The project of managing global risk is — by virtue of its scale — tied to using distributed, not concentrated, expertise.

After I spent an hour in MacAskill’s office arguing for measures that would take arbitrary decision power out of the hands of the few, I sent one last pleading (and inconsequential) email to him and his team at the Forethought Foundation, which promotes academic research on global risk and priorities, and listed a few steps required to at least test the effectiveness and quality of decentralized decision-making — especially in respect to grant-making.

My academic work on risk assessments had long been interwoven with references to promising ideas coming out of Taiwan, where the government has been experimenting with online debating platforms to improve policymaking. I admired the scholars, research teams, tools, organizations, and projects that amassed theory, applications, and data showing that larger and more diverse groups of people tend to make better choices. Those claims have been backed by hundreds of successful experiments on inclusive decision-making. Advocates had more than idealism — they had evidence that scaled and distributed deliberations provided more knowledge-driven answers. They held the promise of a new and higher standard for democracy and risk management. EA, I thought, could help test how far the promise would go.

I was entirely unsuccessful in inspiring EAs to implement any of my suggestions. MacAskill told me that there was quite a diversity of opinion among leadership. EAs patted themselves on the back for running an essay competition on critiques against EA, left 253 comments on my and Luke Kemp’s paper, and kept everything that actually could have made a difference just as it was.

Morality, a shape-shifter

Sam Bankman-Fried may have owned a $40 million penthouse, but that kind of wealth is an uncommon occurrence within EA. The “rich” in EA don’t drive faster cars, and they don’t wear designer clothes. Instead, they are hailed as being the best at saving unborn lives.

It makes most people happy to help others. This altruistic inclination is dangerously easy to repurpose. We all burn for an approving hand on our shoulder, the one that assures us that we are doing good by our peers. The question is, how badly do we burn for approval? What will we burn to the ground to attain it?

If your peers declare “impact” as the signpost of being good and worthy, then your attainment of what looks like ever more “good-doing” is the locus of self-enrichment. Being the best at “good-doing” is the status game. But once you have status, your latest ideas of good-doing define the new rules of the status game.

EAs with status don’t get fancy, shiny things, but they are told that their time is more precious than that of others. They get to project themselves for hours on the 80,000 Hours podcast, their sacrificial superiority in good-doing is hailed as the next level of what it means to be “value-aligned,” and their often incomprehensible fantasies about the future are considered too brilliant to fully grasp. The thrill of beginning to believe that your ideas might matter in this world is priceless and surely a little addictive.

We do ourselves a disservice by dismissing EA as a cult. Yes, they drink liquid meals and do “circling,” a kind of collective, verbalized meditation. Most groups foster group cohesion. But EA is a particularly good example of how our idea of what it means to be a good person can be changed. It is a feeble thing, so readily submissive to and forged by raw status and power.

Doing right by your EA peers in 2015 meant checking out a randomized controlled trial before donating 10 percent of your student budget to combating poverty. I had always refused to assign myself the cringe-worthy label of “effective altruist,” but I too had my few months of a love affair with what I naively thought was my generation’s attempt to apply science to “making the world a better place.” It wasn’t groundbreaking — just commonsensical.

But this changed fast. In 2019, I was leaked a document circulating at the Centre for Effective Altruism, the central coordinating body of the EA movement. Some people in leadership positions were testing a new measure of value to apply to people: a metric called PELTIV, which stood for “Potential Expected Long-Term Instrumental Value.” It was to be used by CEA staff to score attendees of EA conferences, to generate a “database for tracking leads” and identify individuals who were likely to develop high “dedication” to EA — a list that was to be shared across CEA and the career consultancy 80,000 Hours. There were two separate tables, one to assess people who might donate money and one for people who might directly work for EA.

Individuals were to be assessed along dimensions such as “integrity” or “strategic judgment” and “acting on own direction,” but also on “being value-aligned,” “IQ,” and “conscientiousness.” Real names, people I knew, were listed as test cases, and attached to them was a dollar sign (with an exchange rate of 13 PELTIV points = 1,000 “pledge equivalents” = 3 million “aligned dollars”).

What I saw was clearly a draft. Under a table titled “crappy uncalibrated talent table,” someone had tried to assign relative scores to these dimensions. For example, a candidate with a normal IQ of 100 would have PELTIV points subtracted, because points could only be earned above an IQ of 120. Low PELTIV value was assigned to applicants who worked to reduce global poverty or mitigate climate change, while the highest value was assigned to those who directly worked for EA organizations or on artificial intelligence.
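
For a sense of scale, the exchange rate quoted above works out to roughly $230,000 of “aligned dollars” per PELTIV point. Here is a toy sketch of just that arithmetic; the helper function and its name are mine, and nothing else about the spreadsheet’s formulas was public:

```python
# Illustrative arithmetic only. The leaked draft's stated exchange rate:
# 13 PELTIV points = 1,000 "pledge equivalents" = 3 million "aligned dollars".
# Everything else about the spreadsheet's formulas stays unknown here.

def peltiv_to_aligned_dollars(points):   # hypothetical helper, name is mine
    pledge_equivalents = points / 13 * 1_000
    return pledge_equivalents / 1_000 * 3_000_000

print(peltiv_to_aligned_dollars(13))   # 3000000.0
print(peltiv_to_aligned_dollars(1))    # ~230769.2, i.e. about $230,000 per point
```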

The list showed just how much what it means to be “a good EA” had changed over the years. Early EAs were competing for status by counting the number of mosquito nets they had funded out of their own pocket; later EAs competed on the number of machine learning papers they co-authored at big AI labs.

When I confronted the instigator of PELTIV, I was told the measure was ultimately discarded. Upon my request for transparency and a public apology, he agreed the EA community should be informed about the experiment. They never were. Other metrics such as “highly engaged EA” appear to have taken its place.

The optimization curse

All metrics are imperfect. But a small error between a measure of what is good to do and what is actually good to do quickly makes a big difference if you’re encouraged to optimize for the proxy. It’s the difference between recklessly sprinting and cautiously stepping in the wrong direction. Going slow is a feature, not a bug.
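
A minimal sketch of that point, with invented numbers: the proxy below mis-locates the optimum only slightly, yet sprinting all the way to the proxy’s peak leaves you worse off than never moving, while a cautious step still helps.

```python
# Hypothetical numbers for illustration: the true good peaks at x = 1,
# but the proxy metric mis-locates the peak at x = 3.

def true_good(x):
    return -(x - 1) ** 2   # what we actually care about

def proxy(x):
    return -(x - 3) ** 2   # the measurable stand-in, slightly off

for label, x in [("start", 0.0), ("cautious step", 0.5), ("reckless sprint", 3.0)]:
    print(label, "-> proxy:", proxy(x), " true good:", true_good(x))

# start -> proxy: -9.0  true good: -1.0
# cautious step -> proxy: -6.25  true good: -0.25
# reckless sprint -> proxy: -0.0  true good: -4.0
#
# Both moves improve the proxy, but fully optimizing it ends up worse
# than standing still.
```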

It’s curious that effective altruism — the community that was most alarmist about the dangers of optimization and bad metrics in AI — failed to immunize itself against the ills of optimization. Few pillars in EA stood as constant as the maxim to maximize impact. The direction and goalposts of impact kept changing, while the attempt to increase velocity, to do more for less, to squeeze impact from dollars, remained. In the words of Sam Bankman-Fried: “There’s no reason to stop at just doing well.”

The recent shift to longtermism has gotten much of the blame for EA’s failures, but one does not need to blame longtermism to explain how EA, in its effort to do more good, might unintentionally do some bad. Take their first maxim and look no further: Optimizing for impact provides no guidance on how one makes sure that this change in the world will actually be positive. Running at full speed toward a target that later turns out to have been a bad idea means you still had impact — just not the kind you were aiming for. The assurance that EA will have positive impact rests solely on the promise that their direction of travel is correct, that they have better ways of knowing what the target should be. Otherwise, they are optimizing in the dark.

That is precisely why epistemic promise is baked into the EA project: By wanting to do more good on ever bigger problems, they must develop a competitive advantage in knowing how to choose good policies in a deeply uncertain world. Otherwise, they simply end up doing more, which inevitably includes more bad. The success of the project was always dependent on applying better epistemic tools than could be found elsewhere.

Longtermism and expected value calculations merely provided room for the measure of goodness to wiggle and shape-shift. Futurism gives rationalization air to breathe because it decouples arguments from verification. You might, by chance, be right on how some intervention today affects humans 300 years from now. But if you were wrong, you’ll never know — and neither will your donors. For all their love of Bayesian inference, their endless gesturing at moral uncertainty, and their norms of superficially signposting epistemic humility, EAs became more willing to venture into a far future where they were far more likely to end up in a space so vast and unconstrained that the only feedback to update against was themselves.

I am sympathetic to the type of greed that drives us beyond wanting to be good to instead be certain that we are good. Most of us have it in us, I suspect. The uncertainty over being good is a heavy burden to carry. But a highly effective way to reduce the psychological dissonance of this uncertainty is to minimize your exposure to counter-evidence, which is another way of saying that you don’t hang out with people that EAs call “non-aligned.” Homogeneity is the price they pay to escape the discomfort of an uncertain moral landscape.

There is a better way.

The locus of blame

It should be the burden of institutions, not individuals, to face and manage the uncertainty of the world. Risk reduction in a complex world will never be done by people cosplaying perfect Bayesians. Good reasoning is not about eradicating biases, but about understanding which decision-making procedures can find a place and function for our biases. There is no harm in being wrong: It’s a feature, not a bug, in a decision procedure that balances your bias against an opposing bias. Under the right conditions, individual inaccuracy can contribute to collective accuracy.
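
A toy illustration of that last claim, with invented numbers (real deliberation is of course messier than averaging guesses): estimates that are individually far off can be collectively close to the truth when the biases pull in opposite directions.

```python
import statistics

true_value = 100                          # the quantity everyone is estimating
optimists  = [118, 112, 125, 109, 121]    # each overshoots by 9 to 25
pessimists = [84, 91, 78, 88, 80]         # each undershoots by 9 to 22

estimates = optimists + pessimists
individual_errors = [abs(e - true_value) for e in estimates]

print("average individual error:", statistics.mean(individual_errors))  # 16.4
print("error of the pooled mean:",
      round(abs(statistics.mean(estimates) - true_value), 1))           # 0.6
```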

I will not blame EAs for having been wrong about the trustworthiness of Bankman-Fried, but I will blame them for refusing to put enough effort into constructing an environment in which they could be wrong safely. Blame lies in the audacity to take large risks on behalf of others, while at the same time rejecting institutional designs that let ideas fail gently.

EA contains at least some ideological incentive to let epistemic risk slide. Institutional constraints, such as transparency reports, external audits, or testing big ideas before scaling, are deeply inconvenient for the project of optimizing toward a world free of suffering.

And so they daringly expanded a construction site of an ideology, which many knew to have gaping blind spots and an epistemic foundation that was beginning to tilt off balance. They aggressively spent large sums publicizing half-baked policy frameworks on global risk, aimed to teach the next generation of high school students, and channeled hundreds of elite graduates to where they thought they needed them most. I was almost one of them.

I was in my final year as a biology undergraduate in 2018, when money was still a constraint, and a senior EA who had been a speaker at a conference I had attended months prior suggested I should consider relocating across the Atlantic to trade cryptocurrency for the movement and its causes. I loved my degree, but it was nearly impossible not to be tempted by the prospects: Trading, they said, could allow me personally to channel millions of dollars into whatever causes I cared about.

I agreed to be flown to Oxford, to meet a person named Sam Bankman-Fried, the energetic if distracted-looking founder of a new company called Alameda. All interviewees were EAs, handpicked by a central figure in EA.

The trading taster session on the following day was fun at first, but Bankman-Fried and his team were giving off strange vibes. In between ill-prepared showcasing and haphazard explanations, they would go to sleep for 20 minutes or gather semi-secretly in a different room to exchange judgments about our performance. I felt like a product, about to be given a sticker with a PELTIV score. Personal interactions felt as fake as they did during the internship I once completed at Goldman Sachs — just without the social skills. I can’t remember anyone from his team asking me who I was, and halfway through the day I had fully given up on the idea of joining Alameda. I was rather baffled that EAs thought I should waste my youth in this way.

Given what we now know about how Bankman-Fried led his companies, I am obviously glad to have followed my vaguely negative gut feeling. I know many students whose lives changed dramatically because of EA advice. They moved continents, left their churches, their families, and their degrees. I know talented doctors and musicians who retrained as software engineers, when EAs began to think working on AI could mean your work might matter in “a predictable, stable way for another ten thousand, a million or more years.”

My experience now illustrates what choices many students were presented with and why they were hard to make: I lacked rational reasons to forgo this opportunity, which seemed daring or, dare I say, altruistic. Education, I was told, could wait, and in any case, if timelines to achieving artificial general intelligence were short, my knowledge wouldn’t be of much use.

In retrospect, I am furious about the presumptuousness that lay at the heart of leading students toward such hard-to-refuse, risky paths. Tell us twice that we are smart and special and we, the young and zealous, will be in on your project.

Epistemic mechanism design

I care rather little about the death or survival of the so-called EA movement. But the institutions have been built, the believers will persist, and the problems they claim to tackle — be they global poverty, pandemics, or nuclear war — will remain.

For those inside EA who are willing to look to new shores: Let the next decade in EA be that of the institutional turn. The Economist has argued that EAs now “need new ideas.” Here’s one: EA should offer itself as the testing ground for real innovation in institutional decision-making.

It seems rather unlikely that current governance structures alone will give us the best shot at identifying policies that can navigate the highly complex global risk landscape of this century. Decision-making procedures should be designed such that real and distributed expertise can affect the final decision. We must identify which institutional mechanisms are best suited to assessing and choosing risk policies. We must test what procedures and technologies can help aggregate biases to wash out errors, incorporate uncertainty, and yield robust epistemic outcomes. The political nature of risk-taking must be central to any steps we take from here.

Great efforts, like the establishment of a permanent citizen assembly in Brussels to evaluate climate risk policies or the use of machine learning to find policies that more people agree with, are already ongoing. But EAs are uniquely placed to test, tinker, and evaluate more rapidly and experimentally: They have local groups across the world and an ecosystem of independent, connected institutions of different sizes. Rigorous and repeated experimentation is the only way in which we can gain clarity about where and when decentralized decision-making is best regulated by centralized control.

Researchers have amassed hundreds of design options for procedures that vary in when, where, and how they elicit experts, deliberate, predict, and vote. There are numerous available technological platforms, such as loomio, panelot, decidim, rxc voice, or pol.is, that facilitate online deliberations at scale and can be adapted to specific contexts. New projects, like the AI Objectives Institute or the Collective Intelligence Project, are brimming with startup energy and need a user base to pilot and iterate with. Let EA groups be a lab for amassing empirical evidence behind what actually works.

Instead of lecturing students on the latest sexy cause area, local EA student chapters could facilitate online deliberations on any of the many outstanding questions about global risk and test how the integration of large language models affects the outcome of debates. They could organize hackathons to extend open source deliberation software and measure how proposed solutions changed relative to the tools that were used. EA think tanks, such as the Centre for Long-Term Resilience, could run citizen assemblies on risks from automation. EA career services could err on the side of providing information rather than directing graduates: 80,000 Hours could manage an open source wiki on different jobs, available for experts in those positions to post fact-checked, diverse, and anonymous advice. Charities like GiveDirectly could build on their recipient feedback platform and their US disaster relief program, to facilitate an exchange of ideas between beneficiaries about governmental emergency response policies that might hasten recovery.

Collaborative, not individual, rationality is the armor against a gradual and inevitable tendency to become blind to an unfolding catastrophe. The mistakes made by EAs are surprisingly mundane, which means that the solutions are generalizable and most organizations will benefit from the proposed measures.

My article is clearly an attempt to make EA members demand they be treated less like sheep and more like decision-makers. But it is also a question to the public about what we get to demand of those who promise to save us from any evil of their choosing. Do we not get to demand that they fulfill their role, rather than rule?

The answers will lie in data. Open Philanthropy should fund a new organization for research on epistemic mechanism design. This central body should receive data donations from a decade of epistemic experimentalism in EA. It would be tasked with making this data available to researchers and the public in a form that is anonymized, transparent, and accessible. It should coordinate, host, and connect researchers with practitioners and evaluate results across different combinations, including variable group sizes, integrations with discussion and forecasting platforms, and expert selections. It should fund theory and software development, and the grants it distributes could test distributed grant-giving models.

Reasonable concerns might be raised about the bureaucratization that could follow the democratization of risk-taking. But such worries are no argument against experimentation, at least not until the benefits of outsourced and automated deliberation procedures have been exhausted. There will be failures and wasted resources. It is an inevitable feature of applying science to doing anything good. My propositions offer little room for the delusions of optimization, instead aiming to scale and fail gracefully. Procedures that protect and foster epistemic collaboration are not a “nice to have.” They are a fundamental building block to the project of reducing global risks.

One does not need to take my word for it: The future of institutional, epistemic mechanism designs will tell us how exactly I am wrong. I look forward to that day.

Carla Zoe Cremer is a doctoral student at the University of Oxford in the department of psychology, with funding from the Future of Humanity Institute (FHI). She studied at ETH Zurich and LMU in Munich and was a Winter Scholar at the Centre for the Governance of AI, an affiliated researcher at the Centre for the Study of Existential Risk at the University of Cambridge, a research scholar (RSP) at the FHI in Oxford, and a visitor to the Leverhulme Centre for the Future of Intelligence in Cambridge.
