Effective Altruism and Its Future
Epistemic status: not everyone necessarily agrees with this
As You Know, Bob,
Sam Bankman-Fried was recently revealed to have been running a multibillion dollar fraud and probably burned up the entirety of the wealth that people had entrusted to his exchange. Whoops.
Financial fraud is an old story and not really worth writing about, except for the fact that SBF is in his own way Grey Tribe; and except for the fact that he was a major donor to Effective Altruism causes, and perhaps the most prominent advocate of such giving in the minds of the general public. SBF, in a sense, was EA’s face to the normies. And now the main public conception of EA runs along the lines “that weird cult that the Bahamas guy was part of where they kept their wives in common and did drugs all day, right?”
I’m not writing this post to assign moral culpability. If I were a consequentialist I probably would; but as a virtue ethicist I can only praise your iustitia while perhaps side-eying your prudentia and temperantia.
I’m also not trying to say I told you so, although now that I mention it I kind of did. Unfortunately most of these “predictions” came in the form of bitter complaints about rationalists selling out, rather than actionable feedback.
In this post I will try to do better by:
Identifying the specific dynamics that make me skeptical of EA’s long-term prospects of substantially outperforming the incumbent charity industry, and
Offering some suggestions about how this outcome might be avoided or mitigated.
I also want to explicitly note I won’t be litigating questions of “do EA’s goals and means even make sense.” I’m not sure that they do but I would like them to succeed at their general mission, even where I don’t see myself aligned with them and even where I’m skeptical of the long-term value.
Why Should Anyone Listen to You
They probably shouldn’t. Here are some reasons you might ignore every word of this post:
I’ve run a total of zero charities in my life.
I am not deep in the effective altruist movement, and there is almost surely context that I’m missing.
I’m skeptical that EA is a good idea, and our values may not be aligned in an important way I’ve neglected to consider.
I am actually mad about everything that’s happened, which definitely makes a person functionally dumber (even more than usual, in my case).
Nevertheless, I may have some value to add in one of my areas of moderate competence: being aware that predatory social actors exist and want to play me like a fiddle. This is one area where, in my opinion, EA and rationalist organizations could stand to revise their practices.
While I only do data work for internal audiences today, in the past I worked in media and government influence operations at large firms. I won’t get into the details of that here because it would probably generate liability, but I want to say emphatically: you probably have no idea how pervasive and developed these operations are, and how many ways exist to shape information and social spaces.
One reason I don’t take money from people is because it would change my behavior toward those people.
One reason I have a hard rule of not replying to DMs is that I occasionally get DMs from people who are natural adepts in social manipulation and I try to be realistic about my ability to fend off that kind of attack.
One reason I cultivate disreputability is to reduce the extent to which I might become useful as a tool.
One reason I ban journalists from my parties is because I’ve worked with journalists in a professional setting.
It is very hard to stay honest in this world.
Well I Don’t Like Where This is Going
The origins of EA—and here I’m using EA as an umbrella term to refer to a nebulous constellation of organizations, charities, people, and ideas—are somewhat confusing. Philosophers shilling books were involved; as a courtesy I am going to ignore them. Instead, I want to offer a mixed paean to GiveWell.
The idea was simple. A couple of hedge-funders with an interest in philanthropy noticed that most charities were very badly run by corporate standards and that charities were generally clueless about the extent to which they were accomplishing anything with donations. In an attempt to improve this situation, they founded an organization to promulgate best practices in the operation of philanthropies and to rate charities on their ROI.
Public ratings serve two purposes. First, they allow donors who are motivated by the idea of achieving outcomes to more easily direct their money to organizations that are efficiently generating those outcomes. Second, to the extent that donors take outcomes into account when making donations, it pressures charities to adopt better practices in pursuit of their missions.
This is the core and best part of EA.
“Charities were, from an efficiency standpoint, predominantly useless money pits” is a great place to start if you want to do better at attaining specific aims in the world by giving away your money. Unfortunately it’s also an easy place to stop: “Charities are bad at what they do; we’re very smart; we can surely do better at helping people!” Yes, but . . . why are they so bad?
My dad’s a psychometrician. Twenty years ago we had a discussion about “rationality.” Dad’s position was that everyone thinks on some level that they’re acting rationally, and while you can write off behaviors as lunatic, “just plain crazy” isn’t much of a model. It’s often more interesting to pause and—while granting that a given behavior is superficially unusual and probably suboptimal—consider the set of beliefs and circumstances that would cause a person to think that the behavior made sense.
Organizations are the same. Generally, if you see players in an industry acting in a way that’s odd to you, there’s probably a very good reason—from their perspective—that they’re behaving that way.
So . . .
I want to propose that pre-GiveWell charities were probably extremely effective at what they really did. That is to say: they competed against other charities in a competitive market to generate revenue to pay the sinecures of high-status people who helped other high-status people optimize their taxes and maintain their reputations as upstanding members of the community. That’s it.
My further expectation is that this is a stable point for charities because just as girls don’t like boys, the actual effectiveness of a charity is of second-order importance at best to the donor class. To the extent that money—real money—flows from such people, EA priorities will inexorably align with what they want, and anyone who resists this will be pushed out. You have data? That’s swell. Donors are how charitable organizations make payroll. You want to stop malaria on the grounds of maximum impact per dollar spent? Actually, this week the hot thing is criminal justice reform in a first world country—why don’t you go rationalize that cause for us?
That Seems Quite Cynical
Yes. I am literally an economist and I am sorry to report that this is the way of the world. Villains and purported saints aren’t enemies; they’re natural mutualists.
Go read about the history of charitable giving to monasteries in the Middle Ages if you like; or the organizational story of the March of Dimes. Peruse the works of Tom Wolfe. Radical Chic is the classic story of the social role of charity among the donor class.
This puts me in the mind of another party—but ah! I’m getting ahead of myself.
This Time Will Be Different
No it won’t, and it’s probably too late.
David Chapman’s classic model of subcultural evolution is Geeks, MOPs, and Sociopaths. It outlines a pattern repeated time and time again in niche subcultures that make it big. This is how subcultures live and die:
Genius creators make a discovery. Their fans get them. Together, they become something.
Normies realize something exciting is happening and join for the party, even if they don’t quite understand what’s going on.
Sociopaths smell opportunity, swoop to exploit it, and outcompete the natives.
If you’re in EA, I’ll give you a moment to identify the dramatis personae. I don’t personally know the names; it doesn’t matter. The roles are the same.
I mentioned a second party earlier. The New York Times—yes, that New York Times—was kind enough to take notes.
Once more, identifying the roles played by each participant is left as an exercise for the reader. Here’s a freebie: David Shor literally makes a living running an organization that runs and assists mass influence operations.
When EA got rich and got status, its days as an effective institution were numbered, because money and status are transferrable resources. Sooner or later one of the groups with very different values that’s working to figure out how to strip mine EA is going to succeed.
Case Study: Why Did It Have to Be The Clintons
How is this playing out in EA? The way it usually does. Simple example.
In August, GiveWell gave $10M to the Clinton Health Access Initiative. In an admirably open post, they note that they’re excited to partner with an organization that has “a sizable footprint” and “established relationships with country governments.” Yes: that is one way to describe an organization on whose board both Bill and Chelsea sit, and which is funded by a clearinghouse for legal bribes to the Clintons. Why is GiveWell giving them money?
It’s possible they’re a useful charity. But it may also have to do with the fact that CHAI’s current CEO, Neil Buddy Shah, was hired away from his previous role as a managing director at GiveWell. GiveWell’s CEO had this to say about it:
CHAI is gaining a great leader in Buddy. But perhaps more importantly for GiveWell and our supporters, this appointment is a signal that effective giving is contributing to more corners of the global health landscape than ever before. Buddy is a strong champion of impact maximization, and I am excited that he will apply this lens in his new role.
To GiveWell’s credit, they flagged this in their CHAI funding announcement. And I’m sure that Shah is a man of high principles and as autistically committed to optimization as the rest of us. Nevertheless, this kind of revolving door hiring and kick-backing is how capture works.
Case Study: SBF
Consider once more the case of SBF. He was not just a major EA donor; the man spread the wealth around.
It’s well-known that SBF was a massive Democratic Party donor. What seems under-appreciated to me is that his donations were very well targeted to individuals who had direct oversight of his industry. Per Jacobin, here are some of the candidates to whom SBF donated money, and some of their salient committee assignments.
Shontel Brown (D-OH). House Agriculture Committee, which for historical reasons oversees the regulators of commodities futures trading, which itself was a candidate for crypto regulation;
Ritchie Torres (D-NY). Financial Services Committee, including Subcommittee on Consumer Protection and Financial Institutions. At one point, SBF ran a fundraiser for Torres jointly with . . . our old acquaintance David Shor;
Sean Casten (D-IL) actually employed SBF’s brother as a staffer before Casten joined the Digital Assets Working Group on the House Financial Services Committee;
Patrick Maloney (D-NY), who also sits on Agriculture and introduced legislation to push crypto regulation to a weaker regulatory authority;
Lucy McBath (D-GA), who sits on the Judiciary Subcommittee on Antitrust, Commercial, and Administrative Law; its purview amusingly includes bankruptcy law. I don’t think this is damning; I just want to appreciate this little joke from the simulators.
SBF was especially active in crypto regulation, helping found a crypto lobbying outfit called ADAM. Helpfully:
. . . every member of the association agrees to adhere to the ADAM Code of Conduct — a set of global principles that promotes integrity, fairness, and order in digital asset markets . . .
I don’t have any opinions about what crypto regulations might be good or bad, or how much is optimal. The point I’m trying to convey is that SBF did have opinions, and he was quite deliberate about getting something for his money. I’m also not trying to suggest that SBF was uniquely bad: quite the opposite. This is standard operating procedure.
Beyond political donations, SBF was an active donor to media outlets who were likely to cover his business. What’s that—you didn’t know that media outlets take donations? I’ll wait a moment while you update some priors.
Here’s a rundown from Puck:
One has to wonder how newsrooms will respond to S.B.F. going forward. The then-billionaire hired a team of advisers who have been making investments in nonprofit and for-profit newsrooms over the last year, including in Vox, The Intercept, ProPublica, The Law and Justice Journalism Project, an international affairs podcast, and most prominently, Semafor.
Wait a minute . . . Vox? Didn’t Vox hire—
If you’re curious what kind of press you can buy, consider this fluff profile of SBF by Dylan Matthews. Is the admiration Matthews conveys feigned? I doubt it. Would he have gotten it past his editors if SBF hadn’t donated to his employer? Maybe. Would SBF keep donating to Vox if they didn’t deliver fawning coverage? I’m sure Vox editors wondered this themselves right up to the point where it became clear that there would be no donations forthcoming regardless of what they printed.
What Does This Have to Do With EA
If I had one question for people who look at these other donations that were clearly instrumental in managing SBF’s public image and see nothing in common with SBF’s donations to EA causes—and can’t imagine how they might have played a role in whitewashing his reputation by taking his shilling and letting him associate himself with EA so publicly—well. It would actually be more of a comment.
SBF deliberately cultivated his image as an ascetic and a saint, and his vocal and public association with EA was instrumental in doing this. I’m not saying that EA orgs taking his money was wrong, given what they knew. But they weren’t accepting a donation; they were participating in a transaction, and what they were selling was their reputation.
Don’t just take it from me, though: here’s SBF yesterday evening in the DMs of one of the reporters whose salary he had, until recently, been contributing to.
Here is another history of EA from a person who would like to stay anonymous, edited for clarity. This was relayed to me months ago.
The first EAs did not have extreme wealth backing them; they were weird nerds who worked in tech and finance and gave away their income. Movement organization was bottom-up, and every single member had skin in the game around cause area prioritization because they were donating their own money.
. . .
Eventually, they started setting up dedicated research and charity organizations. These organizations were of course rational and realized that whaling was a much better strategy than trying to fund their work with 30% of weird nerds’ income.
Along the way, lots of EAs went to work in crypto and got rich in the process. These lottery winners came back and began to define the movement. They now fund the majority of cause areas and orgs; weird nerds are increasingly marginalized as things optimize for legibility to large central funding sources. At this point, the movement has died.
This really took off by 2018 or 2019. Since then, EA has been a zombie movement that exists to launder the decision making of a certain kind of billionaire as the process of a distributed cognition.
SBF is a symptom, not a freak one-off. As long as EA is reliant on large and especially on public donors, EA is owned by those donors and will over time evolve to serve them even more than it already does. Even absent that, they’ve attracted the Eye of Sauron, and the native EA personnel will respond to the usual careerist incentives and adopt the norms of the neurotypicals who flood their staffs, or they will be forced out because they’re just too hard to work with.
Eventually, like everything else, EA will become a skinsuit shambling around and doing everything awful that the old charities do, with nothing left of its original goals. This is the telos of just about every institution in the United States that runs on donations, and at this point it is entirely predictable.
Anyway, Metaculus has odds of another megadonor appearing by 2026 at 71%, even after SBF. I hope the next one goes better.
A Path to Salvation
It’s rude to write about the failures of others without proposing a remedy. So, here is my proposal for saving Effective Altruism as an independent and less-corrupted institution: Jesus.
No, I don’t mean you have to believe in God if you don’t want to. Don’t be ridiculous! You can even keep your old cause areas, although I do wish you would consider some alternatives.
Instead, I’m suggesting you put a piece of Gospel advice about charitable giving into practice:
Take heed that ye do not your alms before men, to be seen of them: otherwise ye have no reward of your Father which is in heaven.
Therefore when thou doest thine alms, do not sound a trumpet before thee, as the hypocrites do in the synagogues and in the streets, that they may have glory of men. Verily I say unto you, They have their reward.
But when thou doest alms, let not thy left hand know what thy right hand doeth:
That thine alms may be in secret: and thy Father which seeth in secret himself shall reward thee openly.
Uh. I’m Jewish
No problem. Kitzur Shulchan Aruch 34:
You should take care to give charity as discreetly as possible. And if it is possible to give in such a way that you do not know to whom you are giving, and the poor person does not know from whom he is receiving, that is most commendable. In any event do not glorify yourself with the charity you give . . .
Ok So First Of All Actually That Isn’t A Policy
It could be.
What I’m suggesting is that it’s much harder to influence an organization if donors can’t credibly demonstrate that they donated to the portion of the organization that determines priorities. Charity regulations may make this challenging to implement, but here are some rough ideas:
Institute a policy mandating anonymous donations so that you don’t know where your funding is coming from. It may be helpful to use a receiving organization that’s aggressively organizationally segregated from the main EA charity.
What if someone attempts to reveal themselves as a source of donations? Create in advance a policy obligating you to return the money, and blacklist the donor from future donations.
Only permit infrequent, perhaps quarterly, pass-through of funds to a charity from the receiving organization, to reduce the extent to which charities can be influenced in their behavior by real-time reactions to decisions they make.
At a bare minimum, be more protective of your branding and do what you can to stop donors from appropriating it.
This class of ideas is the most direct and easiest to implement. Unfortunately, they don’t solve all cases.
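To make the mechanism concrete, here is a toy sketch of how a receiving organization implementing these rules might behave. Everything here is hypothetical and illustrative—the class name, the receipt scheme, and the refund rule are my inventions, not an existing EA system or a legal blueprint:

```python
class DonationFirewall:
    """Toy model of an anonymizing receiving organization.

    Rules modeled (all hypothetical policy, per the list above):
      1. The downstream charity never learns donor identities.
      2. A donor who publicly claims credit is refunded and blacklisted.
      3. Funds pass through only in infrequent, aggregated batches.
    """

    def __init__(self):
        self._pending = 0.0   # aggregated, donor-stripped pool
        self._escrow = {}     # receipt_id -> (donor_id, amount); held only so refunds are possible
        self._blacklist = set()
        self._next_receipt = 0

    def donate(self, donor_id, amount):
        if donor_id in self._blacklist:
            raise ValueError("blacklisted donor")
        receipt = self._next_receipt
        self._next_receipt += 1
        self._escrow[receipt] = (donor_id, amount)
        self._pending += amount
        return receipt  # the donor keeps this; the charity never sees donor_id

    def donor_reveals_self(self, receipt):
        # Policy: public self-identification voids the gift and bars future ones.
        donor_id, amount = self._escrow.pop(receipt)
        self._pending -= amount
        self._blacklist.add(donor_id)
        return amount  # refunded to the donor

    def quarterly_passthrough(self):
        # Funds reach the charity only in aggregated batches, so no single
        # decision by the charity can be rewarded or punished in real time.
        total, self._pending = self._pending, 0.0
        self._escrow.clear()  # identities are discarded once funds pass through
        return total
```

The point of the sketch is the information flow, not the code: the only channel from donor to charity is an undifferentiated quarterly total, which is exactly what makes the donation useless as leverage.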
Social attacks like revolving door hiring and private communication are very difficult to defend against. There may still be some ways to handle such problems, albeit at a high cost. I am less optimistic about the practicality of the following policies, but they might be considered:
Pseudonymous participation in the administration of charities, to prevent outsiders from knowing who is running them; in cases of malfeasance, administrators would of course be unmasked and expelled.
Prohibitions on allowing people with histories as lobbyists, journalists, and similar to sit on EA decision committees.
Prohibitions on donations to charities that hire EA employees.
Decision-making within charities conducted by large voting bodies and randomized assignment, to make attempts at direct influence of decision-makers more cumbersome.
I’m sure people with more practical experience in administering charities could imagine additional steps that might be taken. The common theme is hardening yourself against outside influence by reducing your attack surfaces and minimizing the value of your organization to donors who are giving for reasons other than maximizing the stated goals of your organization.
But What If No One Will Donate If They Can’t Influence Us, or Generate Flattering Press About Having Donated. And What If No One Cares Anymore When We Talk About This At Parties
Then you will have learned something valuable about the role of charity organizations in the world.
Great thanks to @orthonormalist in particular for his helpful criticisms and other feedback; my wife for her review and her support in all things; and to other commentators who wished to remain anonymous. All remaining errors are mine.
In these roles I generated large and quantifiable results for my employers. I’m still under NDA, though, as far as I know; and it’s rude to talk. Take it or leave it.
About a year after founding the organization that became GiveWell, the founders were demoted for using sock puppets to generate traffic to the site. Whoops.
@JoyOptimizer points out that this program was spun off from EA in 2021. I don’t think this undermines my point—yes it was spun off, after it stopped being trendy, after running for six years! But I’m happy to flag this.
I should say explicitly that I don’t have anything against David Shor personally. We’ve had pleasant interactions on Twitter and I was genuinely impressed by his fortitude when he was mobbed a year or two ago by others in his party. He’s a useful representative of a class of operator that I’m criticizing, but I don’t want to convey in any way that he’s particularly bad. If I had to guess I would say he’s among the best of his tribe.
I’m as shocked as you are that I’m linking Jacobin but this is really a great breakdown of SBF’s political patronage.
This was the last paragraph of this piece that I wrote. Modulo minor rewrites, the entire remainder of the piece was written before I became aware of this interview. I feel profoundly validated by this.
I think this is a good piece overall but:
"...weird nerds are increasingly marginalized as things optimize for legibility to large central funding sources. At this point, the movement has died....This really took off by 2018 or 2019. Since then, EA has been a zombie movement that exists to launder the decision making of a certain kind of billionaire as the process of a distributed cognition."
just strikes me as sort of perfectly anti-correct. In many ways, EA is laundering weird ideas /into/ respectability. This is a totally conscious strategy, there are probably dozens of EA forum posts implicitly saying this.
This is Good. Laundering weird ideas into respectability is the basis of large-scale values change. If only weird nerds hold ideas, they have an intrinsically limited influence.
Mass persuasion (eg, success for EAs) basically looks like this.
And your suggestions: "Prohibitions on allowing people with histories as lobbyists, journalists, and similar to sit on EA decision committees." are basically saying [don't engage with powerful mainstream people], which is fine if you have some magical source of power that doesn't rely on already extant elites, but dumb if you don't. Some tech elites (Balaji, Elon, etc.) can effectively do this, but they don't work in policy, and this is not a sustainable strategy for a whole movement.
The short version, in my mind, is my rule "Never give money to big charities", but this assumes the reason to give to charities is to accomplish the aims you intended rather than to take a tax writeoff.
Home run after home run, Robot.