I’m fairly well off these days. Between having a frugal upbringing and being a tech worker married to another tech worker, with no kids or debt, I think life has obviously been unfair in my favor. I want to give some of it away. For these reasons, I think a lot about the effective altruism (EA) movement, albeit as an outsider.
Most of the stuff I say about EA is fairly critical (and there’s more to come!), but I try to be measured in my criticism, because I don’t think it’s all bad. Compared to a lot of stuff PZ Myers says, I’m practically a supporter. In this article, I offer a begrudging and measured defense.
EA corrects traditional charity
Recently I watched a (very long) video by We Live in Hell about the problems with the WE Charity, and voluntourism more broadly. The WE Charity appears to prey on the naivety of young teens by exaggerating the impact they can have and overworking them. A lot of resources are wasted sending volunteers to impoverished locations and setting them up in fancy houses (presumably built through the expenditure of local resources), so they can pretend to contribute to construction projects and pretend to immerse themselves in the culture. This exemplifies the problems with traditional charity.
Traditionally, many people select charities based on some sort of sentimental value or personal connection. This has numerous issues, such as favoring charities that feel good on the surface and have strong marketing. Often the real impact is much smaller than it appears.
Besides, if you think that it’s better to choose charities based on some sort of personal connection, ask yourself: of the techies and billionaires that are said to form the base of effective altruism, what do you believe they have a personal connection to? Probably nothing good! They’d probably donate to horrible things like robots and crypto-exchanges (I mean, they still do, more on that later). Relying on critical thinking isn’t perfect, but it is an improvement over relying on the sentimental value of the people with the money.
And if your objection is that charity only serves to alleviate the harms of the present, without fighting for systemic change, I think that’s something that EA recognizes. There are actually a lot of cause areas within EA that fight for systemic change, for the future. The only problem is that the most visible and popular future-oriented cause areas are ones that we tend to disagree with.
Longtermism isn’t inherently wrong
At a high level, EA tends to be interested in three cause areas: global poverty, animal suffering, and longtermism. Longtermism is EA’s vision of how to work towards the future. However, in practice, longtermism is mostly a euphemism for working to prevent human extinction. The older euphemism was “existential risk”, abbreviated “x-risk”.
Longtermism has a poor reputation (particularly recently, with its association with crypto scammer Sam Bankman-Fried). I tend to think this is well-deserved. In fact, my primary defense of EA is not to defend longtermism, but to point out that EA runs surveys on how many people support different types of causes. “Longtermism and AI” is supported by only about 18% of respondents, compared to “Global Health + Development” at 62% and “Animal Welfare” at 27%. Even if we totally write longtermism off as a waste, I still think they come out ahead of, say, funding voluntourism.
The largest area within longtermism is AI research, aimed at preventing AI from destroying us all. That really does seem like a waste of money to me. Granted, I’m not against funding research in general, and as large language models like ChatGPT enter the public consciousness, more people might be feeling that AI is a serious concern. However, EA is primarily interested in avoiding extinction, which they think could be caused by the “control” or “alignment” problem: the scenario where the robots won’t do what we want them to do. Meanwhile, most people are worried that AI will do what powerful people want, to the detriment of the rest of us. In other words, funding AI research to prevent the AI apocalypse is as strange an idea as funding nuclear research to prevent nuclear war.
On the other hand, some of the other longtermist causes actually make sense. I mean, climate change, we all agree that’s important. Pandemic preparedness also strikes me as valuable. I think the prominence of AI research within longtermism makes longtermism as a whole look bad. But if you dig a bit deeper, you realize that longtermism isn’t categorically bad; it’s just a category with some bad things in it.
I think the core problem with longtermism is that it’s essentially funding scientific research as if it were charitable giving. That just encourages a bunch of non-experts to take control of a research agenda, so it’s no wonder they make some dubious choices.
Earning to give is okay actually
This may be the most controversial part of my essay.
Traditionally, people don’t really organize their lives around optimizing their charitable impact. I do not work in tech because I want to maximize donations—it just so happens that I am very good at what I do. Nobody has ever told me that I am morally obligated to quit my job and go work for an NGO. We as a society are not comfortable making such extreme moral demands of private individuals.
EA is different. They believe in taking utilitarianism to its logical, horrible, ridiculous conclusion. If it truly is better for society that I quit my job and work for an NGO… maybe I should actually do it? Nobody is going to loudly demand that I do it, but EA values demand that I at least consider it. The EA community has considered the proposition, and reached the consensus “Maybe it’s not good for all of us to quit our tech jobs and work for charitable orgs.” After all, a lot of those orgs are in greater need of money than labor. And that’s earning to give.
Earning to give might be weird, but its main effect is to cancel out the weirdness of extreme utilitarianism. It allows EA folks to basically reach the same conclusion that normies reached by not thinking about it that hard.
Earning to give and extreme utilitarianism don’t cancel out completely. The earning to give argument suggests that people actually go out of their way to find high-paying jobs. But I suspect that’s what a lot of people wanted to do with their lives anyway. The part I have a problem with is when they decide that the most altruistic thing to do is become a crypto scammer; that’s obviously indefensible.
In any case, earning to give is a minority practice in EA. According to their survey, about 23% of non-students say they’re in an earning-to-give career path, whatever that means. I’ve also known some people to go in the other direction, actually quitting their tech jobs.
EA is a perpetual fundraiser
It’s been observed that basically every ethical philosophy agrees that it is good to give money to the people on the street who need it, but people still generally don’t do it. Instead, to persuade people to give money away, we have all these fundraisers. People run marathons, hold bake sales, ask for donations on their birthday, speedrun video games, and so on. People generally already believe that donating money to charity is good, but fundraisers still seem to be effective and necessary to convert that belief into action.
There are a lot of things that could be said about EA and the EA community. But on the most basic level, EA is a form of fundraising. It is a community of people that perpetually discusses charities. That means they’re perpetually bringing charities to people’s attention, and reminding them to actually donate to charities. While, yes, EA discusses some strange philosophical ideas, the bigger picture is that participants find these subjects interesting, and their continued discussion serves to further perpetuate the fundraiser. It’s a philosophy marathon.
Fundraisers also serve the purpose of vetting charities. Even if you don’t know anything about a charity, if people are running a marathon for it, you can be confident they did the research. Marathons are a costly signal (in the game theory sense) that the runners and marathon organizers believe it’s a worthwhile cause. EA vets its charities more openly and directly, by having a public conversation about it.
I may disagree with some of the conclusions of EA discussion, and I’m not interested in participating in EA communities myself. But still, to a first approximation, EA is a fundraiser. Fundraisers can be subject to a lot of criticism, but I don’t think this one is bad to have around.
klatu says
What you call “fundraising”, a sane society would call “taxes”.
IOW: You don’t get to play hero for electing to pay your taxes.
(Please tell me that you don’t actually buy into any of this EA crap.)
Siggy says
@klatu,
Fundraising and taxes are quite different and not mutually exclusive, so I don’t know what you mean. If you mean that taxes ought to fulfill the role that charity is fulfilling in this context, yeah, maybe! I didn’t say anything against taxes. I do not know what EA people believe about taxes.
I have a more critical post on EA drafted and scheduled in a few days.
klatu says
To be fair, I never said we lived in a sane society. What I meant is that the problem charity is ostensibly solving is a problem brought on by a systemically entrenched lack of justice or human dignity. Charity exists as a stop-gap fix to a failure in the system itself. A “sane” society would not require charity because it already provides those things for everyone, as a matter of course.
klatu says
Which says nothing about the true intentions of EA proponents. I get that.
I’ll await your next post, Siggy.
(But I’ll be honest: I get really racist dog-whistles and just generally eliminationist vibes from EA.)
Siggy says
To my understanding, EA has a mix of political views, from technolibertarians to social justice progressives. And they tend to play nice with each other in a way I personally consider off-putting.
I think you’re right to spot racist dogwhistles. For example, back in 2021 someone showed e-mails in which Scott Alexander explicitly supported HBD/scientific racism. However, Scott Alexander is more of a leader in the adjacent Rationalist movement than in EA per se. He represents an element of EA, but not necessarily the whole thing.
Raging Bee says
Earning to give and extreme utilitarianism don’t cancel out completely. The earning to give argument suggests that people actually go out of their way to find high-paying jobs.
Following onto klatu’s comments, progressive taxation — and the use of tax revenues to help the neediest in ways decided by public consensus — extends the “earning to give” idea across all of a country, instead of relying only on those who choose to give. Everyone tries to get the best-paying jobs they can find, and whoever succeeds most gets taxed at higher rates to fund various programs to assist whoever succeeds least. Or to put it more bluntly, it’s state power deployed to afflict the comfortable in order to comfort the afflicted.
Perfect Number says
I am very fascinated by the EA movement because it’s about using *math* to figure out how to give to charity most effectively, which is what we *should* be doing, instead of giving because of emotional impulses. The thing is, though, EA starts with some assumptions that I would not start with, so then their *math* takes those to weird conclusions sometimes.
I like the idea of analyzing the actual results from different charities to judge whether they are effective. (But they have to make A LOT of assumptions to turn those ideas into numbers, so you have to know that you can’t take the results too literally- but I kind of get the impression that EA does take them pretty literally… oh dear…)
And also the idea of being really intentional about how much money to donate, and having that be a major consideration when planning one’s finances, rather than something to just randomly do when there happens to be a fundraiser or something- yes, this is really important, and people who have a high enough income really should all be thinking along those lines.
So those things are good, but beyond that, there seem to be a lot of *weird quirks* in EA.