Effective Altruism (EA) is a community focused on donating money to create the greatest good in the world. This is mostly (?) unobjectionable, but there are problems. The EA community holds a number of philosophical viewpoints that most external observers would consider absurd, and which materially affect its donating behavior.
In particular, many people in EA believe that the most efficient way to create the greatest good is by preventing extinction caused by AI. EA surveys suggest that about 18% of community members donate to "Long term & AI" causes, compared to 62% who donate to global health & development. Clearly concern about AI is not a unanimous viewpoint in EA, but consider what kind of community it takes for nearly one in five members to fund it.
EA has been in the spotlight recently because Sam Bankman-Fried, arrested for massive fraud at his cryptocurrency exchange FTX, was a vocal proponent of EA. More than a vocal proponent, in fact: he founded the FTX Future Fund, which committed $160M in charitable grants to various EA causes. At the top of the Future Fund's cause list? AI.
Although I'm critical of EA, I actually think it's a bit unfair to pretend that the community is directly responsible for SBF's fraudulent behavior. Instead, I want to focus on some of SBF's philosophical views, which are shared by at least some parts of the EA community. Specifically, let's talk about the idea that charitable giving should be risk-neutral.
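To make the term concrete: a risk-neutral donor ranks options purely by expected value, ignoring variance entirely. A minimal sketch, using hypothetical numbers of my own (nothing here is from an actual EA cost-effectiveness estimate):

```python
# Hypothetical illustration of risk neutrality in charitable giving.
# A risk-neutral donor values a gamble purely by its expected value (EV):
# a certain 1,000 lives saved counts exactly the same as a 0.1% chance
# of saving 1,000,000 lives.

def expected_value(outcomes):
    """EV of a list of (probability, lives_saved) pairs."""
    return sum(p * lives for p, lives in outcomes)

safe_bet = [(1.0, 1_000)]            # certainly save 1,000 lives
long_shot = [(0.001, 1_000_000),     # 0.1% chance of saving a million
             (0.999, 0)]             # 99.9% chance of saving nobody

print(expected_value(safe_bet))   # 1000.0
print(expected_value(long_shot))  # 1000.0 -> the risk-neutral donor is indifferent
```

A risk-averse donor would prefer the safe bet even at equal expected value; the risk-neutral view says there is no reason to.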