Risk neutrality in EA


Effective Altruism (EA) is a community focused on donating money to create the greatest good in the world. This is mostly (?) unobjectionable, but there are problems. The EA community holds a number of philosophical viewpoints that most external observers would consider absurd, and which materially affect its donating behavior.

In particular, many people in EA believe that the most efficient way to create the greatest good is by preventing extinction caused by AI. EA surveys suggest that about 18% of community members donate to “Long term & AI” causes, compared to 62% who donate to global health & development. Clearly, concern about AI is not a unanimous viewpoint in EA, but you have to imagine the kind of community where everyone at least takes it seriously.

EA has been in the news spotlight because Sam Bankman-Fried–recently arrested for massive fraud at his cryptocurrency exchange FTX–was a vocal proponent of EA. More than a vocal proponent: he founded the FTX Future Fund, which committed $160M in charitable grants to various EA causes. At the top of the Future Fund’s cause list? AI.

Although I’m critical of EA, I actually think it’s a bit unfair to pretend that they’re directly responsible for SBF’s fraudulent behavior. Instead, I want to focus on some of SBF’s philosophical views, which are shared by at least some parts of the EA community. Specifically, let’s talk about the idea that charitable causes are risk-neutral.


What is risk neutrality?

Risk neutrality is an investment term that describes indifference to risk. If you offer someone a bet that either wins or loses a dollar with 50/50 probability, then a risk-averse person will avoid the bet, a risk-seeking person will take the bet, and a risk-neutral person will be indifferent either way.

Generally, risk-aversion is rationally justifiable, because of the decreasing marginal utility of the dollar. The first dollar you earn might go towards something absolutely essential, such as food to survive. The hundred-thousandth dollar likely gets allocated towards something less essential, such as a summer vacation. You probably don’t want to risk your survival just for the chance at a summer vacation.

Another way to look at it is that you might be risk-neutral with regard to some underlying value, such as your own happiness and wellbeing, but this translates into risk-aversion with respect to dollars. Being risk-neutral with respect to wellbeing is fairly reasonable. Being risk-neutral with respect to dollars is completely bonkers and dangerous if taken to its logical conclusions.
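To make that concrete, here’s a toy calculation in Python. The log-of-wealth model of wellbeing is a standard illustrative assumption, not a claim about real psychology, and the numbers are made up:

    import math

    # Toy assumption: wellbeing grows like the logarithm of wealth.
    def wellbeing(wealth):
        return math.log(wealth)

    wealth = 100_000
    stake = wealth / 2  # a "fair" 50/50 bet: win or lose half your wealth

    expected_dollars = 0.5 * (wealth + stake) + 0.5 * (wealth - stake)
    expected_wellbeing = 0.5 * wellbeing(wealth + stake) + 0.5 * wellbeing(wealth - stake)

    print(expected_dollars)    # 100000.0 -- the bet is exactly fair in dollars
    print(wellbeing(wealth))   # ~11.51 -- wellbeing if you decline the bet
    print(expected_wellbeing)  # ~11.37 -- expected wellbeing if you take it

The bet is fair in dollars but a clear loss in expected wellbeing, which is why someone who is risk-neutral about wellbeing still declines it.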

SBF’s views on risk neutrality

This post is largely inspired by an article by Sarah Constantin titled “Why Infinite Coin-Flipping is Bad”, which already says what I want to say on the topic, but bears repeating. Constantin includes quotes from interviews with SBF, in which he espouses a belief in risk neutrality, at least when it comes to EA donations.

As an individual, to make a bet where it’s like, “I’m going to gamble my $10 billion and either get $20 billion or $0, with equal probability” would be madness. But from an altruistic point of view, it’s not so crazy. Maybe that’s an even bet, but you should be much more open to making radical gambles like that.

Of course, it’s possible to espouse a belief in risk-neutrality while being mistaken about its logical consequences, basically arriving at reasonable behavior by accident. To some extent, I think this is what’s happening.

For example, if you’re truly risk-neutral with respect to charitable causes, why would you ever donate to more than one cause? If you truly believe charitable causes produce value that is linear in the amount of money they receive, then you should just find the charity with the greatest expected value per dollar and donate all the money to that. Sure, putting all your eggs in one basket creates correlated risk (what happens if AI research doesn’t go on to save the human race?), but if you’re risk-neutral, you don’t care about correlated risk; you’re indifferent to it. The FTX Future Fund clearly donated to multiple causes, and that’s not consistent with SBF’s professed risk neutrality.
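Here’s a minimal sketch of that logic, with invented charities and invented “expected good per dollar” figures. Whatever numbers you plug in, a linear objective is always maximized by going all-in:

    # Hypothetical charities with invented "expected good per dollar" figures.
    charities = {"bednets": 3.0, "deworming": 2.5, "ai_safety": 3.2}
    budget = 1_000_000

    # If value is linear in money, the total good of an allocation is a weighted
    # sum, so any split is beaten by the single charity with the highest rate.
    def expected_good(allocation):
        return sum(charities[name] * dollars for name, dollars in allocation.items())

    diversified = {"bednets": 500_000, "deworming": 250_000, "ai_safety": 250_000}
    all_in = {max(charities, key=charities.get): budget}

    print(expected_good(diversified))  # 2925000.0
    print(expected_good(all_in))       # 3200000.0 -- never worse than a split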

On the other hand, SBF understood at least some of the consequences of risk-neutrality, apparently advocating overbetting the Kelly Criterion and taking the St. Petersburg bet.

Those are some words I just said, which Constantin explains in more detail. The Kelly Criterion is an investment principle for iterated bets with positive expected payoff: it says you should size each bet to maximize the expected logarithm of your money. If you’re risk-neutral, you can “beat” the Kelly Criterion in expected value by repeatedly betting all of your money. But this is only “better” in the sense that a vanishing probability of winning an exorbitant amount of money somehow makes up for the vastly more likely outcome of losing everything.
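Here’s a toy simulation of that difference. The specific bet (a fair coin paying 2-to-1, for which the Kelly fraction works out to 25%) and all the numbers are assumptions for illustration:

    import random

    random.seed(0)

    def simulate(fraction, rounds=10):
        """Bet `fraction` of current wealth each round on a fair coin paying 2:1."""
        wealth = 100.0
        for _ in range(rounds):
            stake = wealth * fraction
            if random.random() < 0.5:
                wealth += 2 * stake  # win: gain twice the stake
            else:
                wealth -= stake      # loss: forfeit the stake
        return wealth

    trials = 100_000
    kelly = [simulate(0.25) for _ in range(trials)]   # Kelly fraction for this bet
    all_in = [simulate(1.00) for _ in range(trials)]  # "risk-neutral": bet it all

    # All-in wins on average, carried by a handful of astronomically rich runs...
    print(sum(all_in) / trials, sum(kelly) / trials)
    # ...but the typical all-in outcome is total ruin.
    print(sorted(all_in)[trials // 2], sorted(kelly)[trials // 2])

In this setup, the all-in strategy has the higher mean, but its median is zero: each run survives ten rounds with probability (1/2)^10, roughly one in a thousand, and every other run ends with nothing.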

Does SBF believe in making bets with near-guaranteed failure and a vanishing probability of exorbitant reward? This sort of bet is the territory of the St. Petersburg paradox, a game whose expected payout is infinite even though any large payout is vanishingly unlikely, and yes, SBF explicitly thinks that’s a good idea. It is not a good idea.
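For the record, here’s the textbook form of the paradox (conventions vary on the starting pot; I’m using the version where a first tails on flip n pays 2^n dollars):

    # St. Petersburg game: flip a fair coin until the first tails, which on
    # flip n pays 2**n dollars.  Every term of the expected value works out
    # to (1/2**n) * 2**n = 1, so the sum grows without bound.
    terms = 30
    print(sum((0.5 ** n) * (2 ** n) for n in range(1, terms + 1)))  # 30.0

    # Yet the big payouts are vanishingly unlikely:
    print(0.5 ** 30)  # ~9.3e-10 chance of the ~billion-dollar payout

Each doubling of the payout is exactly cancelled by a halving of its probability, which is how “infinite expected value” coexists with a near-certainty of modest outcomes.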

EA beliefs about risk neutrality

Within EA, some people argue that charitable causes are risk-neutral. For example, one article on 80,000 Hours argues:

From a personal perspective, it makes sense to be risk-averse about most goals. Having ten times as much money won’t make you ten times happier, so it doesn’t make sense to bet everything on a 10% chance of increasing your income ten-fold.

However, if your aim is to do good, helping ten people is roughly ten times as good as helping one person, so it can make more sense to take high-risk, high-reward options.

The EA Forum offers a more measured perspective:

However, it seems that altruists should be close to risk-neutral in the economic sense. Though there may be some diminishing returns to altruistic effort, the returns diminish much more slowly than e.g. the marginal personal utility of money does. This means that the reasons for economic risk-aversion that apply in standard economic theory apply less strongly for altruists.

These arguments aren’t necessarily wrong. The risk appetite of the individual may be different from the risk appetite of the charity organizations that they choose to donate to.

On the other hand, this narrative of “helping ten people is ten times as good as helping one person” contains a lot of assumptions. It assumes that charity comes in discrete units of people helped. What about the quantity of resources devoted to each person? And poor people generally have lower risk tolerance than wealthy donors, so you could argue that charity organizations helping them should also have a lower risk tolerance.

And that’s all assuming that the purpose of the charity is even to help people. Some EA causes aren’t about that at all; they’re about preventing an AI apocalypse. Is that risk-neutral as well? Doesn’t research funding usually suffer from diminishing returns?

By the way, remember when I said that risk neutrality implies that all money should just be donated to a single cause? I didn’t get that from nowhere; it’s a common argument within EA. Apparently Robin Hanson was arguing this as far back as 2013. More recently, someone wrote an article for the EA Forum explaining both sides of the argument. I’m glad it’s at least controversial.

Knowing EA, there are probably more detailed arguments on the subject of risk neutrality somewhere in the annals of the EA Forum, and what can I say, I don’t have the time or interest to engage with all of it. However, my intuition is that the risk appetite of a charity likely varies from cause to cause, and these popular generalizations about risk neutrality are dubious and potentially harmful.

Comments

  1. says

    ok, i have no idea what most of this means. is there any way to simplify this a wee bit harder for non-math, non-investment, non-EA-insider people? thanks to hanging out in the atheoskeptisphere since the scienceblogs days, i’m ok with just accepting some shit will remain permanently beyond me, but i have a feeling there might be another way to parse this that would work. it’s not like quantum physics that way.

  2. says

    @GAS,
    It helps if you can be specific about which parts are confusing.

    The TL;DR version is that some people in EA believe in taking risks on behalf of charities, far above and beyond conventional financial wisdom. There may be some justification for this, but the justification they provide is bad. Sam Bankman-Fried took this to its extreme, apparently advocating that we take double-or-nothing bets.

  3. says

    yeah, i guess “investing” in charity vs. “giving” to charity might be part of my issue. are charities publicly traded? i would think, aside from the tax write-off, everything you give to charity would be automatically with no thought of return.

  4. says

    @GAS,
    That’s a good question. There are several ways you can “invest” in charity. For example, you could create an investment account and commit its earnings to charity. Or you could choose a charity organization that does this on its own. Or you could choose a charity that simply carries some inherent risk, such as an environmental organization whose impact is unknown, or AI research.

  5. says

    ugh. i don’t trust stonks as far as i can kick them. they say u need an investment-based retirement of some kind bc inflation will keep happening and making the pension worth less, so i have that, but i’d rather not. certainly if i was running a charity i’d feel weird about involving that shit. if some donor was doing that with the source of donation, gives the org less risk, but still. the whole financial system as currently configured is way overdue for total collapse, isn’t it?

  6. says

    I mean, I wasn’t going to argue that it’s fundamentally wrong to invest in charity. I’m not sure what the argument for that would be, outside of saying that investment is complicity in capitalism.

  7. says

    I was only saying it doesn’t seem safe, although the argument investment is morally wrong is easy to make. Unless you’ve got the sauce to micromanage your investments, it is about certain that any amount of investment money will be spent on corrupt and exploitative practices. Which is funny for regular poor people like myself with a smol stake, because we’re the people most screwed by those practices. For example, if some of it is used for the property management companies that own all the apartments I can live in and make them unaffordable. I’m incentivized to participate in my own degradation. Love it.

    I don’t think about charity much, but this has me looking at it sideways again. When the rich reap sweet tax breaks by donating to charity, they could be donating to extremely bullshit charities, couldn’t they? This AI research for example. I’ve heard both longing for and fear of advanced AI coming from tech weirdos, so I imagine they’re trying to get neural emulation up to where they can “upload” consciousness while figuring out a way to avoid creating Skynet, right? Both of those aims are nonsense, but if I wanted to fork over all my disposable income for them, fattening the wallets of some unscrupulous asshats, the US government would reward me for that.

  8. says

    Yeah, but you could already say that about charity donations even without talking about investment. We make our money from our jobs, and our jobs involve some degree of exploitation.

    On a related note, another frequent criticism of EA is their advocacy for “earning to give”. That means trying to maximize your salary so you can donate as much as possible. In a way this isn’t all that different from the usual way people donate to charity–they donate part of their salary, and most people don’t consider it morally obligatory to quit their day job in order to work directly for the charity organization. But some people in EA may take it to extremes, going out of their way to choose a career that maximizes donation potential.

    The common criticism of earning to give is that many of these high-paying jobs are also likely to be the most exploitative. On the other hand, if you didn’t do the exploiting, someone else would, etc. etc. I have mixed feelings about the argument, but I would personally consider running a cryptocurrency exchange to be unacceptably exploitative.

  9. says

    Wow I can’t believe I’ve never heard of the Kelly criterion or St. Petersburg paradox. Very interesting 🙂

    EA is really interesting to me, but also seems to have a lot of weird quirks…
