To hear effective altruists explain it, it comes down to simple math. About 108 billion people have lived to date, but if humanity lasts another 50 million years, and current trends hold, the total number of humans who will ever live is more like 3 quadrillion. Humans living during or before 2015 would thus make up only 0.0036 percent of all humans ever.
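The opening percentage is easy to verify; a quick check of the figures above (108 billion people so far, 3 quadrillion ever):

```python
# Checking the opening arithmetic: what share of all humans ever
# (assuming 3 quadrillion total) have lived by 2015?
humans_so_far = 108e9   # about 108 billion people to date
humans_ever = 3e15      # 3 quadrillion, if humanity lasts ~50 million years

share = humans_so_far / humans_ever * 100  # as a percentage
print(f"{share:.4f}%")  # 0.0036%
```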
The numbers get even bigger when you consider — as X-risk advocates are wont to do — the possibility of interstellar travel. Nick Bostrom — the Oxford philosopher who popularized the concept of existential risk — estimates that about 10^54 human life-years (or 10^52 lives of 100 years each) could be in our future if we both master travel between solar systems and figure out how to emulate human brains in computers.
Even if we give this 10^54 estimate “a mere 1% chance of being correct,” Bostrom writes, “we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.”
Put another way: The number of future humans who will never exist if humans go extinct is so great that reducing the risk of extinction by 0.000000000000000001 percent can be expected to save 100 billion times as many lives as, say, preventing the genocide of 1 billion people. That argues, in the judgment of Bostrom and others, for prioritizing efforts to prevent human extinction above other endeavors. This is what X-risk obsessives mean when they claim ending world poverty would be a “rounding error.”
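For readers who want to check the expected-value arithmetic, here is a minimal sketch using the figures quoted above (10^52 potential lives, a 1 percent credence, and a risk reduction of one billionth of one billionth of a percentage point):

```python
# Sketch of the expected-value argument, using the figures quoted above.
potential_future_lives = 10**52   # Bostrom's estimate: lives of 100 years each
credence = 0.01                   # "a mere 1% chance of being correct"
risk_reduction = 1e-20            # a billionth of a billionth of a percentage
                                  # point, expressed as a raw probability

# Expected number of future lives saved by that tiny risk reduction:
expected_lives_saved = potential_future_lives * credence * risk_reduction

genocide_prevented = 10**9        # lives saved by preventing a 1-billion-person genocide

print(f"{expected_lives_saved:.0e}")  # ~1e30 lives in expectation,
                                      # dwarfing the 1e9 saved directly
```

However one quibbles with the exact exponents, the shape of the argument is clear: any nonzero factor multiplied by 10^52 swamps anything measured in mere billions.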
Wait. Turn those numbers around. If they want to save 10^52 future people, and there are roughly 10^10 people living now, doesn’t that mean that each child is the potential progenitor of 10^42 hypothetical, potential, future human beings? And that if we’re really taking the long view, with math, we should regard every child dead of malaria as the tragic, catastrophic death of 10^42 Futurians?
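The reversal is just a division; here is the same back-of-the-envelope arithmetic in code, using exact integer powers as in the text:

```python
# The "turn the numbers around" arithmetic from the paragraph above.
future_people = 10**52     # the future lives X-risk advocates want to save
people_alive_now = 10**10  # roughly the current world population

# Every future person descends from someone alive today, so each living
# person stands in for this many hypothetical future descendants:
descendants_per_person = future_people // people_alive_now

print(descendants_per_person == 10**42)  # True: one child "is" 10^42 Futurians
```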
No, not at all. That would require a collection of Silicon Valley millionaires and billionaires to think about an immediate problem, rather than ignoring pressing concerns to focus entirely on imaginary, unpredictable futures. So forget malaria, or coastal flooding, or environmental degradation — we need to deal with the rogue artificial intelligences.
What was most concerning was the vehemence with which AI worriers asserted the cause’s priority over other cause areas. For one thing, our uncertainty about AI is profound: whether general intelligence is even possible; whether intelligence is really all a computer needs to take over society; whether artificial intelligence will have an independent will and agency the way humans do, or will just remain a tool; what it would even mean to develop a “friendly” versus “malevolent” AI. Given all that, it’s hard to think of ways to tackle this problem today other than doing more AI research, which itself might increase the likelihood of the very apocalypse this camp frets over.
The common response I got to this was, “Yes, sure, but even if there’s a very, very, very small likelihood of us decreasing AI risk, that still trumps global poverty, because infinitesimally increasing the odds that 10^52 people in the future exist saves way more lives than poverty reduction ever could.”
AIs of the nature that concerns them don’t exist. This is an imaginary problem. There are good reasons to think the concern is overblown, and that the field is unlikely to develop in any direction we can predict (which does not mean it’s safe, of course, but that building contingencies now to deal with situations that don’t exist, and that are likely to be completely different from what you anticipate, is really stupid). You could be building a Maginot line against one kind of threat and then discover that the AIs don’t really respect Belgian independence after all.
So rich people are throwing tens of millions of dollars at institutes drawing up plans to fight the Killer Robots of the Future, rather than at real and immediate concerns, and they’re calling that Effective Altruism.
I call it madness. But it’s great profit for the prophets of AI, at least.