Except we don’t know what random consequences that would have. After all, Letter from Birmingham Jail and Mein Kampf were both produced by prisoners. I think that’s also the fatal flaw in long-termist thinking: you can’t plan as if every action has a predictable consequence. It is extreme hubris to think you can derive the future from pure logical thinking.
But that’s what long-termists do. If you don’t know what “long-termism” is, Phil Torres explains it here.
In brief, the longtermists claim that if humanity can survive the next few centuries and successfully colonize outer space, the number of people who could exist in the future is absolutely enormous. According to the “father of Longtermism,” Nick Bostrom, there could be something like 10^58 human beings in the future, although most of them would be living “happy lives” inside vast computer simulations powered by nanotechnological systems designed to capture all or most of the energy output of stars. (Why Bostrom feels confident that all these people would be “happy” in their simulated lives is not clear. Maybe they would take digital Prozac or something?) Other longtermists, such as Hilary Greaves and Will MacAskill, calculate that there could be 10^45 happy people in computer simulations within our Milky Way galaxy alone. That’s a whole lot of people, and longtermists think you should be very impressed.
But here’s the point these people are making, in terms of present-day social policy: Let’s say you can do something today that positively affects just 0.000000000000000000000000000000000000000000001% of the 10^58 people who will be “living” at some point in the distant future. That means, mathematically, that you’d affect 10 trillion people. Now consider that there are roughly 8 billion people on the planet today. So the question is: If you want to do “the most good,” should you focus on helping people who are alive right now or these vast numbers of possible people living in computer simulations in the far future? The answer is, of course, that you should focus on these far-future digital beings.
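Just to make the arithmetic in that passage explicit, here is a minimal sketch of the multiplication being done. The percentage I plug in (10^-43 percent, i.e. a fraction of 10^-45) is my own illustrative choice, picked so the product comes out to the quoted ten trillion; the only point is that multiplying even a vanishingly small fraction by 10^58 yields a number that dwarfs today’s 8 billion.

```python
# Back-of-the-envelope sketch of the expected-value arithmetic described above.
# The percentage below is an illustrative assumption chosen to reproduce the
# "10 trillion" figure, not a number taken from Bostrom or Torres directly.

future_people = 10**58            # Bostrom-style estimate quoted above
percent_affected = 1e-43          # illustrative: 10^-43 percent of those people
fraction_affected = percent_affected / 100

people_helped = future_people * fraction_affected   # ~1e13, i.e. ten trillion
present_population = 8e9                             # roughly 8 billion alive today

print(f"people helped (expected value): {people_helped:.0e}")
print(f"ratio to today's population:    {people_helped / present_population:.0e}")
```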
Long-termism is, basically, philosophy for over-confident idiots. Otto would love it, if you know your A Fish Called Wanda memes. It takes a certain kind of egotistical certainty to believe that you personally know the very best choices for shaping the future, and exactly how that future is going to turn out.
It’s a good article. Only one note struck me as jarringly false.
Nonetheless, Elon Musk sees himself as a leading philanthropist. “SpaceX, Tesla, Neuralink, The Boring Company are philanthropy,” he insists. “If you say philanthropy is love of humanity, they are philanthropy.” How so?
The only answer that makes sense comes from a worldview that I have elsewhere described as “one of the most influential ideologies that few people outside of elite universities and Silicon Valley have ever heard about.” I am referring to longtermism. This originated in Silicon Valley and at the elite British universities of Oxford and Cambridge, and has a large following within the so-called LessWrong or Rationalist community, whose most high-profile member is Peter Thiel, the billionaire entrepreneur and Trump supporter.
That’s not the only answer that makes sense. A simpler answer, one that starts from the understanding that proponents of long-termism are naive twits, is that Musk is a narcissistic moron who operates on selfish whims, and that idea is better supported by the evidence of his behavior. Torres is looking at it through the wrong lens: it’s not that LessWrong promotes an influential, principled philosophy, it’s that unprincipled, arrogant dopes find it a convenient post hoc rationalization for whatever they already chose to do.