The latest from Émile Torres looks at how longtermists have effectively turned their movement into a PR and advertising operation. They have a truly odious philosophy, so they emphasize whatever element will get them the most money. The core of longtermism is the idea that in the far future there could hypothetically be many, many trillions of hypothetical “people” (who would mainly be artificial intelligences of some sort), and that therefore we should make any contemporary sacrifice we can to maximize the population of machines in the unimaginably distant future. There are a lot of weebly-wobbly rationalizations to be made, since nobody has any idea what strategies now will lead to conquest of the galaxy for human-made computers in some imaginary fantasy future, but somehow the current popular ones all involve sucking up to disgustingly rich people.
Ironically, it grew out of the goal of ending world poverty.
Longtermism emerged from a movement called “Effective Altruism” (EA), a male-dominated community of “super-hardcore do-gooders” (as they once called themselves tongue-in-cheek) based mostly in Oxford and the San Francisco Bay Area. Their initial focus was on alleviating global poverty, but over time a growing number of the movement’s members have shifted their research and activism toward ensuring that humanity, or our posthuman descendants, survive for millions, billions and even trillions of years into the future.
If you'd asked me, I would have said that building a stable, equitable base was a sound way to project human destiny into an unknowable future, but hey, what do I know? The longtermists gazed into their crystal ball and decided that the best, and probably most lucrative, way to defend the future was to pander to the elites.
Although the longtermists do not, so far as I know, describe what they’re doing this way, we might identify two phases of spreading their ideology: Phase One involved infiltrating governments, encouraging people to pursue high-paying jobs to donate more for the cause and wooing billionaires like Elon Musk — and this has been wildly successful. Musk himself has described longtermism as “a close match for my philosophy.” Sam Bankman-Fried has made billions from cryptocurrencies to fund longtermist efforts. And longtermism is, according to a UN Dispatch article, “increasingly gaining traction around the United Nations and in foreign policy circles.”
After all, haven’t billionaires already proven that they will do their all to spread their wealth? OK, maybe the past is a poor guide, but once they’ve perfected brain uploading and have a colony of serfs on Mars, then they’ll decide to let the rest of us have a few crumbs.
The article is largely about one guy, MacAskill, who is the current Face of the movement. His entire career has been one of lying to make his philosophy palatable to the masses, but especially delicious to wealthy donors. From day one he has been shaping the movement as a manufactured public relations exercise.
But buyer beware: The EA community, including its longtermist offshoot, places a huge emphasis on marketing, public relations and “brand-management,” and hence one should be very cautious about how MacAskill and his longtermist colleagues present their views to the public.
As MacAskill notes in an article posted on the EA Forum, it was around 2011 that early members of the community began “to realize the importance of good marketing, and therefore [were] willing to put more time into things like choice of name.” The name they chose was of course “Effective Altruism,” which they picked by vote over alternatives like “Effective Utilitarian Community” and “Big Visions Network.” Without a catchy name, “the brand of effective altruism,” as MacAskill puts it, could struggle to attract customers and funding.
It’s a war of words, not meaning. The meaning is icky, so let’s plaster it over with some cosmetic language.
The point is that since longtermism is based on ideas that many people would no doubt find objectionable, the marketing question arises: how should the word “longtermism” be defined to maximize the ideology’s impact? In a 2019 post on the EA Forum, MacAskill wrote that “longtermism” could be defined “imprecisely” in several ways. On the one hand, it could mean “an ethical view that is particularly concerned with ensuring long-run outcomes go well.” On the other, it could mean “the view that long-run outcomes are the thing we should be most concerned about” (emphasis added).
The first definition is much weaker than the second, so while MacAskill initially proposed adopting the second definition (which he says he’s most “sympathetic” with and believes is “probably right”), he ended up favoring the first. The reason is that, in his words, “the first concept is intuitively attractive to a significant proportion of the wider public (including key decision-makers like policymakers and business leaders),” and “it seems that we’d achieve most of what we want to achieve if the wider public came to believe that ensuring the long-run future goes well is one important priority for the world, and took action on that basis.”
Yikes. I’m suddenly remembering all the atheist community’s struggles over the meaning of “atheist”: does it mean a lack of belief in gods, or does it mean denying that gods exist? So much hot air over that, and it was all meaningless splitting of hairs. I don’t give a fuck about what definition you use, and apparently that means I’m a terrible PR person, and that’s why New Atheism failed. I accept the blame. It failed because we didn’t attract enough billionaire donors, darn it.
At least we didn’t believe in a lot of evilly absurd bullshit behind closed doors that we had to hide from the public.
The importance of not putting people off the longtermist or EA brand is much-discussed among EAs — for example, on the EA Forum, which is not meant to be a public-facing platform, but rather a space where EAs can talk to each other. As mentioned above, EAs have endorsed a number of controversial ideas, such as working on Wall Street or even for petrochemical companies in order to earn more money and then give it away. Longtermism, too, is built around a controversial vision of the future in which humanity could radically enhance itself, colonize the universe and simulate unfathomable numbers of digital people in vast simulations running on planet-sized computers powered by Dyson swarms that harness most of the energy output of stars.
For most people, this vision is likely to come across as fantastical and bizarre, not to mention off-putting. In a world beset by wars, extreme weather events, mass migrations, collapsing ecosystems, species extinctions and so on, who cares how many digital people might exist a billion years from now? Longtermists have, therefore, been very careful about how much of this deep-future vision the general public sees.
The worst part of longtermist thinking is that the swarm of digital people they’re imagining in the long term — none of whom exist now, and none of whom we know how to create — is the population that our current efforts should be aimed at serving. Serving. That’s a word they avoid using, because it implies that right now, right here, we are the lesser people. Digital people is where it’s at.
According to MacAskill and his colleague, Hilary Greaves, there could be some 10^45 digital people — conscious beings like you and I living in high-resolution virtual worlds — in the Milky Way galaxy alone. The more people who could exist in the future, the stronger the case for longtermism becomes, which is why longtermists are so obsessed with calculating how many people there could be within our future light cone.
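The arithmetic doing all the work there is a simple expected-value comparison. Here’s a rough sketch of how it goes — the 10^45 figure is theirs, but the probability is a number invented purely for illustration, not anything MacAskill or Greaves have published:

$$
\underbrace{10^{45}}_{\text{hypothetical digital people}} \times \underbrace{10^{-10}}_{\text{invented chance of helping them}} = 10^{35}
\quad\gg\quad
\underbrace{8 \times 10^{9}}_{\text{actual people alive today}} \times \underbrace{1}_{\text{certainty of helping them}} \approx 10^{10}.
$$

Plug in any nonzero probability you like and the imaginary future people swamp everyone alive today; that’s the whole argument.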
They’ve already surpassed the Christians, some of whom argue that there are more than 100 million (100,000,000) angels. The needs of the many outweigh the needs of the few, remember, so sacrifice now for your more numerous betters.
You will also not be surprised to learn that the current goal is to simply grab lots and lots of money by converting rich people to longtermism — this is also how Christianity succeeded, by getting a grip on the powerful and wealthy. Underdogs don’t win, except by becoming the big dogs.
So the grift here, at least in part, is to use cold-blooded strategizing, marketing ploys and manipulation to build the movement by persuading high-profile figures to sign on, controlling how EAs interact with the media, conforming to social norms so as not to draw unwanted attention, concealing potentially off-putting aspects of their worldview and ultimately “maximizing the fraction of the world’s wealth controlled by longtermists.” This last aim is especially important since money — right now EA has a staggering $46.1 billion in committed funding — is what makes everything else possible. Indeed, EAs and longtermists often conclude their pitches for why their movement is exceedingly important with exhortations for people to donate to their own organizations.
One thing not discussed in this particular article is another skeevy element of this futurist nonsense. You aren’t donating your money to a faceless mob of digital people — it’s going to benefit you directly. There are many people who promote the idea that all you have to do is make it to 2050, and science and technology will enable an entire generation to live forever. You can first build and then join the choir of digital people! Eternal life is yours if you join the right club! Which, by the way, is also part of the Christian advertising campaign. They’ve learned from the best grifters of all time.