As readers know, I like to take retrospective looks at the New Atheist movement. What can I say, I was involved for ten years and I have grievances. But there’s another adjacent community I think a lot about, even though I was never personally involved: the Rationalist community, also known as the LessWrong community. I also think about Effective Altruism (EA), a significant spinoff community that focused on philanthropy.
I always had issues with the Rationalist community, as well as personal reasons to keep my distance. But looking back, I honestly feel like Rationalism left a better legacy than either the Skeptical or New Atheist movements did, and that legacy came in the form of EA. I keep my distance from EA too, but at the end of the day they’re doing philanthropy, and encouraging others to do philanthropy, and I really can’t find fault with that.
To understand the Rationalist community and its history, I highly recommend the RationalWiki article on LessWrong. (Despite the name, the RationalWiki is unaffiliated, and not entirely sympathetic.) What follows is more of a personal history.
The Rationalist community has been on the periphery of my awareness for as long as I’ve been blogging. I liked to write about critical thinking and LessWrong was one of those critical thinking websites. I later learned LessWrong had a distinctive community, with extensive jargon, and numerous idiosyncratic viewpoints. A lot of these eccentricities can be traced to Eliezer Yudkowsky, an autodidact blogger and charismatic leader. Yudkowsky wasn’t just interested in critical thinking, he was also interested in transhumanism and AI. Yudkowsky’s “Sequences” served as a foundational text for Rationalists.
Before you accuse Rationalism of being a cult of personality, you should know that if it ever was, then it isn’t anymore. Around 2013 or so, there was the LessWrong Diaspora, and most of the community dispersed and diversified. The closest thing to a central blog is Slate Star Codex, but it does not have nearly as much sway as Yudkowsky once did. Among other things, some Rationalists went to Tumblr and got into social justice–you might even like some of them without realizing it.
I kind of hate Rationalism. Some of my reasons are substantial, and some are more petty and personal. For example, I hate that half of their jargon was built from sci-fi references. I don’t care for sci-fi, and have negative experiences with sci-fi geekery. I am not interested in their preoccupation with arguing about AI. They are not as good at critical thinking as they believe they are, which is a perpetual problem for any community that tries to teach itself critical thinking. And I disagree with their philosophy of calmly arguing with bad people while tolerating them within their spaces. For example, the alt-right Neoreactionary movement orbited them for years.
That said, I’ve read enough Rationalist stuff that my outsider status could be questioned. I never read the Sequences, but I read Yudkowsky’s other major work, Harry Potter and the Methods of Rationality (and wrote a review). I read the blogs Thing of Things and The Uncredible Hallq (the latter now defunct). I read the entirety of this Decision Theory FAQ. And I know rather more about Roko’s Basilisk than anyone really ought to know.
I don’t know how EA split off from Rationalism. I suspect it emerged from the Rationalist ethical philosophy–basically utilitarianism, except that they go to great lengths to find every single bullet, so they can bite them. One bullet that Rationalists have been more reluctant to bite is the idea that maybe instead of enjoying one’s own wealth, one should give it to a good cause. EA, on the other hand, took that particular bullet and ran with it. They won’t give up all of their personal wealth, but they’re interested in giving what they can while maintaining a community that doesn’t immediately scare everyone away.
If an EA person ever asks me why I keep my distance, I just tell them that I have low scrupulosity. “Scrupulosity” is a concept within Rationalism/EA, describing the personality trait of having a very strict conscience. (ETA: This is not correct, see comments.) When it comes to doing the right thing, scrupulous people are optimizers rather than satisficers. They worry all the time that even if they’re doing the right thing, they’re not doing the best thing. I am definitely not like that–if you persuaded me that I was not doing the best I could possibly do, I’d just shrug and move on. I learned about scrupulosity on Thing of Things, and I feel it goes a long way towards explaining the weirdness of EA and why it’s not for me. EA is not just preoccupied with giving to a good cause, but giving to the most efficient good cause. And I’m just not interested in that sort of optimization.
There are three major domains that EA people argue about: global poverty, animal welfare, and existential risk (i.e. the risk of human extinction, aka ex-risk). To outsiders, ex-risk is the most outlandish of the three, and it only gets stranger when you realize that the risk they’re most concerned about is the risk of a malevolent AI takeover. But if you understand that EA still overlaps significantly with Rationalism, which is also preoccupied with AI, it’s not that surprising.
Back in 2015, Dylan Matthews wrote an article about his experience at Effective Altruism Global, and his concerns about the prominence of AI ex-risk. It covers a lot of the counterarguments: a) From afar, it looks a lot like tech people persuading themselves that the best way to donate money is to fund tech research. b) It’s based on speculation about an astronomically low-probability event with catastrophic consequences, and how much do we really understand about such a rare event? I have this crazy notion that putting money into AI research has a tiny probability of making the problem worse, actually. c) It veers into Repugnant Conclusion territory. I will also add d) it implies immense value being placed on unborn people.
To be clear, I’m not against funding AI research, it just seems dubious whether that really belongs in the philanthropy category.
I don’t like how much time and money they give to AI risk, but at the end of the day, even excluding ex-risk causes, they’re still doing philanthropy. For comparison, look at the speedrunning folks at GDQ: they spend a lot of time and money on something that is objectively useless, but they’re also running big charity fundraisers, which is more than I can say for most hobbies. What I’m trying to say is that even if some money is considered to be completely wasted on idiosyncratic interests, the project as a whole seems praiseworthy.
Since I got a job this year, I too have undeserved wealth, and I’m starting to think about where it could be donated. Although I only pay attention to EA tangentially, some of their values have rubbed off on me. I give GiveWell’s top charities serious consideration, and I appreciate the aesthetic of the Giving What We Can pledge. I appreciate that EA made me think about these topics, when I’m not really the kind of person who would think about them otherwise.