The veil of gender ignorance

One of the common TERF talking points is, “If I grew up today, I would have (wrongly) believed I was trans.” As with many TERF arguments, there’s also an anti-ace analogue: “If I grew up today, I would have sooner believed I was bisexual heteroromantic than just gay.” I’m going to take an analytical approach to understanding and countering these arguments.

The veil of ignorance

We can start by borrowing an idea from political philosophy: the veil of ignorance. We imagine that we have the opportunity to construct a society however we wish. Afterwards, we get to take our place within the society. The catch is, we don’t know which place we will take. So we don’t want the society to unfairly favor one group over another, because we may end up taking the unfavored position.

The veil of ignorance is particularly well-suited to this problem, because it’s pretty close to what we’re actually doing. We choose the cultural and social messages that are conveyed to the next generation. And, in order to make that choice, we imagine ourselves in the shoes of the next generation. “If I grew up today…”

[Read more…]

A few things in defense of EA

I’m fairly well off these days. Between having a frugal upbringing, and being a tech worker married to another tech worker with no kids or debt, I think life has obviously been unfair in my favor. I want to give some of it away. For these reasons, I think a lot about the effective altruism (EA) movement, albeit as an outsider.

Most of the stuff I say about EA is fairly critical (and there’s more to come!), but I try to be measured in my criticism, because I don’t think it’s all bad. Compared to a lot of stuff PZ Myers says, I’m practically a supporter. In this article, I offer a begrudging and measured defense.

[Read more…]

Risk neutrality in EA

Effective Altruism (EA) is a community focused on donating money to create the greatest good in the world. This is mostly (?) unobjectionable–but there are problems. The EA community holds a number of philosophical viewpoints that most external observers would consider absurd, and which materially affect its donating behavior.

In particular, many people in EA believe that the most efficient way to create the greatest good is by preventing extinction caused by AI. EA surveys suggest that about 18% of community members donate to “Long term & AI” causes, compared to 62% who donate to global health & development. Clearly, concern about AI is not a unanimous viewpoint in EA, but you have to imagine the kind of community where everyone at least takes it seriously.

EA has been under the spotlight in current news because Sam Bankman-Fried–recently arrested for massive fraud at his cryptocurrency exchange FTX–was a vocal proponent of EA. More than a vocal proponent, he founded the FTX Future Fund, which committed $160M in charitable grants to various EA causes. At the top of Future Fund’s cause list? AI.

Although I’m critical of EA, I actually think it’s a bit unfair to pretend that they’re directly responsible for SBF’s fraudulent behavior. Instead, I want to focus on some of SBF’s philosophical views, which are shared by at least some parts of the EA community. Specifically, let’s talk about the idea that charitable giving should be risk-neutral.
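
To make the concept concrete before digging in: a risk-neutral donor values a gamble at its expected value, no matter how unlikely the payoff. Here’s a toy sketch (the numbers are mine, not anything from EA sources) of what that means in practice.

```python
# Toy illustration of risk neutrality in charitable giving.
# The payoffs and probabilities below are made up for illustration.

def expected_value(outcomes):
    """Expected value of a gamble, given (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# Option A: a sure thing -- help 10 million people with certainty.
sure_thing = [(1.0, 10_000_000)]

# Option B: a long shot -- 1% chance of helping 1 billion people.
long_shot = [(0.01, 1_000_000_000), (0.99, 0)]

print(expected_value(sure_thing))  # 10,000,000
print(expected_value(long_shot))   # 10,000,000

# A risk-neutral donor is indifferent between A and B, because only the
# expected value matters.  A risk-averse donor prefers A, because the
# long shot usually helps no one at all.
```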

[Read more…]

Regulating data science with explanations

Data science has an increasing impact on our lives, and not always for the better. People speak of “Big Data”, and demand regulation, but they don’t really understand what that would look like. I work in one of the few areas where data science is regulated, so I want to discuss one particular regulation and its consequences.

So, it’s finally time for me to publicly admit… I work in the finance sector.

These regulations apply to many different financial trades, but for illustrative purposes, I’m going to talk about loans. The problem with taking out a loan is that you need to pay it back plus interest. The interest is needed to give lenders a return on their investment, and to offset the losses from other borrowers who don’t pay it off. Lenders can increase profit margins and/or lower interest rates if they can predict who won’t pay off their debt, and decline those people. Data science is used to help make those decline decisions.

The US imposes two major restrictions on this use of data science. First, there are anti-discrimination laws (a subject I might discuss at a later time) (ETA: it’s here). Second, an explanation must be provided to people who are declined.
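
To give a flavor of what that second requirement is up against, here’s a toy sketch with made-up features, weights, and thresholds (real lenders use far more sophisticated models and legally vetted reason codes): a simple scorecard that declines high-risk applicants and reports which features drove the decision.

```python
# Toy scorecard: predict default risk from a few made-up applicant features,
# decline high-risk applicants, and report the top reasons for the decline.
# This is an illustration, not how any real lender computes reason codes.

# Positive weights push the risk score up (bad for the applicant).
WEIGHTS = {
    "utilization": 2.0,        # fraction of credit limit currently used
    "missed_payments": 1.5,    # count of recent missed payments
    "years_of_history": -0.3,  # longer credit history lowers risk
}
THRESHOLD = 2.5  # decline if the risk score exceeds this

def risk_score(applicant):
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

def decision_with_reasons(applicant, top_n=2):
    score = risk_score(applicant)
    if score <= THRESHOLD:
        return "approve", []
    # Rank features by how much they contributed to pushing the score up.
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    reasons = sorted(contributions, key=contributions.get, reverse=True)[:top_n]
    return "decline", reasons

applicant = {"utilization": 0.9, "missed_payments": 2, "years_of_history": 4}
print(decision_with_reasons(applicant))
# ('decline', ['missed_payments', 'utilization'])
```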

[Read more…]

Review scores: a philosophical investigation

Normally, in the introduction to an article, I would provide a “hook”, explaining my interest in the topic, and why you should be interested too. But my usual approach felt wrong here, since I cannot justify my own interest, and arguably, if you’re reading this rather than scrolling past the title, you should be less interested than you currently are.

So, review scores. WTF are they? I don’t have the answers, but I sure have some questions. Why is 0/10 bad, 10/10 good, and 5/10… also bad? What goals do people have in assigning a score, and do they align with the goals of people reading the same score? What does it mean to take the average of many review scores? And why do we expect review scores to be normally distributed?

Mathematical structure

Review scores are intuitively understood as a measure of the quality of a work (such as a video game, movie, book, or LP)–or perhaps a measure of our enjoyment of the work? Already we have this question: is it quality, or is it enjoyment, or are those two concepts the same? But we must leave that question hanging, because there are more existentially pressing questions to come. Review scores do more than just express quality/enjoyment: they assign a number. And numbers are quite the loaded concept.
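
As a small taste of why the numbers are loaded, consider this toy example (the scores are entirely made up): two reviewers who agree on the ranking of every work can still produce very different averages, just because they use different parts of the scale.

```python
# Two hypothetical reviewers score the same five works out of 10.
# They rank the works identically, but use the scale differently:
# reviewer A treats 7/10 as average, reviewer B treats 5/10 as average.
reviewer_a = {"W1": 9, "W2": 8, "W3": 7, "W4": 6, "W5": 5}
reviewer_b = {"W1": 8, "W2": 6, "W3": 5, "W4": 3, "W5": 2}

def mean(scores):
    return sum(scores.values()) / len(scores)

print(mean(reviewer_a))  # 7.0
print(mean(reviewer_b))  # 4.8

# Averaging the two reviewers' scores for each work mixes their judgments
# with their scale habits -- the "average score" depends as much on how
# reviewers use numbers as on what they think of the work.
averaged = {w: (reviewer_a[w] + reviewer_b[w]) / 2 for w in reviewer_a}
print(averaged)  # {'W1': 8.5, 'W2': 7.0, 'W3': 6.0, 'W4': 4.5, 'W5': 3.5}
```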

[Read more…]

COVID and perspectives on causality

Recently, people have been circulating a statistic from the CDC that says 94% of death certificates listing COVID-19 as a cause of death also list at least one other cause of death. For instance, if someone catches COVID, can’t breathe anymore, and dies, perhaps the doctors would also list “Respiratory failure” as one of the causes of death, in addition to COVID. Come to think of it, why do only a third of COVID deaths include respiratory failure as a cause? How exactly is COVID killing people, if not by causing respiratory failure? Before parading around this statistic, I have to ask: do we really understand what it’s even saying?
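
To be concrete about what the statistic counts, here’s a toy sketch with entirely hypothetical death certificates (not real CDC data): the figure comes from tabulating co-listed causes, not from any judgment about which cause was “the real one”.

```python
# Hypothetical death certificate records, each listing one or more causes.
# These are made up to illustrate what the 94% statistic counts.
certificates = [
    ["COVID-19", "Respiratory failure"],
    ["COVID-19", "Pneumonia", "Diabetes"],
    ["COVID-19"],                          # COVID listed alone
    ["COVID-19", "Cardiac arrest"],
    ["Heart disease"],                     # no COVID at all
]

covid_certs = [c for c in certificates if "COVID-19" in c]
with_other_cause = [c for c in covid_certs if len(c) > 1]

print(len(with_other_cause) / len(covid_certs))  # 0.75 in this toy data

# The statistic says nothing about whether COVID "really" caused the death;
# it only counts how many certificates list additional conditions alongside it.
```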

That misleading statistic came to my attention because a friend wrote a Vox article about it. He brings not a medical perspective, but a psychology perspective, discussing the cognitive biases that make people bad at understanding causation.

Causation is a favorite topic of mine as well, although I come at it from a different set of perspectives: philosophy, physics, and law. And although I don’t have medical expertise, it’s not hard to find the medical standard of causation on Google, so I include that at the end.

[Read more…]

Explaining Roko’s Basilisk

Before I move away from the topic of Rationalism and EA, I want to talk about Roko’s Basilisk, because WTF else am I supposed to do with this useless knowledge that I have.

In sci-fi, a “basilisk” is an idea or image that exploits flaws in the human mind to cause a fatal reaction. Roko’s Basilisk was proposed by Roko to the LessWrong (LW) community in 2010. The idea is that a benevolent AI from the future could coerce you into doing the right thing (build a benevolent AI, obv) by threatening to clone you and torture your clone. It’s sort of a transhumanist Pascal’s Wager.

Roko’s Basilisk is absurd to the typical person, and at this point is basically a meme used to mock LW, or tech geeks more broadly. But it’s not clear how seriously this was really taken in LW. One thing we do know is that Eliezer Yudkowsky, then leader of LW, banned all discussion of the subject.

What makes Roko’s Basilisk sound so strange is that it’s based on at least four premises that are nearly unique to the LW community, and unfamiliar to most anyone else. Just explaining Roko’s Basilisk properly requires an amusing tour of multiple ideas the LW community hath wrought.

[Read more…]