Trolleyology bad

In my last post, I offhandedly disparaged the Trolley Problem as a serious thought experiment. Let me elaborate.

Any philosophical thought experiment contains stipulations about what is going on. In the trolley problem, it is stipulated that flipping the switch *will* prevent five deaths, and *will* cause another person’s death.

Question: do we believe that stipulation? We don’t exactly believe it; it’s a fictional scenario. But you at least have to accept the stipulation to think about the problem on the level at which it was intended.

In a well-known variant of the trolley problem, it is stipulated that pushing a fat man in front of the trolley *will* prevent five deaths, and *will* cause the death of the fat man.

[Read more…]

Newcomb’s Paradox occurs in real life

Newcomb’s paradox is a philosophical thought experiment. There is an entity called Omega, who can predict your choices. Omega presents you with two boxes; you may open one or both boxes, and take whatever you find. The first box contains $1k, guaranteed. The second box contains $1M if and only if Omega predicts that you will leave the first box alone. So the dilemma is between “one-boxing” (taking only the second box, for $1M) and “two-boxing” (taking both boxes, for a total of $1k).

When I put it that way, it seems obvious: $1M is more than $1k, so you should open only one box. The two-boxer argument is that Omega has already decided whether the second box contains the $1M. Whatever’s in the second box is now a constant, so it’s only rational to also take the free $1k. Omega may have chosen to arbitrarily punish players who behave rationally, but what’s done is done; you might as well collect the $1k consolation prize.
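To make the two positions concrete, here’s a minimal Python sketch (my own illustration, not from the original post) that computes expected payoffs assuming Omega’s prediction is correct with probability p; the classic thought experiment stipulates p = 1.

```python
# Expected payoffs in Newcomb's paradox, assuming Omega's prediction
# is correct with probability p (a parameter I've introduced; the
# classic thought experiment stipulates p = 1).

def expected_payoff(strategy: str, p: float) -> float:
    """Expected winnings for the 'one-box' or 'two-box' strategy."""
    if strategy == "one-box":
        # With probability p, Omega correctly predicted one-boxing
        # and filled the second box with $1M; otherwise it's empty.
        return p * 1_000_000
    if strategy == "two-box":
        # You always get the guaranteed $1k. With probability 1 - p,
        # Omega wrongly predicted one-boxing, so the $1M is there too.
        return 1_000 + (1 - p) * 1_000_000
    raise ValueError(f"unknown strategy: {strategy}")

for p in (1.0, 0.9, 0.51):
    print(f"p={p}: one-box={expected_payoff('one-box', p):,.0f}, "
          f"two-box={expected_payoff('two-box', p):,.0f}")
```

Setting the two formulas equal, one-boxing wins in expectation whenever p is above roughly 0.5005. The two-boxer’s objection isn’t about expected value at all; it’s that the boxes’ contents are already fixed by the time you choose.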

Do we care about Newcomb’s paradox?

Newcomb’s paradox has received a great deal of discussion from Rationalists, i.e. the community popularized by Eliezer Yudkowsky. That’s how I know about the paradox. But I’m an outsider, and it appears to me like Rationalists stared at this paradox for so long that they went mad. Yudkowsky is a dedicated one-boxer, and has attempted to construct elaborate theories to justify it. Some of these ideas were crucial in the construction of Roko’s Basilisk.

I believe Yudkowsky and others are so obsessed with Newcomb’s paradox because they’re transhumanists. They believe the future will contain a super-powerful AI. To most people, Omega sounds fantastical: how can any entity make perfect predictions about our actions? But to a transhumanist, a super-powerful AI could easily step into the role of Omega. Additionally, we can think about what happens when an AI steps into the role of the player. If the AI is deterministic, then of course we can predict what the AI will choose. So Yudkowsky’s interest is in ensuring that an AI will choose correctly in this situation.

But for the rest of us folks who aren’t transhumanists, does Newcomb’s paradox make sense? Is this a problem we even need to think about?

[Read more…]

I read books: Philosophical Investigations

Philosophical Investigations, by Ludwig Wittgenstein, translated by G. E. M. Anscombe

To steal a description from Existential Comics, Wittgenstein solved philosophy in 1921 with the Tractatus Logico-Philosophicus, and then unsolved it again in 1953 with Philosophical Investigations. Philosophical Investigations is primarily concerned with what we mean by our language. Many 20th-century philosophers (including the early Wittgenstein) tried to translate our language into something more precise, as if to uncover what we really mean. Philosophical Investigations argues that meaning is much more complicated, deriving from practical use.

I have a book queue that consists mostly of queer mystery and romance novels, but Philosophical Investigations was an oddball among them. I’ve been interested in Wittgenstein largely because of my husband. He has a degree in philosophy, and his seminar on Wittgenstein was particularly impactful. If you want to know what our banter sounds like, it’s not altogether unlike the text of Philosophical Investigations. I had never actually read it, though, so I thought I’d correct that.

[Read more…]

The veil of gender ignorance

One of the common TERF talking points is, “If I grew up today, I would have (wrongly) believed I was trans.” As with many TERF arguments, there’s also an anti-ace analogue: “If I grew up today, I would have sooner believed I was bisexual heteroromantic than just gay.” I’m going to take an analytical approach to understanding and countering these arguments.

The veil of ignorance

We can start by borrowing an idea from political philosophy: the veil of ignorance. We imagine that we have the opportunity to construct a society however we wish. Afterwards, we get to take our place within the society. The catch is, we don’t know which place we will take. So we don’t want the society to unfairly favor one group over another, because we may end up taking the unfavored position.

The veil of ignorance is particularly well-suited to this problem, because it’s pretty close to what we’re actually doing. We choose the cultural and social messages that are conveyed to the next generation. And, in order to make that choice, we imagine ourselves in the shoes of the next generation. “If I grew up today…”
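To make the procedure concrete, here’s a toy Python sketch (my own illustration; the utility numbers are invented). It contrasts two standard criteria for choosing a society from behind the veil: average expected utility, and Rawls’s maximin, which judges a society by its worst-off position.

```python
# Toy model of the veil of ignorance (my own illustration, not from
# the original post). A "society" is a list of utilities, one per
# social position; behind the veil, you are assigned a position at
# random, so you can't plan on landing in the favored spot.

societies = {
    "unequal": [100, 15, 15, 15, 15],
    "equal": [30, 30, 30, 30, 30],
}

def expected_utility(society: list) -> float:
    """Average utility: what a uniformly random position gets you."""
    return sum(society) / len(society)

def worst_case(society: list) -> float:
    """Rawls's maximin criterion: judge by the worst-off position."""
    return min(society)

for name, s in societies.items():
    print(f"{name}: expected={expected_utility(s)}, worst case={worst_case(s)}")
```

The “unequal” society has the higher average, but anyone reasoning behind the veil who fears ending up in the unfavored position will prefer the “equal” one.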

[Read more…]

A few things in defense of EA

I’m fairly well off these days. Between a frugal upbringing and being a tech worker married to another tech worker, with no kids or debt, I think life has obviously been unfair in my favor. I want to give some of it away. For these reasons, I think a lot about the effective altruism (EA) movement, albeit as an outsider.

Most of the stuff I say about EA is fairly critical (and there’s more to come!), but I try to be measured in my criticism, because I don’t think it’s all bad. Compared to a lot of stuff PZ Myers says, I’m practically a supporter. In this article, I offer a begrudging and measured defense.

[Read more…]

Risk neutrality in EA

Effective Altruism (EA) is a community focused on donating money to create the greatest good in the world. This is mostly (?) unobjectionable, but there are problems. The EA community holds a number of philosophical viewpoints that most external observers would consider absurd, and which materially affect their donating behavior.

In particular, many people in EA believe that the most efficient way to create the greatest good is by preventing human extinction caused by AI. EA surveys suggest that about 18% of community members donate to “Long term & AI” causes, compared to 62% who donate to global health & development. Clearly, concern about AI is not a unanimous viewpoint within EA, but you have to imagine the kind of community where everyone at least takes it seriously.

EA has been in the news lately because Sam Bankman-Fried, recently arrested for massive fraud at his cryptocurrency exchange FTX, was a vocal proponent of EA. More than a vocal proponent: he founded the FTX Future Fund, which committed $160M in charitable grants to various EA causes. At the top of the Future Fund’s cause list? AI.

Although I’m critical of EA, I actually think it’s a bit unfair to claim that they’re directly responsible for SBF’s fraudulent behavior. Instead, I want to focus on some of SBF’s philosophical views, which are shared by at least some parts of the EA community. Specifically, let’s talk about the idea that charitable giving should be risk-neutral.
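The core contrast is easy to sketch in Python (my own illustration; the dollar amounts are invented). A risk-neutral giver values a gamble at its expected dollar amount; a risk-averse one discounts it for the chance of coming away with nothing:

```python
import math

# Risk-neutral vs. risk-averse valuation of a charitable gamble
# (my own illustration; the dollar amounts are invented).

sure_thing = 1_000_000        # a guaranteed $1M for charity
p, payout = 0.10, 15_000_000  # or: a 10% chance of $15M, else nothing

# A risk-neutral agent maximizes expected dollars and takes the gamble.
expected_value = p * payout   # $1.5M > $1M

# A risk-averse agent with log utility prefers the sure thing, because
# log utility heavily penalizes the 90% chance of ending up empty-handed.
# (log(0) is undefined, so floor the empty outcome at $1.)
def log_utility(p: float, payout: float, floor: float = 1.0) -> float:
    return p * math.log(payout) + (1 - p) * math.log(floor)

print(f"risk-neutral: gamble ${expected_value:,.0f} vs. sure ${sure_thing:,.0f}")
print(f"log utility:  gamble {log_utility(p, payout):.2f} vs. sure {math.log(sure_thing):.2f}")
```

Here the risk-neutral valuation favors the gamble, while the log-utility valuation favors the sure thing.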

[Read more…]

Regulating data science with explanations

Data science has an increasing impact on our lives, and not always for the better. People speak of “Big Data”, and demand regulation, but they don’t really understand what that would look like. I work in one of the few areas where data science is regulated, so I want to discuss one particular regulation and its consequences.

So, it’s finally time for me to publicly admit… I work in the finance sector.

These regulations apply to many different financial trades, but for illustrative purposes, I’m going to talk about loans. The problem with taking out a loan is that you need to pay it back plus interest. The interest is needed to give lenders a return on their investment, and to offset the losses from other borrowers who don’t pay it off. Lenders can increase profit margins and/or lower interest rates if they can predict who won’t pay off their debt, and decline those people. Data science is used to help make those decline decisions.

The US imposes two major restrictions on this use of data science. First, there are anti-discrimination laws (a subject I might discuss at a later time) (ETA: it’s here). Second, an explanation must be provided to people who are declined.
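To give a flavor of what that explanation requirement looks like in practice, here’s a toy Python sketch (my own; not the regulatory standard, and not any real lender’s model) of generating “reason codes” for a decline by ranking each feature’s negative contribution to a linear credit score:

```python
# Toy illustration of decline "reason codes" (my own sketch; the
# features, weights, and threshold are all invented).

WEIGHTS = {
    "years_of_credit_history": 2.0,   # longer history helps the score
    "recent_missed_payments": -15.0,  # missed payments hurt it
    "debt_to_income_ratio": -40.0,    # high DTI hurts it
}
BASELINE = 50.0    # score of a hypothetical neutral applicant
THRESHOLD = 40.0   # applicants scoring below this are declined

def score(applicant: dict) -> float:
    return BASELINE + sum(WEIGHTS[f] * v for f, v in applicant.items())

def reason_codes(applicant: dict, top_n: int = 2) -> list:
    """Report the features that dragged the score down the most."""
    contributions = {f: WEIGHTS[f] * v for f, v in applicant.items()}
    worst = sorted(contributions.items(), key=lambda kv: kv[1])
    return [feature for feature, c in worst[:top_n] if c < 0]

applicant = {
    "years_of_credit_history": 2,   # 2 years of history
    "recent_missed_payments": 1,    # 1 recent missed payment
    "debt_to_income_ratio": 0.6,    # 60% DTI
}

if score(applicant) < THRESHOLD:
    print("Declined. Principal reasons:", reason_codes(applicant))
```

Real models are far more elaborate, but the underlying move, attributing a decline to the applicant’s worst-scoring attributes, carries over.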

[Read more…]