That’s an unholy trinity if ever I saw one: Bostrom, Musk, Galton. They’re all united by a terrible, simplistic understanding of genetics and a self-serving philosophy that reinforces their confidence in bad ideas. They are longtermists. Émile Torres explains what that is and why it is bad … although you already knew it had to be bad because of its proponents.
As I have previously written, longtermism is arguably the most influential ideology that few members of the general public have ever heard about. Longtermists have directly influenced reports from the secretary-general of the United Nations; a longtermist is currently running the RAND Corporation; they have the ears of billionaires like Musk; and the so-called Effective Altruism community, which gave rise to the longtermist ideology, has a mind-boggling $46.1 billion in committed funding. Longtermism is everywhere behind the scenes — it has a huge following in the tech sector — and champions of this view are increasingly pulling the strings of both major world governments and the business elite.
But what is longtermism? I have tried to answer that in other articles, and will continue to do so in future ones. A brief description here will have to suffice: Longtermism is a quasi-religious worldview, influenced by transhumanism and utilitarian ethics, which asserts that there could be so many digital people living in vast computer simulations millions or billions of years in the future that one of our most important moral obligations today is to take actions that ensure as many of these digital people come into existence as possible.
In practical terms, that means we must do whatever it takes to survive long enough to colonize space, convert planets into giant computer simulations and create unfathomable numbers of simulated beings. How many simulated beings could there be? According to Nick Bostrom, the father of longtermism and director of the Future of Humanity Institute, there could be at least 10^58 digital people in the future, or a 1 followed by 58 zeros. Others have put forward similar estimates, although as Bostrom wrote in 2003, “what matters … is not the exact numbers but the fact that they are huge.”
They are masters of the silly hypothetical: these are the kind of people who spawned the concept of Roko’s Basilisk, “that an all-powerful artificial intelligence from the future might retroactively punish those who did not help bring about its existence”. It’s “the needs of the many outweigh the needs of the few”, where the “many” are padded with 10^58 hypothetical, imaginary people, and you are expected to serve them (or rather, the technocrat billionaire priests who represent them) because they outvote you now.
The longtermists are terrified of something called existential risk, which is anything they fear would interfere with that progression towards 10^58 hardworking capitalist lackeys building their vision of a Randian paradise. It’s their boogeyman, and it doesn’t have to actually exist. It’s sufficient that they can imagine it and are therefore justified in taking actions here and now, in the real world, to stop their hypothetical obstacle. Anything fits in this paradigm, no matter how absurd.
For longtermists, there is nothing worse than succumbing to an existential risk: That would be the ultimate tragedy, since it would keep us from plundering our “cosmic endowment” — resources like stars, planets, asteroids and energy — which many longtermists see as integral to fulfilling our “longterm potential” in the universe.
What sorts of catastrophes would instantiate an existential risk? The obvious ones are nuclear war, global pandemics and runaway climate change. But Bostrom also takes seriously the idea that we already live in a giant computer simulation that could get shut down at any moment (yet another idea that Musk seems to have gotten from Bostrom). Bostrom further lists “dysgenic pressures” as an existential risk, whereby less “intellectually talented” people (those with “lower IQs”) outbreed people with superior intellects.
Dysgenic pressures, the low IQ rabble outbreeding the superior stock…where have I heard this before? Oh, yeah:
This is, of course, straight out of the handbook of eugenics, which should be unsurprising: the term “transhumanism” was popularized in the 20th century by Julian Huxley, who from 1959 to 1962 was the president of the British Eugenics Society. In other words, transhumanism is the child of eugenics, an updated version of the belief that we should use science and technology to improve the “human stock.”
I like the idea of transhumanism, and I think it’s almost inevitable. Of course humanity will change! We are changing! What I don’t like is the idea that we can force that change into a direction of our choosing, or that certain individuals know what direction is best for all of us.
Among the other proponents of this nightmare vision of the future is Robin Hanson, who takes his colonizer status seriously: “Hanson’s plan is to take some contemporary hunter-gatherers — whose populations have been decimated by industrial civilization — and stuff them into bunkers with instructions to rebuild industrial civilization in the event that ours collapses”. Nick Beckstead is another, who argues that “saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries … it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.”
Or William MacAskill, who thinks that “[i]f scientists with Einstein-level research abilities were cloned and trained from an early age, or if human beings were genetically engineered to have greater research abilities, this could compensate for having fewer people overall and thereby sustain technological progress.”
Just clone Einstein! Why didn’t anyone else think of that?
Maybe because it is naive, stupid, and ignorant.
MacAskill has been the recipient of a totally uncritical review of his latest book in the Guardian. He’s a philosopher, but you’ll be relieved to know he has come up with a way to end the pandemic.
The good news is that with the threat of an engineered pandemic, which he says is rapidly increasing, he believes there are specific steps that can be taken to avoid a breakout.
“One partial solution I’m excited about is called far ultraviolet C radiation,” he says. “We know that ultraviolet light sterilises the surfaces it hits, but most ultraviolet light harms humans as well. However, there’s a narrow-spectrum far UVC specific type that seems to be safe for humans while still having sterilising properties.”
The cost for a far UVC lightbulb at the moment is about $1,000 (£820) per bulb. But he suggests that with research and development and philanthropic funding, it could come down to $10 or even $1 and could then be made part of building codes. He runs through the scenario with a breezy kind of optimism, but one founded on science-based pragmatism.
You know, UVC, at 200-280 nm, is the most energetic form of UV radiation; we don’t get much of it here on planet Earth because it is quickly absorbed by any molecule it touches. It’s busy converting oxygen to ozone as it enters the atmosphere. So sure, yeah, it’s germicidal, and maybe it’s relatively safe for humans because it cooks the outer, dead layers of your epidermis and is absorbed before it can zap living tissue layers, but I don’t think it’s practical (so much for “science-based pragmatism”) in a classroom, for instance. We’re just going to let our kiddos bask in UV radiation for 6 hours a day? How do you know that’s going to be safe in the long term, longtermist?
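For a sense of scale on the energies involved, here’s a quick back-of-the-envelope check (a minimal Python sketch; the band boundaries are the conventional ones, and nothing here comes from MacAskill or the Guardian piece):

    # Photon energy E = h*c / wavelength, converted to electronvolts.
    # Conventional bands: UVA 315-400 nm, UVB 280-315 nm, UVC 100-280 nm;
    # "far UVC" lamps emit around 222 nm.
    h = 6.626e-34   # Planck constant, J*s
    c = 2.998e8     # speed of light, m/s
    eV = 1.602e-19  # joules per electronvolt

    for label, nm in [("UVA, 400 nm", 400), ("UVB, 300 nm", 300),
                      ("far UVC, 222 nm", 222), ("UVC edge, 200 nm", 200)]:
        print(f"{label}: {h * c / (nm * 1e-9) / eV:.1f} eV per photon")

That works out to roughly 3 eV per photon at the UVA end versus about 6 eV at 200 nm, comfortably enough to break chemical bonds, which is why it sterilizes and why skin and eye safety is the open question.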
Quacks have a “breezy kind of optimism”, too, but it’s not a selling point for their nostrums.
If you aren’t convinced yet that longtermism/effective altruism is a poisoned chalice of horrific consequences, look who else likes this idea:
One can begin to see why Elon Musk is a fan of longtermism, or why leading “new atheist” Sam Harris contributed an enthusiastic blurb for MacAskill’s book. As noted elsewhere, Harris is a staunch defender of “Western civilization,” believes that “We are at war with Islam,” has promoted the race science of Charles Murray — including the argument that Black people are less intelligent than white people because of genetic evolution — and has buddied up with far-right figures like Douglas Murray, whose books include “The Strange Death of Europe: Immigration, Identity, Islam.”
Yeah, NO.
remyporter says
I think the issue with transhumanism is that it’s built on a cybernetic approach, and by that I mean a directed approach. And once you take that approach, you now have to answer the question: who sets the direction, by what means, and how is that direction achieved?
And that’s where we open a can of worms, because as much as we can imagine a utopian “each individual uses the tools of transhumanism to define and create their best self”, the dystopia is both easier to imagine and seems much more plausible. As it is, we don’t allow the technological tools we have to be employed by individuals to self-actualize; we use technology to drive a capitalist engine forward, to the benefit of a few on the backs of the many. And it’s unlikely that we could truly democratize the technologies of transhumanism; the last attempt at a widely democratized network of networks ended up giving us Facebook, Amazon, and Google, who emphatically are not on the side of individual humans.
hemidactylus says
So are Bostrom and other longtermists promoting Quiverfull ideology for quasi-sentient bots? If Musk is himself sentient he sure lacks self-awareness.
I had read this on Huxley:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4366572/
Mayr too flirted with positive eugenics. When I pointed this out to Coyne not too long ago in email he eventually posted about it. I still like Mayr overall but his letter to Crick was unnerving:
https://collections.nlm.nih.gov/ext/document/101584582X183/PDF/101584582X183.pdf
SchreiberBike says
It takes talent and a lot of self-centeredness to make long-term thinking into a bad thing.
Angle says
I am now morbidly curious – has anyone ever actually tried to clone Einstein? (Or more likely, to get permission and/or resources to clone Einstein?) I assume it never got very far if so, but surely someone had to be crazy enough to try? XD
whywhywhy says
Seems like they want to trade in “every sperm is sacred” for “every bit is sacred”. I don’t see the improvement…
Reginald Selkirk says
The same goes for those 10^58 computer-simulated “people.” Why not just 10^58 copies of one simulation?
Why are these simulations needed anyway? Can anyone verify that they will exist? It sounds an awful lot like religious ideas of heaven.
raven says
That is quite a wall of crazy that PZ posted. This stood out.
And the data that supports this idea is what?
There isn’t one single bit of data.
This sounds a lot like xianity though.
God created the universe 13.8 billion (or 6,000) years ago and jesus is going to show up any minute now, kill 8 billion people, and destroy the earth. Because he loves us.
raven says
Sam Harris is reliably an idiot.
There is no such thing as Western Civilization!!!
There is a large collection of loosely related civilizations that borrow from each other, cooperate with each other, fight wars with each other, and change rapidly in Real Time.
The western civilization that Sam Harris lives in doesn’t look at all like the one I live in. Or would want to live in.
The Northern California version that I live in now doesn’t look much like the far northern culture that I was born and raised in, the one called, “dead end nowheresville”.
We aren’t at war with Islam. Islam is mostly at war with…Islam. A lot of Muslims might not like us much but we are over there. They really don’t like each other and they are all a lot closer to each other. Right now, it is Saudi Arabia on one side of the perpetual Yemen civil war, the Syrian civil war, and the cold war between Sunnis and Shiite Iran.
Marcus Ranum says
I doubt that Bostrom actually believes he is in a simulation – it’s just a position he can take when he wants a good philosophowank that lets him feel smart. And philosophers wonder why people make fun of them.
I’m curious what Spencer’s childhood was like. Bostrom and Musk show the marks of smart kids growing up feeling unappreciated, trying to make up for it later in life. Musk’s parenting was catastrophic; some people should not raise kids, and his father is a good example of how that works. There is a Behind the Bastards episode about Musk that leaves the listener split between hatred and pity.
consciousness razor says
Life is full of suffering. Why wouldn’t our obligation be to avoid suffering for this stupidly large number of possible individuals, simply by not making them in the first place?
In the real world, industry hacks in the EU last year (as I understand it, against the objections of a few philosophers on the commission) made sure their new “plan” would commit to no real binding laws or any “long-term” thinking of any kind, regarding the creation and “use” of general AIs, artificial consciousness, etc.
They didn’t want a moratorium on that kind of stuff for as long as we’ve got no fucking clue what the fuck we’re doing and can’t possibly know how to make sense of all the ethical issues that raises. They’re practically demanding that any old asshat should be able to go in with guns blazing and no regard whatsoever for the rights of artificial persons, while nobody else should have anything to say about it. (No, human beings will not “upload” themselves into computers, so I won’t even bother with that bullshit. But a regular old conscious artificial being is definitely not out of the question, and their pain and suffering would be just as real. That’s what they’re really talking about when they act like it’s a moral imperative.)
As always, and much like the genetic engineering shit you brought up in a previous thread, it’s all about turning sentient beings into products, a way to exploit them for fame and fortune. In this case, it’s a chance to say that you (out of sheer grit and determination and genius) have (all by yourself) done something that nobody has done before … which I guess is making babies for our entertainment. Or for something? I guess? I don’t know what it’s for.
So, 10^58 digital persons will fix all of that … how exactly? Wut? Huh?
Pretty sure the holodeck version of him on TNG came first. Good old Barclay, taking care of business.
raven says
It’s been done, sort of.
He did have children. They at least have half of his genome.
And grandchildren and great-grandchildren.
They seem to be smart people but they don’t seem to be super-geniuses like the first one.
birgerjohansson says
Einstein’s second son from his first marriage, Eduard, developed schizophrenia.
I would be more interested in a clone of Srinivasa Ramanujan.
LykeX says
Why worry about this humongous number of potential future people when we can’t manage the ones we have right now? Why plan ahead for millennia, when we still haven’t learned how to plan beyond the next election?
In principle, I like transhumanism. In practice, I think it’s often missing the forest for the trees. We’ve got much more pressing problems than rogue AIs and engineered super-bugs.
specialffrog says
In the Iain M. Banks book “Surface Detail”, some civilizations use this kind of technology to enforce their ideas of an afterlife: people are uploaded to hell if they are deemed to deserve it. Which leads to other people trying to find and destroy the servers running hell.
Snidely W says
Oh, great.
In that great Venn diagram that is humanity, here’s another group of folks that I get to shove into that corner* with the people that are just fucking nuts. And for such unanticipated reasons.
JHFC.
*my Venn diagrams have free-form shapes.
christoph says
Sounds like someone fantasizing about being in the Matrix. I just looked up longtermism; the term was coined in 2017.
consciousness razor says
christoph:
“Transhumanism” has the trans associations they probably don’t like, “futurism” was sort of tainted by fascists, and I guess “futurologyism” just sounds a bit dimwitted.
So when you want to say “please won’t somebody think of the children (which we need to torture and enslave)?” the term for that is apparently longtermism, which might last for a few years. If only they had named it after Liz Cheney, so we could already be done with it.
Akira MacKenzie says
Well yeah, especially if you happen to live in the time the existential risk threatens to kill you.
birgerjohansson says
Why have many small computer simulations when you can use that power for one very big AI? I am thinking along the lines of Colossus in The Forbin Project (people of PZ’s generation will know what I mean).
Or alternatively :
“Is there a god?”
“Yes, now there is a god.”
birgerjohansson says
To keep the AI numbers down, maybe they can be made mutually hostile?
https://youtu.be/jv-NI4X_Amg
Ray Ceeya says
Way back in 1957, Isaac Asimov published an anthology called “Earth Is Room Enough”. This planet is our home. We need to take care of it. We can’t just rape mother Earth and run away. All that sci-fi tech these guys see not only doesn’t exist but may NEVER exist. Terraforming Mars and Venus? Bunk. FTL? Probably physically impossible. Cryogenics? Also probably impossible. Generation ships? Well, Biosphere 2 couldn’t keep a crew of eight alive without pumping in outside oxygen. As for The Martian, it would be like growing potatoes in salt. You know the old science exercise where kids grow a plant in a little cup? Yeah, try that with a cup full of rust and salt. It won’t work.
Susan Montgomery says
@19 If we’re lucky, it’s Colossus. An AI derived from the collective human consciousness represented by the Internet (that is, crypto scams, cat memes and hardcore pornography) would be much more like Harlan Ellison’s AM than the rational Colossus.
John Morales says
Quite the contrast with https://en.wikipedia.org/wiki/Long_Now_Foundation
mastmaker says
Eugenics is like that planet (too lazy to look up the name) in the Hitchhiker’s Guide. They got rid of all the useless “mid-level” people, including telephone sanitizers, by putting them on a rocket (bound for Earth, as it turned out), telling them the rest would follow. The planet benefited from the absence of the said “mid-level” folks and thrived … until they all mysteriously died off from an infection contracted from a dirty telephone!
drsteve says
They did try to clone Albert Einstein, but with mixed results. The subject demonstrated notable skill as a filmmaker and comedian, but contrary to hopes was unable to make any significant contributions to the unification of quantum mechanics and general relativity.
consciousness razor says
drsteve:
It’s Jordan Peele, isn’t it? I bet it’s Jordan Peele. He’s only in his forties, you know. He’s still got time.
IX-103, the ■■■■ing idiot says
I’m not sure I trust that piece from Salon. It has the typical formulation of yellow journalism: present a boogeyman (Elon Musk), point out how it is tangentially related to something common and strong, and then relate that boogeyman to something else your audience hates (transhumanism, eugenics). Or maybe the author of the article just misinterpreted the explanation they were given and wasn’t trying for sensationalism. Either way it doesn’t seem worth the paper it’s written on.
From what I’ve read of longtermist works (admittedly those were more mainstream; I don’t have experience with the fringe), it really is just a focus on improving the survival of the human species. That doesn’t necessarily mean colonizing Mars, and it certainly says nothing about uploading 10^58 people into a computer. It just places a value on humanity continuing to exist as a species for an arbitrarily long time.
Longtermism draws a lot ideologically from both the Long Now Foundation and Effective Altruism. If you happen to believe that humanity has value beyond all other (non-sentient) animals, organisms, or collections of atoms, then preserving the human species is rational.
I’m not a longtermist myself, as I don’t think humanity as a species is intrinsically valuable. But I think I understand where they are coming from.
drsteve says
@26: I’d call it a reasonable claim that Peele carries some of this Einstein’s DNA, in a certain non-literal sense:
https://en.wikipedia.org/wiki/Albert_Brooks
John Morales says
CR, Young Einstein.
unclefrogy says
As described and usually advocated, it does have the strong feel of religion, specifically Western religion; the whiff of eternal life is strong, as regards humans anyway. The emphasis on picking the superior over the inferior as evolutionary ends seems based entirely on race, nationality, and class, and completely ignores the criteria that “natural selection” has favored these many millions of years. The only reason these ideas can even be thought of in a serious way is the unusually prosperous and stable period the “west” is going through just now, which allows the pampered rich and “intellectuals” the luxury of indulging in idle BS.
see @6 & 21
Raging Bee says
@27: What I want to know is, who, if anyone, is actually funding the loony branch of this school of thought?
consciousness razor says
John Morales, #29:
Australian cinema at its finest.
John Morales says
CR, ouch.
lanir says
People who go in for eugenics in any form are weird. Most of the time they think they’re promoting intelligence, but intelligence isn’t very well defined and definitely doesn’t seem to be a strictly inherited set of genes. So the ignorance of eugenics supporters about their own ideology means… their ideology eliminates them as desirable outcomes, because they believe in it?
As philosophy oopses go, that’s just embarrassingly bad.
lochaber says
this whole “we are living in a computer simulation” thing reminds me of that intro to philosophy class some decades back…
blah blah blah, if you can imagine something perfect, it can only be more perfect by existing, therefore god
blah blah blah, if a simulation can have a simulation, there are infinite simulations, and therefore we are in a simulation
anyways, SchreiberBike@3 pretty much nailed it
Jim Balter says
Bostrom (with his nostrums) is bad at math and worse at philosophy. There is no moral imperative to bring non-existent people into existence … shades of fetusphile forced birthers.
@10
No, fool; learn to read:
“one of our most important moral obligations today is to take actions that ensure as many of these digital people come into existence as possible”
The “existential risk” they are talking about is to these fantasized digital people, not to us. So we supposedly have a moral obligation to avoid nuclear wars etc. not for our own sake but because that would prevent the fantasy from being realized–it takes reasonable concern for “future generations” to stupid extremes. There’s nothing in that about the fantasized digital people helping to prevent it (other than motivating us for their sake), dummy.
@27
Another fool who can’t read. “According to Nick Bostrom, the father of longtermism and director of the Future of Humanity Institute, there could be at least 10^58 digital people in the future”: what you are calling “the fringe” is the mainstream of longtermism. You are talking about reading transhumanist works, of which longtermism is a fringe that is becoming more mainstream. And transhumanism has never been as benign as you make out; in addition to being “the child of eugenics”, it’s infused with nonsense about “the singularity”, the supposed moment when AIs take over and make human civilization obsolete. A prominent transhumanist is Ray Kurzweil, a genuine inventive genius and a crank, who pops hundreds of pills and plans to be frozen when he dies, all so his body will survive until the singularity, at which point the AIs will supposedly upload and immortalize him.
All this nonsense is a bit like Pascal’s Wager, based on an unreasonable expectation that one particular possibility (if even) out of a vast number of alternatives is reality.
Jim Balter says
@21 Yes, a lot of people seem to forget that the F in SF stands for “fiction” (and the S stands for “speculative”). I occasionally peruse the r/AskScience sub on Reddit and I see a lot of this stuff from people who haven’t grasped that, even if any of it were remotely possible, it depends on human civilization surviving the sixth extinction that it almost certainly isn’t going to survive.
astringer says
John Morales @ 23
Awesome! Pharyngula’s second ref to Brian Eno in a week (previous via Spiders->Bowie therefore “Heroes”)
Jim Balter says
Gee, I wonder if that has anything to do with biological determinism being crap?
It’s worth remembering that Einstein occupied a critical moment … people like Maxwell, Faraday, Lorentz, Michelson, and Morley set the stage. And when it came to quantum mechanics, people like Niels Bohr were his betters. Perhaps his real genius was in taking walks: https://qz.com/work/1494627/einstein-on-the-only-productivity-tip-youll-ever-need-to-know/
Jim Balter says
I attended his first public appearance, at the Humanist Society of Santa Barbara back in 2004 (https://www.swt.org/robert/writ/samharris.htm). He seemed like an admirable “new atheist” back then … little did we know. I first realized where he was heading when I read his op-ed in the L.A. Times in which he attacked liberals for thinking the U.S. foreign policy had anything whatsoever to do with Islamic extremism in the Middle East.
specialffrog says
@27: EA is itself fringe: https://rationalwiki.org/wiki/Effective_altruism
IX-103, the ■■■■ing idiot says
@36: You probably should learn to read. I mentioned that I don’t trust that article, so what good is citing the article at me going to do? Further, I directly addressed the point you appear to be trying to make in the sentence before the one you quote:
“Or maybe the author of the article just misinterpreted the explanation they were given”
I have heard some attempts at a utilitarian explanation of longtermism, justifying it based on an arbitrary number of humanity’s descendants. I could imagine someone who is also a transhumanist taking that a step further to an arbitrary number of digital people, but transhumanism is not a core part of longtermism, and neither is the concept of a singularity (which actually seems like one of the existential risks longtermism would try to avoid).
If you’ve only encountered longtermism in transhumanist works, then all I can say is you need to branch out more. That doesn’t mean longtermism is a transhumanist-only thing any more than eugenics is a Nazi-only thing.
Roberto Aguirre Maturana says
This longtermism thing sounds like Australopithecus taking measures to ensure the wellbeing of Homo sapiens.
KG says
How about the eyes?
brightmoon says
Transhumanism? We can’t even get robotic prosthetics to the people who need them, and the ones we have are difficult to use. I’m the first person to say ivory tower stuff is great, but let’s get it to actually work too.
consciousness razor says
I wasn’t just reading off what somebody else had stated. I was drawing conclusions about it for myself, and it occurs to me that this crap will not be helpful, which should concern these people if they’re being serious. I’ll say a little more about that.
Don’t you think that the enormous efforts that would be required to create and maintain them would significantly affect our ability to deal with climate change and so forth? It’s not like somebody would just build this in their garage. It would mean huge investments in a long-term, global project. (That’s why you’re supposed to offer whatever you’ve got to the collection plate for their scam, er, wondrous and morally obligatory business opportunity.) The energy requirements alone would be huge. Think crypto is bad for the environment? This would be worse.
While we’re still in the long process of trying to make these digital people, they will not be helping us with climate change, because they don’t yet exist and can’t do anything. We do not currently lack, and will not lack, motivation for dealing with climate change; the project would at best be a distraction from it during this phase. After we’ve made enough progress that there are at least some digital people, we would be in worse shape for dealing with the effects of climate change. And once they actually exist, they still won’t be helping us with problems like climate change, for a different reason: they’re digital people, and it’s not clear there’s anything they could do that would have any impact on things like that.
I think all of that matters and that all of the confused noises that they make about such risks are just a lot of hand-wringing bullshit. There’s no sense in which this is doing anything constructive about the problems they purport to care about, whether their concerns are about humans or digital people or anyone else who might be affected by them. That’s a pretty good reason to reject their ridiculous claims, in case you didn’t already have half a dozen other good reasons to do so.
birgerjohansson says
Step one: save Earth (stopping climate change, ocean acidification, and deforestation, reducing and then halting population growth, et cetera).
Step two: generate enough resources to invest in long-term projects.
See you sometime after 2050 for step three.
IX-103, the ■■■■ing idiot says
@43. Concisely put! I’ll have to remember that one.
birgerjohansson says
I looked at the image. If you fuse Musk and Galton you get Mussolini.
The leftover Boström looks like the unfunny guy in Ghostbusters (Rick Moranis) who gets eaten by a Terror Dog.
Howard Brazee says
Some people talk about how much better we would be without religions. But the human nature that pushes the worst parts of religions is still here, and will find outlets that are just as crazy.
We see lots of evidence of that.
patricklinnen says
There is a lot of handwavium in “We will live in a simulation in the future.” Like, who operates and maintains the H/W and OS? (Skipping the little-endian vs big-endian aspects.) Not to mention things like upgrades and supply chain issues. Even with “Star Trek” matter-creation technologies, there will be a point of diminishing returns. We could also consider root hacks and fatal errors.
Then there is the matter of the simulation. THE simulation? Which one? 1950’s? 1850’s? 1550’s? Will it be “Leave It to Beaver”/“Father Knows Best”? SCA’s “Chivalry and Chocolate” or “Knights in Shining Armor and Rocket Ships”? “Galt’s Gulch Gone Wild”? Afrofuturistic, Mesoamerican cyber-punk, or South-East Asian / Polynesian mythos? (Funny how the future simulated peon^H^H^H^Hperson will always be White, Northern European (if not British upper-crust), and Masculine.) Or something magical, NPCs optional. If every simulated person is the main character, does the simulation of a simulated pig-herder NPC count as a person?
Not to mention: will the separate instances (virtual images sharing the same execution space is almost a given currently) be kept isolated, or can they connect/transfer?
birgerjohansson says
Howard @ 50
“will find outlets that are just as crazy”
Let me introduce you to a leader who deliberately does not want an assessment of what government policy will lead to!
https://youtu.be/5lBTzXEJ8VA
It brings me back …
“do you drive with your eyes open or are you, like, using the Force?”
chrislawson says
Bostrom’s long-termism is a dreadful philosophy, essentially a moral Ponzi scheme that requires an absurdly large number of future investors, and Torres makes a lot of good points, but…
Transhumanism is not “the child of eugenics” just because Julian Huxley promoted both of them. Yes, there are links between the two, but there are links with Fisher’s exact test too. JBS Haldane wrote some of the earliest transhumanist arguments (even earlier than Huxley — 1924 vs. 1957) and was a vocal opponent of eugenics.
(Also, a minor quibble: UV light is generally taken to be 10 nm to 400 nm wavelength, which means UV-C is far from the most energetic band of UV light.)
Christopher Stephens says
The philosopher Kieran Setiya has a nice (critical) review of MacAskill’s book in the Boston Review. He points out some of the big problems with Longtermism’s moral views.
https://bostonreview.net/articles/the-new-moral-mathematics/
consciousness razor says
Christopher Stephens, #54:
Yeah, that’s a really nice article….
Also relevant…. Even if we’re looking at things from a utilitarian perspective (whatever you may think of that), what you do is add positives and negatives while aiming for the largest amount of “utility” possible. But this does not imply that you must focus on boosting up whatever “positives” there may be, although that might at first sound sort of appealing.
(Much less does it mean you must treat something like “a sentient being exists” as if it must be a positive thing, which is just silly and ought to be enough to get you laughed off the stage at TEDx or wherever you’re saying this stuff. Setiya is much more patient and thorough with it than I am, but suffice it to say that this assumption is trash.)
Getting back to the point, your approach can certainly be to first and foremost reduce suffering for anybody capable of experiencing it, not to try to make some already-pretty-happy people even more happy or something along those lines. You do not have to take a sort of trickle-down approach and try to make wealthy people even wealthier (to borrow from some familiar garbage economics). You can instead focus on lifting up people at the bottom by aiming to reduce whatever suffering they might endure. That achieves the goal of increasing “utility” just as well, because in life there are lots of negatives to work with and not only a bunch of positives, and there just isn’t anything about utilitarianism as such which implies otherwise.
So you don’t have to think somebody ought to suffer more in order to benefit “the greater good” or what have you, where that amounts to making people who are doing pretty well to begin with even more cozy and comfortable than they already were (and just accepting that this is offset by a “smaller amount” of suffering for others, whatever that’s supposed to mean). That is in fact how some fairly deranged self-serving people in history have thought about these things, but nothing about a utilitarian framework (or, say, any such attempt to quantify anything) forces us to make those kinds of choices.
StevoR says
@ 37. Jim Balter :
That seems like a pretty big call to me. I doubt our current civilisation – if we can call it that – will survive, but our whole species? Humans are remarkably adaptable and tough, and collectively hard to kill off completely as a species. I can imagine we’ll have a massive population crash (say even 99%) and lose most of our technology and knowledge and have some sort of post-apocalyptic horrorshow existence for centuries and millennia afterwards. I think whilst it is a possibility that our species goes completely extinct or is transformed into a new species or three, the odds on us totally vanishing completely are fairly remote. Which isn’t that optimistic a forecast given we’re in the process of losing the world as we know it right now.
I see some appeal, as an SF fan and erstwhile writer (and human individual), in bold visions of Humanity’s future, and even in working to see them develop from where we are currently, but yeah, I think the Longtermists discussed here go way too far with that and prioritise a hypothetical millennia-distant future wa-aay too much. Hurting people living now to try and achieve some dream of the year 9000 or so? Yeah, really not okay or acceptable or cool at all.
@43. Roberto Aguirre Maturana: “This longtermism thing sounds like Australopithecus taking measures to ensure the wellbeing of Homo sapiens.”
Great line, but they kinda did, by surviving and evolving and leaving us parts of their genetic legacy. Of course, they almost certainly didn’t do that deliberately or have a vision of who and what we’d become, but still.
rietpluim says
These “visionaries” do nothing but endlessly ruminate on science fiction ideas from the seventies.
erik333 says
@35 lochaber
The nested simulations idea is so stupid it boggles the mind. Any simulation has overhead in encoding information and processing; the more recursive the system gets, the less capacity there is for consciousnesses. If our current universe is any measure, the fraction of the observable universe’s resources available to run simulations is so small that thinking we’re in a simulation is absurdly stupid.
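To make the overhead argument concrete, here’s a toy sketch (the compute budget and the per-level fraction are invented numbers for illustration, not estimates from anyone’s paper):

    # Toy model of nested simulations: if each level can devote only a
    # fraction r of its own compute to the level below it, the capacity
    # left for simulated minds shrinks geometrically with nesting depth.
    host_ops = 1e40  # hypothetical compute budget of base reality (made up)
    r = 0.1          # fraction of compute passed down per level (made up)

    for depth in range(6):
        print(f"depth {depth}: {host_ops * r ** depth:.1e} ops available")

Five levels down you’re already at a hundred-thousandth of the original budget, and that’s before paying any encoding overhead at all.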
Ian King says
The whole idea strikes me as nothing more than an attempt to place a thumb on the scale of reason.
The greatest good for the greatest number isn’t a bad starting point, and I’d argue that regular, reliable public transport, universal healthcare and a decent minimum wage are all consequences of that outlook. Unfortunately the awful ghouls who oppose all of these sensible, humanist proposals have decided to imagine functionally infinite numbers of people in order to skew the calculations to the point where they can justify literally anything, up to and including slavery and genocide.
anoni signup says
Is there any logic anywhere in this diatribe?
Such a mess of false connections just to try and trash those you are so envious of.
PZ Myers says
Envious? Of Bostrom, Musk, and Galton? That’s projection. You might be envious, but no, in my case that’s like suggesting I’m envious of Trump. Laughable.
Raging Bee says
So are Bostrom and other longtermists promoting Quiverfull ideology for quasi-sentient bots?
I see a few different strands of “thought” feeding into this asinine delusion. First, yes, there’s the Quiverfull, make-as-many-babies-as-possible-cuz-that’s-what-we-exist-to-do ideology, which Musk seems to have learned from his Afrikaner dad. There’s also straight-up eugenics, with all the white-supremacist baggage it’s had from day one. And of course there’s pure escapism through and through: insisting that nothing in the present or even the remotely-foreseeable future matters as much as one’s own fantasy of a far-future cyber-fantasy-world is about as escapist as one could possibly get. And all of it cranked up to the 58th power just cuz big numbers are totally awesome (at least to an overgrown child). (But 10^58 is still not bigger than a googolplex, is it? Why can’t we have a googolplex simulated people in the Matrix? That’d be even more awesome!)
If anyone wants more book suggestions about this fantasy, I suggest “The Uploaded” by Ferrett Steinmetz (what were his parents thinking when they picked that first name?!). It’s about a not-so-far future where meatspace humans are literally outvoted by dead humans whose personalities have been copied/uploaded into a complete artificial cyber-reality; and living humans in the real meatspace world are forced to impoverish and immiserate themselves to feed the needs and wants of the dead/uploaded so they can keep on having fun adventures and never have to sacrifice or worry about limited server-space for their games.
Raging Bee says
anoni: care to explain who here is allegedly “envious” of what, exactly?
hemidactylus says
I watched A Glitch in the Matrix with a sense of amusement. Bostrom is obnoxious to me but not to the extent Musk is with his techbro arrogance.
Simulation presumptions, and how they can be cashed out in reality, were explored especially via those gamers treating others as mere non-player characters. They also touched on the Mandela effect as a purported indicator of simulation. Rebecca Watson has stuff to say on that. C3PO’s silver leg has me convinced:
https://skepchick.org/2022/08/did-curious-george-have-a-tail-study-examines-the-mandela-effect/
The Matrix for me was a way of looking at Berkeleyan idealism, or gross misogynist Schopenhauer’s spin on the veil of maya. I wasn’t even remotely aware of pomo Jean Baudrillard at the time. All these freak-minded simulation-loving tech bros got surreptitiously trolled by a postmodernist trope (e.g., “the Gulf War and 9/11 did not really take place”) influencing moviemakers. Priceless frickin’ irony is a delicious thing. Musk is a pretentious dweeb.
timgueguen says
The big thing that drives belief in the Mandela Effect is that people think their memories are unchanging and 100 percent accurate, refusing to accept that they aren’t “video vérité.” I’ve actually seen people argue that The Flintstones were called “The Flinstones” at some point.
DanDare says
The ten to the umpty power digital humans in the future argument makes me think of a similar one about imagining the greatest possible being.