Who needs religion when you’ve got these clowns promoting bad ideas?

That’s an unholy trinity if ever I saw one: Bostrom, Musk, Galton. They’re all united by a terrible, simplistic understanding of genetics and a self-serving philosophy that reinforces their confidence in bad ideas. They are longtermists. Émile Torres explains what that is and why it is bad…although you already knew it had to be bad because of its proponents.

As I have previously written, longtermism is arguably the most influential ideology that few members of the general public have ever heard about. Longtermists have directly influenced reports from the secretary-general of the United Nations; a longtermist is currently running the RAND Corporation; they have the ears of billionaires like Musk; and the so-called Effective Altruism community, which gave rise to the longtermist ideology, has a mind-boggling $46.1 billion in committed funding. Longtermism is everywhere behind the scenes — it has a huge following in the tech sector — and champions of this view are increasingly pulling the strings of both major world governments and the business elite.

But what is longtermism? I have tried to answer that in other articles, and will continue to do so in future ones. A brief description here will have to suffice: Longtermism is a quasi-religious worldview, influenced by transhumanism and utilitarian ethics, which asserts that there could be so many digital people living in vast computer simulations millions or billions of years in the future that one of our most important moral obligations today is to take actions that ensure as many of these digital people come into existence as possible.

In practical terms, that means we must do whatever it takes to survive long enough to colonize space, convert planets into giant computer simulations and create unfathomable numbers of simulated beings. How many simulated beings could there be? According to Nick Bostrom — the father of longtermism and director of the Future of Humanity Institute — there could be at least 10^58 digital people in the future, or a 1 followed by 58 zeros. Others have put forward similar estimates, although as Bostrom wrote in 2003, “what matters … is not the exact numbers but the fact that they are huge.”
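To get a feel for where numbers like 10^58 come from: they are just the product of a few cosmic-scale guesses multiplied together. Here’s a minimal sketch with entirely made-up placeholder values (not Bostrom’s actual derivation); only the shape of the arithmetic matters.

```python
# Toy illustration of how "future digital people" estimates are built: multiply
# a handful of cosmic-scale guesses together. Every value below is a made-up
# placeholder, not a figure from Bostrom.
stars_colonized = 1e11            # guess: stars a spacefaring civilization reaches
power_per_star_watts = 1e26       # guess: usable power output per star
sim_duration_seconds = 1e17       # guess: how long the simulations keep running
joules_per_simulated_life = 1e20  # guess: energy cost to simulate one lifetime

digital_people = (stars_colonized * power_per_star_watts
                  * sim_duration_seconds / joules_per_simulated_life)
print(f"{digital_people:.0e}")    # ~1e34 with these inputs; nudge any guess by a
                                  # few orders of magnitude and the total follows
```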

They are masters of the silly hypothetical — these are the kind of people who spawned the concept of Roko’s Basilisk, “that an all-powerful artificial intelligence from the future might retroactively punish those who did not help bring about its existence”. It’s “the needs of the many outweigh the needs of the few”, where the “many” are padded with 10^58 hypothetical, imaginary people, and you are expected to serve them (or rather, the technocrat billionaire priests who represent them) because they outvote you now.

The longtermists are terrified of something called existential risk, which is anything that they fear would interfere with that progression towards 10^58 hardworking capitalist lackeys working for their vision of a Randian paradise. It’s their boogeyman, and it doesn’t have to actually exist. It’s sufficient that they can imagine it and are therefore justified in taking actions here and now, in the real world, to stop their hypothetical obstacle. Anything fits in this paradigm; it doesn’t matter how absurd.

For longtermists, there is nothing worse than succumbing to an existential risk: That would be the ultimate tragedy, since it would keep us from plundering our “cosmic endowment” — resources like stars, planets, asteroids and energy — which many longtermists see as integral to fulfilling our “longterm potential” in the universe.

What sorts of catastrophes would instantiate an existential risk? The obvious ones are nuclear war, global pandemics and runaway climate change. But Bostrom also takes seriously the idea that we already live in a giant computer simulation that could get shut down at any moment (yet another idea that Musk seems to have gotten from Bostrom). Bostrom further lists “dysgenic pressures” as an existential risk, whereby less “intellectually talented” people (those with “lower IQs”) outbreed people with superior intellects.

Dysgenic pressures, the low IQ rabble outbreeding the superior stock…where have I heard this before? Oh, yeah:

This is, of course, straight out of the handbook of eugenics, which should be unsurprising: the term “transhumanism” was popularized in the 20th century by Julian Huxley, who from 1959 to 1962 was the president of the British Eugenics Society. In other words, transhumanism is the child of eugenics, an updated version of the belief that we should use science and technology to improve the “human stock.”

I like the idea of transhumanism, and I think it’s almost inevitable. Of course humanity will change! We are changing! What I don’t like is the idea that we can force that change into a direction of our choosing, or that certain individuals know what direction is best for all of us.

Among the other proponents of this nightmare vision of the future is Robin Hanson, who takes his colonizer status seriously: “Hanson’s plan is to take some contemporary hunter-gatherers — whose populations have been decimated by industrial civilization — and stuff them into bunkers with instructions to rebuild industrial civilization in the event that ours collapses”. Nick Beckstead is another, who argues that “saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries, … it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.” Or William MacAskill, who thinks that “[i]f scientists with Einstein-level research abilities were cloned and trained from an early age, or if human beings were genetically engineered to have greater research abilities, this could compensate for having fewer people overall and thereby sustain technological progress.”

Just clone Einstein! Why didn’t anyone else think of that?

Maybe because it is naive, stupid, and ignorant.

MacAskill has been the recipient of a totally uncritical review of his latest book in the Guardian. He’s a philosopher, but you’ll be relieved to know he has come up with a way to end the pandemic.

The good news is that with the threat of an engineered pandemic, which he says is rapidly increasing, he believes there are specific steps that can be taken to avoid a breakout.

“One partial solution I’m excited about is called far ultraviolet C radiation,” he says. “We know that ultraviolet light sterilises the surfaces it hits, but most ultraviolet light harms humans as well. However, there’s a narrow-spectrum far UVC specific type that seems to be safe for humans while still having sterilising properties.”

The cost for a far UVC lightbulb at the moment is about $1,000 (£820) per bulb. But he suggests that with research and development and philanthropic funding, it could come down to $10 or even $1 and could then be made part of building codes. He runs through the scenario with a breezy kind of optimism, but one founded on science-based pragmatism.

You know, UVC, at 200-280 nm, is the most energetic form of UV radiation — we don’t get much of it here on planet Earth because it is quickly absorbed by any molecule it touches. It’s busy converting oxygen to ozone as it enters the atmosphere. So sure, yeah, it’s germicidal, and maybe it’s relatively safe for humans because it cooks the outer, dead layers of your epidermis and is absorbed before it can zap living tissue layers, but I don’t think it’s practical (so much for “science-based pragmatism”) in a classroom, for instance. We’re just going to let our kiddos bask in UV radiation for 6 hours a day? How do you know that’s going to be safe in the long term, longtermist?
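To put a number on “most energetic”: photon energy goes as E = hc/λ, so shorter wavelengths pack more punch per photon. A quick sketch (222 nm is the wavelength usually quoted for “far UVC” lamps; that specific value is my assumption, not something from the article):

```python
# Photon energy E = h*c/lambda for a few UV wavelengths, in electron volts.
# 222 nm is the wavelength usually cited for "far UVC" lamps (my assumption,
# not a figure from the article); 254 nm is a standard germicidal UVC line;
# 300 nm is in the UVB range for comparison.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electron volt

for name, wavelength_nm in [("far UVC", 222), ("germicidal UVC", 254), ("UVB", 300)]:
    E = h * c / (wavelength_nm * 1e-9) / eV
    print(f"{name:>15s} {wavelength_nm} nm: {E:.2f} eV per photon")
```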

Quacks have a “breezy kind of optimism”, too, but it’s not a selling point for their nostrums.

If you aren’t convinced yet that longtermism/effective altruism is a poisoned chalice of horrific consequences, look who else likes this idea:

One can begin to see why Elon Musk is a fan of longtermism, or why leading “new atheist” Sam Harris contributed an enthusiastic blurb for MacAskill’s book. As noted elsewhere, Harris is a staunch defender of “Western civilization,” believes that “We are at war with Islam,” has promoted the race science of Charles Murray — including the argument that Black people are less intelligent than white people because of genetic evolution — and has buddied up with far-right figures like Douglas Murray, whose books include “The Strange Death of Europe: Immigration, Identity, Islam.”

Yeah, NO.

Wanking over the Drake Equation, again

Oh, this is so silly. It’s a paper titled A Statistical Estimation of the Occurrence of Extraterrestrial Intelligence in the Milky Way Galaxy. All it is is an exercise in modeling the hypothetical distribution of hypothetical intelligent life in the galaxy, taking into account the age distribution of stars.

In the field of Astrobiology, the precise location, prevalence and age of potential extraterrestrial intelligence (ETI) have not been explicitly explored. Here, we address these inquiries using an empirical galactic simulation model to analyze the spatial-temporal variations and the prevalence of potential ETI within the Galaxy. This model estimates the occurrence of ETI, providing guidance on where to look for intelligent life in the Search for ETI (SETI) with a set of criteria, including well-established astrophysical properties of the Milky Way. Further, typically overlooked factors such as the process of abiogenesis, different evolutionary timescales and potential self-annihilation are incorporated to explore the growth propensity of ETI. We examine three major parameters: 1) the likelihood rate of abiogenesis (λ_A); 2) evolutionary timescales (T_evo); and 3) probability of self-annihilation of complex life (P_ann). We found P_ann to be the most influential parameter determining the quantity and age of galactic intelligent life. Our model simulation also identified a peak location for ETI at an annular region approximately 4 kpc from the Galactic center around 8 billion years (Gyrs), with complex life decreasing temporally and spatially from the peak point, asserting a high likelihood of intelligent life in the galactic inner disk. The simulated age distributions also suggest that most of the intelligent life in our galaxy are young, thus making observation or detection difficult.

<sigh>. Why? I sympathize with the idea of having fun with math, but the Drake equation is simple-minded algebra, not particularly interesting, and isn’t going to produce testable results. The authors seem to have confused their model with reality. This makes no sense:

We also concluded that at the current time of the study, most intelligent life in the Galaxy is younger than 0.5 Gyr, with values of probability parameter for self-annihilation between 0 – 0.01; with a relatively higher value of the annihilation parameter (≥ 0.1), most intelligent life is younger than 0.01 Gyr. As we cannot assume a low probability of annihilation, it is possible that intelligent life elsewhere in the Galaxy is still too young to be observed by us. Therefore, our findings can imply that intelligent life may be common in the Galaxy but is still young, supporting the optimistic aspect for the practice of SETI. Our results also suggest that our location on Earth is not within the region where most intelligent life is settled, and SETI practices need to be closer to the inner Galaxy, preferably at the annulus 4 kpc from the Galactic Center.

But…but…they’re talking about the parameters of their simulation! Their “probability parameter for self-annihilation” is something they set. All of the numbers they plug in are guesstimates, with varying degrees of reasonable justification. Of course they make an optimistic conclusion about SETI! But why should anyone accept their conclusions about an appropriate region for searching for intelligent life? Fudge their parameters a little more and you could shift the zone of likelihood wherever you want. They’ve added nothing to our understanding of the universe, unless you think that multiplying a bunch of numbers by a different bunch of numbers to get a new result is earthshaking.
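To make the exercise concrete: the Drake equation is just N = R* · fp · ne · fl · fi · fc · L, a chain of multiplied guesses (star-formation rate, fraction of stars with planets, habitable planets per system, the fractions developing life, intelligence and detectable technology, and the lifetime of a communicating civilization). Here’s a minimal sketch; every input value below is an arbitrary placeholder, which is exactly the point.

```python
# The Drake equation is a product of guesses; shift any one guess and the
# "answer" shifts right along with it. Every input value below is arbitrary.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    return R_star * f_p * n_e * f_l * f_i * f_c * L

base = dict(R_star=2, f_p=0.5, n_e=1, f_l=0.1, f_i=0.01, f_c=0.1, L=1000)
print("baseline N =", drake(**base))

# "Fudge" one parameter (here, civilization lifetime L) and watch N swing:
for L in (100, 1000, 10000, 100000):
    params = {**base, "L": L}
    print(f"L = {L:>6} years -> N = {drake(**params):g}")
```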

I really have to ask…why don’t reviewers simply stamp papers that are all about manipulating the Drake equation with a big red REJECT label? It would save them time and reduce the clutter in the scientific literature. Is there any value in YAWOD (Yet Another Wank Over Drake)? Who finds these informative?

The Matrixpunk esthetic must die

Sorry, I invented a label. It’s to describe a nonsensical fad that I keep running into. It’s like steampunk: romanticizing the Industrial Revolution by putting gears on your top hat, imagining a world run on the power of steam with gleaming brass fittings, rather than coal miners coughing their lungs out or child labor keeping the textile mills running for 16 hours a day, limbs getting mangled in the machinery. Or cyberpunk, a dark gritty world where cyborgs rule and everyone is plugged into their machines, and the corporations own everything, including those neat eyes you bought. Sticking “-punk” on a term implies to me an unrealistic cultural phenomenon in which everyone adopts a faddish esthetic that they think looks cool, but that quickly dies out, leaving only a relic population that doesn’t realize how deeply uncool they actually are. Try to live on the bleeding edge, and you discover that the razor moves on fast, leaving you lurking on a crusty blood clot.

So…matrixpunk. One movie comes out in 1999, and everyone is wearing trenchcoats, ooohing at deja vu, and talking about how deep it is that we’re just a simulation (and never mind the losers who are gaga over the red pill/blue pill idea — boy, that one sure drew in a lot of pathetic people). It might have been mind-blowing for a few months a score of years ago, but it’s time to move on and recognize that it’s very silly.

However, one of the core ideas that seems to have suckered in some physicists and philosophers is the simulation crap. As a thought experiment, sure, speculate away…it’s when people get carried away and think it might really, really be true that my hackles rise. Apparently, Sabine Hossenfelder thinks likewise.

According to the simulation hypothesis, everything we experience was coded by an intelligent being, and we are part of that computer code. That we live in some kind of computation in and by itself is not unscientific. For all we currently know, the laws of nature are mathematical, so you could say the universe is really just computing those laws. You may find this terminology a little weird, and I would agree, but it’s not controversial. The controversial bit about the simulation hypothesis is that it assumes there is another level of reality where someone or some thing controls what we believe are the laws of nature, or even interferes with those laws.

The belief in an omniscient being that can interfere with the laws of nature, but for some reason remains hidden from us, is a common element of monotheistic religions. But those who believe in the simulation hypothesis argue they arrived at their belief by reason. The philosopher Nick Bostrom, for example, claims it’s likely that we live in a computer simulation based on an argument that, in a nutshell, goes like this. If there are a) many civilizations, and these civilizations b) build computers that run simulations of conscious beings, then c) there are many more simulated conscious beings than real ones, so you are likely to live in a simulation.

Elon Musk is among those who have bought into it. He too has said “it’s most likely we’re in a simulation.” And even Neil DeGrasse Tyson gave the simulation hypothesis “better than 50-50 odds” of being correct.
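Spelled out, the counting step in that argument is simple arithmetic: if there are N_sim simulated minds for every N_real unsimulated one, then the probability that you happen to be one of the real ones is N_real / (N_sim + N_real), which shrinks toward zero as N_sim grows. Of course, that only follows if you have already granted premises (a) and (b), which is where all the work is hiding.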

Yeah, it’s a bunch of smart people (and a few hucksters) falling for the hammer-nail appeal. I’ve got a dazzlingly good hammer, or steam engine, or computer, and therefore the world must be made of nails, driven by the piston of a very big steam engine, all under the control of a master computer. Or, more familiarly among the crackpots I have to deal with, watches are designed and manufactured, therefore the rabbits on that heath must also have been designed and manufactured. But how do you test your supposition? What would look different if the world did not operate analogously to your familiar technology, but was built on different rules? Why, what would it mean if rabbits lacked a boiler and a gear train in their guts?

Hossenfelder does a fine job of taking the whole idea to task. You should read that, not me, but here’s her conclusion.

And that’s my issue with the simulation hypothesis. Those who believe it make, maybe unknowingly, really big assumptions about what natural laws can be reproduced with computer simulations, and they don’t explain how this is supposed to work. But finding alternative explanations that match all our observations to high precision is really difficult. The simulation hypothesis, therefore, just isn’t a serious scientific argument. This doesn’t mean it’s wrong, but it means you’d have to believe it because you have faith, not because you have logic on your side.

Right. I would add that just because you can calculate the trajectory of an object with a computer doesn’t mean its movement is controlled by a computer. Calculable does not equal calculated. The laws of thermodynamics seem to specify the behavior of atoms, for instance, but that does not imply that there is a computer somewhere chugging away to figure out what that carbon atom ought to do next, and creating virtual instantiations of every particle in the universe.
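As a trivial illustration of “calculable”, here is all it takes to compute where a thrown ball lands; the ease of the calculation says nothing about whether anything is actually computing the ball.

```python
import math

# Calculable: a few lines compute where a thrown ball lands. That the path is
# this easy to calculate says nothing about whether anything is "calculating"
# the actual ball.
g = 9.81                             # gravitational acceleration, m/s^2
v0, angle = 20.0, math.radians(45)   # launch speed (m/s) and launch angle

t_flight = 2 * v0 * math.sin(angle) / g
x_range = v0 * math.cos(angle) * t_flight
print(f"lands {x_range:.1f} m away after {t_flight:.1f} s")
```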

Also, Nick Bostrom is an ass.

The simulation hypothesis is a bad argument

Maki Naro and Matthew Francis make an interesting argument against the simulation hypothesis, the idea that we’re all constructs living in a super-duper computer program. I don’t believe in that nonsense at all, but I don’t know that I find their argument particularly persuasive: it rests largely on the idea that the simulation hypothesis implies that undesirable consequences must be the product of intent.

[comic panel: “far from ideal”]

Then I look at the crude simulations we currently produce, like, say, Call of Duty, and I’d have to argue that yeah, if we were the creators of a universal simulator, it would be a shithole universe full of helpless innocents and murderous villains, all intended to be targets of a small number of privileged a-holes with superpowers, and I think that is kind of in alignment with what we see in this world.

I’d also worry about where that argument would lead: to the idea that obviously the wealthy and well-off are the player characters for whom the world was made, while being poor and sick and helpless clearly marks one as an NPC, with no real agency and only the simulated appearance of being a ‘real’ person.

What I find the more useful argument is to go back to the beginning of Naro’s comic, where he quotes Elon Musk:

[comic panel quoting Elon Musk]

That is the wrong question. He asserts “The odds we’re in base reality is one in billions.” Instead we should ask, “what simulated ass did you pull those odds out of?”, because he’s got no rational justification for that claim. We could just as well claim that since we can imagine billions of gods, the odds that we evolved by way of natural mechanisms, rather than some divine fiat, is one in billions. It’s simply faulty reasoning. The responsibility does not lie on me to show why his fantasy is false; it’s on him and Nick Bostrom to demonstrate some actual evidence that it is true.

Then, of course, there’s some babbling about how if the simulation hypothesis is true, we should look for glitches in the matrix, little examples deep inside physics where we detect violations of natural law. This is exactly backwards. First you find observations that don’t fit predictions from existing theory, then you develop alternative theories to accommodate those observations — you don’t first invent an unfounded hypothesis and demand expensive, difficult, unlikely-to-succeed experiments to justify it. Especially since the simulation hypothesis is infinitely flexible and can be contorted to fit any observation made. Is there anything the promoters of this bullshit can imagine that would disprove their hypothesis? That’s what they ought to be discussing, rather than how they can twist quantum physics to support their model.

Then there’s this:

[comic panel: “sim or die”]

While being completely unable to imagine any test of their idea, and building it entirely on a framework of speculation, they still lock themselves into a bogus binary: civilizations will either be able to simulate a universe, or they’ll go extinct. Seriously, dude? You’re living in a non-extinct civilization that can’t simulate a universe, and you can’t imagine any other alternatives?

I also have to point out that all civilizations and species will ultimately go extinct, so this argument is basically between an inevitable and unavoidable (if undesirable) outcome, and accepting your personal, idiosyncratic, weird notion. No problem.

“You should hope that I’m right, because either we’re going to build a chrysalis made of the skins of kitty cats and puppy dogs and metamorphose into angelic beings of pure light, or you’re going to die someday.” I don’t like it, but we’re all going to die someday, and going on a rampage and slaughtering kittens and puppies is not a logical alternative at all.

One thing Naro’s comic does illustrate well, though, is the elitist psychology of tech billionaires.

Silicon Valley creationists

There’s a wave of irrationality sweeping through the over-privileged, ridiculously wealthy world of coddled millionaires and billionaires of Silicon Valley. Some of them seem to think The Matrix was a documentary, and that we’re code living in a simulation, so they like to get together and wank over this idea.

That we might be in a simulation is, Terrile argues, a simpler explanation for our existence than the idea that we are the first generation to rise up from primordial ooze and evolve into molecules, biology and eventually intelligence and self-awareness. The simulation hypothesis also accounts for peculiarities in quantum mechanics, particularly the measurement problem, whereby things only become defined when they are observed.

No, that makes no sense. It exhibits a lack of awareness of modern biology and chemistry; “primordial ooze” is a 19th century hypothesis that did not pan out and is not accepted anymore. This guy is ignorant of what would have to be simulated, and thinks that if we were just created with the appearance of having evolved, he wouldn’t have to understand biochemistry, and therefore it would be simpler for him.

And where have I seen that “created with the appearance of X” phrase before?

If we are simulated, it doesn’t make the problems go away. This would have to be such a complete simulation that it includes all of physics and chemistry and biology; that models quantum chemistry and the mechanics of all the chemical reactions that produced us; that includes viruses and bacteria, and includes all the evolutionary intermediates; that has such a rich back story that it would be easier to have it evolve procedurally than to have some magic meta-universe coder generate it as some kind of arbitrary catalog. It just doesn’t work. It definitely isn’t a simpler explanation — because it would require all of the complexity of the universe plus an invisible layer of conscious entities running the whole show.
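A crude way to see why “simpler” doesn’t follow is to count what the simulator would have to keep track of. The figures below are rough back-of-envelope placeholders (the ~10^80 atoms in the observable universe is the only standard number); the direction of the inequality is the whole point.

```python
# Back-of-envelope: a "complete" simulation needs at least as much state as the
# universe it renders, plus the machinery running it. Placeholder figures; only
# the direction of the inequality matters.
atoms_in_observable_universe = 1e80   # standard order-of-magnitude estimate
bits_per_atom = 100                   # placeholder: state needed per particle

universe_bits = atoms_in_observable_universe * bits_per_atom
simulator_bits = universe_bits + 1e20   # must encode all of that, plus its own
                                        # hardware and software (placeholder)

print(simulator_bits >= universe_bits)  # True: the simulation story is never
                                        # the simpler one
```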

I’ve also heard that phrase that “creation is a simpler explanation than evolution” somewhere before.

I hesitate to say this because I’m no physicist myself, but I don’t think this Terrile fellow understands physics any better than I do, either. The observer effect does not imply a conscious, intelligent, aware observer, as he claims. The observer effect does not mean that there had to be some super-programmer watching over every physical process in order for it to occur.

I don’t think these yahoos even understand what a simulation is.

According to this week’s New Yorker profile of Y Combinator venture capitalist Sam Altman, there are two tech billionaires secretly engaging scientists to work on breaking us out of the simulation.

I think there must be some scientists somewhere who are milking a couple of gullible billionaires out of their cash.

This makes no sense. If we are, for instance, code programmed to respond to simulated stimuli and emit simulated signals into an artificial environment, how can you even talk about “breaking us out”? We are the simulation. Somehow disrupting the model is disrupting us.

If you don’t think this sounds like febrile religious crapola, let’s let Rich Terrile speak some more:

For Terrile, the simulation hypothesis has “beautiful and profound” implications.

First, it provides a scientific basis for some kind of afterlife or larger domain of reality above our world. “You don’t need a miracle, faith or anything special to believe it. It comes naturally out of the laws of physics,” he said.

Second, it means we will soon have the same ability to create our own simulations.

“We will have the power of mind and matter to be able to create whatever we want and occupy those worlds.”

I’ve written some simulations myself — I have some code lying around somewhere that models the interactions between a network of growth cones. We already have the ability to create our own simulations! These guys are all gaga over increasingly complex video games; those are simulations, too.
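For a sense of how low the bar for “a simulation” is, here is a minimal toy sketch of the kind of thing I mean: a few growth cones taking biased random steps up a chemoattractant gradient toward a point source. It’s an illustrative stand-in written for this post, not the actual model mentioned above.

```python
import random

# Toy sketch of a growth-cone model: each cone takes mostly-uphill steps on a
# chemoattractant gradient around a point source, with some random wobble.
# An illustrative stand-in, not the actual network model mentioned above.
def attractant(x, y, source=(10.0, 10.0)):
    dx, dy = source[0] - x, source[1] - y
    return 1.0 / (1.0 + (dx * dx + dy * dy) ** 0.5)

def step(pos, bias=0.8, noise=0.3):
    x, y = pos
    # candidate moves: the eight neighbors plus staying put
    candidates = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    best = max(candidates, key=lambda p: attractant(*p))
    if random.random() > bias:                     # occasionally wander instead
        best = (x + random.uniform(-1, 1), y + random.uniform(-1, 1))
    return (best[0] + random.uniform(-noise, noise),
            best[1] + random.uniform(-noise, noise))

cones = [(0.0, float(i)) for i in range(5)]        # five cones start at the left
for _ in range(40):                                # forty time steps
    cones = [step(c) for c in cones]
print([(round(x, 1), round(y, 1)) for x, y in cones])
```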

The NPCs in World of Warcraft do not have rich inner lives and immortality. They do not have an ‘afterlife’ when I switch off the computer. My growth cone models are not finding meaning in their activities because they are expressions of a higher domain of reality.

I, however, am wondering why the Great Programmer in the Sky filled my virtual reality with so many delusional idiots and oblivious loons. The NPCs in this universe are incredibly stupid.

On Source Code and the ethics of the modern technological era

[I am totally confused. I have not seen the movie Source Code, although it will be playing in Morris next week, yet I have now seen an explanation of the time-travel paradox in the movie by the physicist James Kakalios, and now here is an explanation by an English professor. You guys sort it out. I’m not going to try to read either of them carefully, until I see the movie. Which is already giving me a headache.–pzm]

“On Source Code and the ethics of the modern technological era”

By Brendan Riley

Spoiler Alert: this essay assumes you’ve seen Source Code or don’t mind having the plot revealed.

“Make Every Second Count.” “What Would You Do If You Knew You Only Had A Minute To Live?” These purport to be the dramatic underpinning of the Jake Gyllenhaal thriller, Source Code. But underneath the big-studio whiz-bang lies a story teasing out several ethical questions that haunt the technology we’re just now inventing. The film follows Colter Stevens, an Army pilot who finds himself on a doomed train in someone else’s body with only eight minutes to find and stop the mad bomber. After only a brief respite to speak with his superiors, he goes back and tries again, and again, and again. It’s 12 Monkeys and Quantum Leap meet Groundhog Day, without the piano lessons. Source Code uses a relatively familiar gimmick to tell an exciting story, but under the explosions and Gyllenhaalian studliness, it also prods us to think a bit more about how we should grapple with the new possibilities of the modern era.


Step away from that ladder

We’ve often heard this claim from creationists: “there is no way for genetics to cause an increase in complexity without a designer!”. A recent example has been Michael Egnor’s obtuse caterwauling about it. We (myself included) usually respond in the same way: of course it can. And then we list examples of observations that support the obviously true conclusion that you can get increases in genetic information over time: we talk about gene duplication, gene families, pseudogenes, etc., all well-documented manifestations of natural processes that increase the genetic content of the organism. It happens, it’s clear and simple, get over it, creationists.
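Since the creationists keep insisting it can’t happen, here is a minimal toy sketch of the logic: a genome represented as a list of gene sequences, with occasional duplication and constant mutation. It’s a cartoon, not a real population-genetics model, but the gene count and sequence diversity go up with nothing more than copying and point mutations.

```python
import random

random.seed(1)

# Toy sketch: a genome as a list of gene "sequences". Each generation, a gene
# may be duplicated, and every gene accumulates point mutations. No designer
# required for gene count and sequence diversity to increase.
BASES = "ACGT"

def mutate(gene, rate=0.02):
    return "".join(random.choice(BASES) if random.random() < rate else b for b in gene)

genome = ["".join(random.choice(BASES) for _ in range(30))]  # start with one gene

for generation in range(200):
    if random.random() < 0.05:                 # occasional duplication event
        genome.append(random.choice(genome))   # copy an existing gene
    genome = [mutate(g) for g in genome]       # every copy diverges over time

print(f"{len(genome)} genes, {len(set(genome))} distinct sequences after 200 generations")
```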

Maybe we’ve been missing the point all along, though. The premise of that question from the creationists is what they consider a self-evident fact: that evolution posits a steady increase in complexity from bacteria to Homo sapiens, the deep-rooted idea of the scala naturae, a ladder of complexity from simple to complex. Their argument is that the ladder cannot be climbed, and our response is usually, “sure it can, watch!” when perhaps a better answer, one that is even more damaging to their ideology, is that there is no ladder to climb.

That’s a tougher answer to explain, though, and what makes it even more difficult is that there is a long scientific tradition of pretending the ladder is there. Larry Moran has an excellent article on this problem (Alex has a different perspective), and I want to expand on it a little more.
