Pro-natalists, long-termists, the Church of the Future Police…what a nightmare

Malcolm & Simone Collins, a pair of grinning fascist gobshites

Look to the right. If you ever wanted to see a pair of gold-plated smirking morons, there they are. Those two, Malcolm and Simone Collins, were cooked in the crucible of weird Silicon Valley culture — he was a manager at Google, while she worked for Peter Thiel — and they came up with a techno-fetishist cult built on their misunderstanding of science. It’s a horror story.

Googleplex, the Google HQ in Mountain View, California, is an incubator for new religious movements. Ten years ago, Google hired Ray Kurzweil, prophet of the Singularity. In 2015, Google engineer Anthony Levandowski started The Way of the Future, a church to worship super-intelligent AI. And now we have the Religion of the Future Police, started by former Google manager Malcolm Collins and his wife Simone.

The Collinses came to my attention last month, thanks to a great article by Julia Black in Business Insider, called ‘Can Super Babies Save the World’. They’re the founders of pronatalist.org, and part of a pronatalist movement growing in Silicon Valley and around the world. Pronatalists think the world is facing a population crisis: not too many people, but too few. They think civilization is threatened by falling birth rates, ageing populations and plummeting male fertility (average sperm count fell 50% between 1973 and 2019, and no one is sure why).

OK, I might be persuaded that the carrying capacity of the planet is somewhat greater than the current population of 7.9 billion, but only if there were a more equitable division of wealth — the rich are resource hogs — and a concerted effort to develop more sustainable technologies. Somehow, I don’t think a pair of smug Silicon Valley smegholes would go along with that. I also think that all you have to do is examine the dismal prospects of climate change and environmental decline to see that we can’t sustain the current population, so blithely suggesting we can keep increasing it without consequences is insane.

Also, can I just say that naming your cult the Religion of the Future Police sounds rather fascist?

These people are basically long-termists who are open about one of their goals. They aren’t just trying to expand humanity as a whole, they specifically want their own personal lineage to take over the world.

as long as each of their descendants can commit to having at least eight children for just 11 generations, the Collins bloodline will eventually outnumber the current human population. If they succeed, Malcolm continued, ‘we could set the future of our species’.

Their math is sort of right – eight kids per descendant in each of 11 generations is 8^11, or 8,589,934,592: 8½ billion descendants. Imagine pressuring your children with the requirement that they must have at least 8 kids each! Unfortunately, they don’t carry the calculation through. Since each generation after the first is only going to be half Collins (unless they’re also going to encourage incest), the individuals in that last generation are only going to be roughly 0.1% Collins, assuming there’s no interbreeding at all, which is unlikely given they’ll constitute a population of over 8 billion people. It’s a silly and innumerate endeavor. More than 99.9% of that 11th generation’s ancestry is going to come from all the other people on Earth, and with any luck they’ll completely dilute out the taint of the Collins Family insanity.
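If you want to check the numbers yourself, here is a minimal back-of-the-envelope sketch (my own arithmetic, under the assumption that every descendant pairs off with somebody outside the family) showing both the headcount and the dilution:

```python
# Back-of-the-envelope check of the "eight kids for 11 generations" plan.
# Assumptions (mine, for illustration): generation 1 is the couple's own
# children, every later descendant has kids with a non-Collins partner,
# and none of the descendants interbreed.

GENERATIONS = 11
KIDS_PER_PERSON = 8

descendants = KIDS_PER_PERSON ** GENERATIONS   # 8**11
# Generation 1 descends entirely from the couple; each later generation
# halves the fraction of "Collins" ancestry.
collins_fraction = 0.5 ** (GENERATIONS - 1)    # (1/2)**10

print(f"Generation {GENERATIONS} headcount: {descendants:,}")
print(f"Collins ancestry per person in that generation: {collins_fraction:.4%}")
# Generation 11 headcount: 8,589,934,592
# Collins ancestry per person in that generation: 0.0977%
```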

They’re also open about trying to ‘improve’ the genetics of their children with crude engineering. Very crude. They don’t know what they’re doing at all.

They’re making children through IVF, which can produce as many as a dozen fertilized eggs in a dish; the resulting embryos can then be implanted back into Simone. A cell can be taken from each developing embryo and subjected to sequence analysis, and then they pore over the list of alleles and pick the very best super-baby combination. Or they think they do. We can’t do the kind of prediction of traits from raw genomic data that they are imagining.

Probably the most controversial part of their plan is their embrace of genetic enhancement for their children, something which they say is a secret pursuit among the tech rich. ‘We are the Underground Railroad of ‘Gattaca’ babies and people who want to do genetic stuff with their kids,’ Malcolm said. They used a company called Genomic Prediction, started by physicist Steve Hsu, which offers polygenic risk scores on embryos. Julia Black writes:

Though Genomic Prediction’s “LifeView” test officially offers risk scores only for 11 polygenic disorders — including schizophrenia and five types of cancer — they allowed the Collinses to access the raw genetic data for their own analysis. Simone and Malcolm then took their data export to a company called SelfDecode, which typically runs tests on adult DNA samples, to analyze what the Collinses called “the fun stuff.”

Sitting on the couch, Simone pulled up a spreadsheet filled with red and green numbers. Each row represented one of their embryos from the sixth batch, and the columns a variety of relative risk factors, from obesity to heart disease to headaches. The Collinses’ top priority was one of the most disputed categories: what they called “mental-performance-adjacent traits,” including stress, chronically low mood, brain fog, mood swings, fatigue, anxiety, and ADHD. With a large number of green columns and a score of 1.9, Embryo №3 — aka Titan Invictus (an experiment in nominative determinism) — was selected to become the Collinses’ third child.

Oh god, Stephen Hsu? They’re taking genetic advice from a racist physicist? All the traits they think they are selecting for are complex polygenic behavioral phenomena, products of currently uninterpretable combinatorial interactions. They think they’re being rational and logical by making choices based on numerical scores, but it’s all garbage in, garbage out.
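To be concrete about what that spreadsheet exercise amounts to, here is a toy sketch in Python (entirely hypothetical trait names and numbers, nothing from their actual data): rank the embryos by a composite of per-trait risk scores, then admit even modest uncertainty in each score and watch the ‘winner’ get chosen by noise.

```python
import random

# Hypothetical embryos and made-up relative-risk scores; lower is "better"
# in the logic of a red/green spreadsheet.
embryos = {
    "Embryo 1": {"mood": 1.2, "anxiety": 0.9, "adhd": 1.1},
    "Embryo 2": {"mood": 0.8, "anxiety": 1.0, "adhd": 1.0},
    "Embryo 3": {"mood": 0.9, "anxiety": 0.8, "adhd": 1.2},
}

def pick_best(scores, noise_sd=0.0):
    """Pick the embryo with the lowest composite score; noise_sd jitters each
    trait score to mimic the uncertainty of the underlying predictions."""
    def composite(traits):
        return sum(v + random.gauss(0, noise_sd) for v in traits.values())
    return min(scores, key=lambda name: composite(scores[name]))

print("Taking the scores at face value:", pick_best(embryos))

# Re-run the choice many times with modest per-trait uncertainty:
picks = [pick_best(embryos, noise_sd=0.3) for _ in range(1000)]
for name in embryos:
    print(f"{name} chosen {picks.count(name)} times out of 1000")
```

With those invented numbers the face-value winner is whichever row happens to sum lowest, and once the scores carry realistic error bars the top pick scatters across all three embryos. That’s the garbage-out part.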

Cocky little ignoramuses, aren’t they? Just the sort to base their life choices on a religion.

I wondered how they were paying for all this gee-whiz techno-pseudoscience. Easy. They’re running a religious grift.

Today they live in a farmhouse in Philadelphia with three children and a fourth on the way. They’re launching a VC fund and accepting enrollments for The Collins Institute School for the Gifted, a $20,000-a-year course in homeschooling which teaches students math, coding, how to pitch, how to run successful email campaigns, and other life-skills. They’re also running a match-making service for alpha adults, and they’ve launched their own religion with an elaborate theology described in a GoogleDoc.

They’re selling a $20,000 course in homeschooling! You know, sending your kids to a public school is a better investment — they’ll get qualified teachers who are regularly assessed, and a curriculum set by state standards. I know, sometimes public schools can be awful for many kids, but it’s not because they lack a good framework. It’s because other people can be assholes.

A homeschool run by those two arrogant know-nothings, though, is guaranteed to have an enriched population of privileged assholes.

Their status as confirmed assholes can be determined by reading their Collins Family Theology document. It’s a turgid, pretentious mess that makes sweeping pronouncements about human nature, bolstered by a few citations to short science articles which, I can tell you, he uses inappropriately. Malcolm Collins has a painfully linear and determinist view of genetics. For instance,

Our culture also resists instinctual attachment to biological identity, instead contextualizing children as more “us” than we—our present biological bundles—are. Consider that each biological kid you have is 50% you. As soon as you have more than three kids, there is more of your biological identity (1.5X) in them than there is in you.

By coincidence, I happen to have three kids. That does not mean 1.5 copies of me exist — each one is a unique combination of genes and experience. You cannot quantify “biological identity” in that simple-minded way!

What they’re doing is building a relabeled version of eugenics, based on the same conceptual errors as the original eugenics. They’re making the same horrific categorizations that the Nazis did. If you don’t accept their views, then you’re a husk — something non-human.

They call their religion ‘secular Calvinism’ — interestingly, the scientist JBS Haldane called eugenics ‘scientific Calvinism’ in the 1920s. They believe the ultimate good in the universe is ‘sapience’. More humans = more sapience. More educated and more free-thinking humans = even more sapience. Intelligent, free-thinking humans are better, according to this theology, than conformist dull-witted herd-humans, or what the Collinses call ‘husks’:

we call them a “husk” because when someone halts the process of creative destruction — refusing to explore, weigh, and sometimes to accept new ideas — they stop being meaningfully human (in our House’s view, at least).

When eugenicists say that people who think differently from them are ‘husks’ who have ‘stopped being meaningfully human’, that’s a red flag, folks!

To make it a little bit worse, their kids are taught to idolize the Future Police, an imagined population in the far distant future who are looking back and judging them on how well they worked to bring that future into being.

Future police as a family tradition are also very useful in conveying more complex concepts exemplifying our Secular Calvinist cultural framework (such as predestination, the future that must come to pass, and the Elect) in ways that a child can easily understand. For example, it is easy to explain to a kid why the Future Police have no motivation to protect an individual who lives only for themselves or their immediate community instead of the future of the species and their family. The concept of Future Police can be used to teach kids to constantly consider how their actions impact humanity in both the near and distant future.

Future Police also allow for fun family holiday traditions. For example, at the beginning of each year, our family has a celebration in which we combine common New Year’s traditions (such as making commitments to the future) with Future Police motifs, encouraging our kids to “prove their dedication to the future” to these distant descendents in order to curry their favor and secure gifts and privileges.

“Fun.” And then they all join in a rousing chorus of Tomorrow Belongs to Me. I call it terrifying children with threats of the Future Police judging them for failing to curry favor. This is just the same old fucked-up Christian guilt-trip.

Sure, Googleplex is an incubator for new religious movements, but they’re all loony as hell, all seem to converge on the same ol’ authoritarian cultishness, and I hope they all die and fade away.

2:52

Émile P. Torres just pointed out the existence of this five-year-old video.

Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn discuss with Max Tegmark (moderator) what likely outcomes might be if we succeed in building human-level AGI, and also what we would like to happen.

It’s 10 right-leaning white men dressed in black suits who have a history of stirring up fear to their own profit (or, in the case of Tallinn for instance, dismissing credible concerns about climate change for his own profit) clumsily sharing too few microphones to make up some science fiction shit. The panel is titled Superintelligence: Science or Fiction? | Elon Musk & Other Great Minds. I’m done already after seeing the title and lineup, but I’ve always wanted to witness hell, so I watched a little of it. Very little of it.

I made it to the 2:52 mark before I said, “aww, hell no, fuck this” and bailed out. Years of dealing with creationists has given me a high tolerance for bad bullshit, but this was too much for me. How far can you get?

I guess I just can’t be happy with bad data

I used to be a fan of Steven Pinker’s work. He speaks fluent academese, he just sounds so reasonable, and his message of optimism is something I want to be true. I’d love to be able to go to my grave thinking the world was going to be a better place for my grandchildren and great-grandchildren and all the children of the world. I wanted to believe.

O sweet irony, that an atheist could be tempted by hope and faith.

But as I read more, I became disenchanted. Hope is great, but it has to be backed by reason and evidence, and it became obvious that Pinker is kind of the Norman Vincent Peale of atheism, and that there wasn’t any substance to him — he starts with a happy belief and works to fill in the gaps in the evidence with cherry-picked data and his own indefensible interpretations.

So now he’s written a book about the Enlightenment, reviewed by Peter Harrison. It is not a kind review.

The Enlightenment may seem an ambitious topic for a cognitive psychologist to take up from scratch. Numerous historians have dedicated entire careers to it, and there remains a considerable diversity of opinion about what it was and what its impact has been. But from this and previous work we get intimations of why Pinker thinks he is the person for the job. Historians have laboured under the misapprehension that the key figures of the Enlightenment were mostly philosophers of one stripe or another. Pinker has made the anachronistic determination that, in fact, they were all really scientists – indeed, “cognitive neuroscientists” and “evolutionary psychologists.”

In short, he thinks that they are people like him and that he is thus possessed of privileged insights into their thought denied to mere historians. The latter must resort to careful reading and fraught interpretation in lieu of being able directly to channel what Enlightenment thinkers really thought.

Uh-oh. This reminds me of that ghastly essay Pinker wrote that made me recoil in horror, it was so bad, so egocentric, so ignorant of the humanities and social sciences, I bet it was the foundation of his new book. The book that gets this summary:

For the sceptical reader the whole strategy of the book looks like this. Take a highly selective, historically contentious and anachronistic view of the Enlightenment. Don’t be too scrupulous in surveying the range of positions held by Enlightenment thinkers – just attribute your own views to them all. Find a great many things that happened after the Enlightenment that you really like. Illustrate these with graphs. Repeat. Attribute all these good things to your version of the Enlightenment. Conclude that we should emulate this Enlightenment if we want the trend lines to keep heading in the right direction. If challenged at any point, do not mount a counter-argument that appeals to actual history, but choose one of the following labels for your critic: religious reactionary, delusional romantic, relativist, postmodernist, paid up member of the Foucault fan club.

For their part, historians have found the task of tracing the legacy of the Enlightenment more difficult, not least because even characterising what the Enlightenment was has proven challenging. It is now commonplace to speak of multiple Enlightenments and hence multiple and sometime conflicting legacies. Obviously, moreover, not everything that came after the Enlightenment has been sweetness and, well, light. Edmund Burke and G.W.F. Hegel, for example, drew direct connexions between the French Enlightenment and the reign of terror. In the twentieth century the German-Jewish philosophers Theodor Adorno and Max Horkheimer described what they called “the dialectic of the Enlightenment” – a mixed inheritance that included the technical mastery of nature along with a conspicuous absence of the moral insights that would prevent that mastery being turned to barbarous ends. In their view, this led ultimately to the horrors of Nazism.

That bit about picking things you like and stuffing them into graphs reminds me of someone else: maybe Pinker is actually the hybridized clone of Norman Vincent Peale and Ray Kurzweil.

I think, to be a good honest atheist and scientist, I have to respect the work of philosophers and historians and all those people who have deep domains of expertise that I lack, and recognize that people who say things I wish were true, yet disrespect and don’t even acknowledge the historical breadth of humanity’s thought, are probably full of shit. Or at least the living personification of the Alexander Pope poem:

A little learning is a dang’rous thing;
Drink deep, or taste not the Pierian spring:
There shallow draughts intoxicate the brain,
And drinking largely sobers us again.

A little humility would help, and you don’t approach the Pierian spring with a sippy straw.

A damn good critique of Charles Murray’s awful oeuvre

When many of us criticize Charles Murray, we tend to focus on his unwarranted extrapolations from correlations; it’s easy to get caught up in the details and point out esoteric statistical flaws that take an advanced degree to understand, and are even more challenging to explain. It’s also easy for the other side to trot out “experts” who are good at burying you in yet more statistical bafflegab to muddy the waters. Nathan J. Robinson makes a 180° turnabout to explain why Charles Murray is odious, and maybe goes a little too far in pardoning the bad science, but he does refocus our attention on the real problem: that Murray’s argument is fundamentally a racist argument, built on racist assumptions, and it can’t be reformed by more clever statistics.

Robinson drills right down to the core of Murray’s book, and highlights what we should find far more offensive than an abuse of abstract statistical calculations. He distills The Bell Curve down to these three premises.

  1. Black people tend to be dumber than white people, which is probably partly why white people tend to have more money than black people. This is likely to be partly because of genetics, a question that would be valid and useful to investigate.
  2. Black cultural achievements are almost negligible. Western peoples have a superior tendency toward creating “objectively” more “excellent” art and music. Differences in cultural excellence across groups might also have biological roots.
  3. We should return to the conception of equality held by the Founding Fathers, who thought black people were subhumans. A situation in which white people are politically and economically dominant over black people is natural and acceptable.

He backs up these summaries with quotes from Murray and Herrnstein, and he has some criticism for Murray’s critics as well.

Murray’s opponents occasionally trip up, by arguing against the reality of the difference in test scores rather than against Murray’s formulation of the concept of intelligence. The dubious aspect of The Bell Curve’s intelligence framework is not that it argues there are ethnic differences in IQ scores, which plenty of sociologists acknowledge. It is that Murray and Herrnstein use IQ, an arbitrary test of a particular set of abilities (arbitrary in the sense that there is no reason why a person’s IQ should matter any more than their eye color, not in the sense that it is uncorrelated with economic outcomes) as a measure of whether someone is smart or dumb in the ordinary language sense. It isn’t, though: the number of high-IQ idiots in our society is staggering. Now, Murray and Herrnstein say that “intelligence” is “just a noun, not an accolade,” generally using the phrase “cognitive ability” in the book as a synonym for “intelligent” or “smart.” But because they say explicitly (1) that “IQ,” “intelligent,” and “smart” mean the same thing, (2) that “smart” can be contrasted with “dumb,” and (3) the ethnic difference in IQ scores means an ethnic difference in intelligence/smartness, it is hard to see how the book can be seen as arguing anything other than that black people tend to be dumber than white people, and Murray and Herrnstein should not have been surprised that their “black people are dumb” book landed them in hot water. (“We didn’t say ‘dumb’! We just said dumber! And only on average! And through most of the book we said ‘lacking cognitive ability’ rather than ‘dumb’!”)

I have to admit, I’m guilty. When one of these wankers pops up to triumphantly announce that these test scores show that black people are inferior, I tend to reflexively focus on the interpretation of test scores and the overloaded concept of IQ and the unwarranted expansion of a number to dismiss people, when maybe, if I were more the target of such claims, I would be more likely to take offense at the part where he’s saying these human beings are ‘lacking in cognitive ability’, or whatever other euphemism they’re using today.

The problem isn’t that Murray got the math wrong (although bad assumptions make for bad math). The problem is that he abuses math to justify prior racist beliefs, exaggerating minor variations in measurements of arbitrary population groups to warrant bigotry against certain subsets. That ought to be the heart of our objection, that he attaches strong value judgments to numbers he has fished out of a great pool of complexity.

In part, too, the objection ought to be that his numbers somehow always conveniently support existing racist biases in our society: he consistently twists the interpretations to prop up ideas that would have been welcomed in the antebellum South.

We should be clear on why the Murray-Herrnstein argument was both morally offensive and poor social science. If they had stuck to what is ostensibly the core claim of the book, that IQ (whatever it is) is strongly correlated with one’s economic status, there would have been nothing objectionable about their work. In fact, it would even have been (as Murray himself has pointed out) totally consistent with a left-wing worldview. “IQ predicts economic outcomes” just means “some particular set of mental abilities happen to be well-adapted for doing the things that make you successful in contemporary U.S. capitalist society.” Testing for IQ is no different from testing whether someone can play the guitar or do 1000 jumping jacks or lick their elbow. And “the people who can do those certain valued things are forming a narrow elite at the expense of the underclass” is a conclusion left-wing people would be happy to entertain. After all, it’s no different than saying “people who have the good fortune to be skilled at finance are making a lot of money and thereby exacerbating inequality.” Noam Chomsky goes further and suggests that if we actually managed to determine the traits that predicted success under capitalism, more relevant than “intelligence” would probably be “some combination of greed, cynicism, obsequiousness and subordination, lack of curiosity and independence of mind, self-serving disregard for others, and who knows what else.”

I also learned something new. I read The Bell Curve years ago when it first came out, and it did effectively turn me away from ever wanting to hear another word from Charles Murray. But he has written other books! He also wrote Human Accomplishment: The Pursuit of Excellence in the Arts and Sciences, 800 B.C. to 1950, which Robinson uses to further reveal Murray’s implicit bigotry.

Human Accomplishment is one of the most absurd works of “social science” ever produced. If you want evidence proving Murray a “pseudoscientist,” it is Human Accomplishment rather than The Bell Curve that you should turn to. In it, he attempts to prove using statistics which cultures are objectively the most “excellent” and “accomplished,” demonstrating mathematically the inherent superiority of Western thought throughout the arts and sciences.

Oh god. I can tell what’s coming. Pages and pages of cherry-picking, oodles of selection bias that Murray will use to pronounce on cultural trends, when all his elaborate statistics do is take the measure of the slant of his own brain. Pseudoscientists do this all the time; another example would be Ray Kurzweil, who has done a survey of history in which he selects which bits he wants to plot to support his claim of accelerating technological progress leading to his much-desired Singularity. Murray does the same thing to “prove” his prior assumption that black people “lack cognitive ability”.

How does he do this? By counting “significant” people. (First rule of pseudoscientists: turn your biases into numbers. That way, if anyone disagrees, you can accuse them of being anti-math.)

Murray purports to show that Europeans have produced the most “significant” people in literature, philosophy, art, music, and the sciences, and then posits some theories as to what makes cultures able to produce better versus worse things. The problem that immediately arises, of course, is that there is no actual objective way of determining a person’s “significance.” In order to provide such an “objective” measure, Murray uses (I am not kidding you) the frequency of people’s appearances in encyclopedias and biographical dictionaries. In this way, he says, he has shown their “eminence,” therefore objectively shown their accomplishments in their respective fields. And by then showing which cultures they came from, he can rank each culture by its cultural and scientific worth.

Then it just gets hilariously bad. Murray decides to enumerate accomplishment in music, of all things, by first dismissing everything produced since 1950 (the last half century has failed to produce “an abundance of timeless work”, don’t you know), and then, in his list of great musical accomplishment, does not include any black composers, except Duke Ellington. Robinson provides a brutal takedown.

Before 1950, black people had invented gospel, blues, jazz, R&B, samba, meringue, ragtime, zydeco, mento, calypso, and bomba. During the early 20th century, in the United States alone, the following composers and players were active: Ma Rainey, W.C. Handy, Scott Joplin, Louis Armstrong, Jelly Roll Morton, James P. Johnson, Fats Waller, Count Basie, Cab Calloway, Art Tatum, Charlie Parker, Charles Mingus, Lil Hardin Armstrong, Bessie Smith, Billie Holliday, Sister Rosetta Tharpe, Mahalia Jackson, J. Rosamond Johnson, Ella Fitzgerald, John Lee Hooker, Coleman Hawkins, Leadbelly, Earl Hines, Dizzy Gillespie, Miles Davis, Fats Navarro, Roy Brown, Wynonie Harris, Blind Lemon Jefferson, Blind Willie Johnson, Robert Johnson, Son House, Dinah Washington, Thelonious Monk, Muddy Waters, Art Blakey, Sarah Vaughan, Memphis Slim, Skip James, Louis Jordan, Ruth Brown, Big Jay McNeely, Paul Gayten, and Professor Longhair. (This list is partial.) When we talk about black American music of the early 20th century, we are talking about one of the most astonishing periods of cultural accomplishment in the history of civilization. We are talking about an unparalleled record of invention, the creation of some of the most transcendently moving and original artistic material that has yet emerged from the human mind. The significance of this achievement cannot be overstated. What’s more, it occurred without state sponsorship or the patronage of elites. In fact, it arose organically under conditions of brutal Jim Crow segregation and discrimination, in which black people had access to almost no mainstream institutions or material resources.

Jesus. This ought to be the approach we always take to Charles Murray: not that his calculations and statistics are a bit iffy, but that he can take a look at the music of the 20th century and somehow argue that contributions by the black community were inferior and not even worth mentioning. His biases are screamingly loud.

Unfortunately, while I suffered through The Bell Curve, this one sounds so outrageously stupid that I’m not at all tempted to read Human Accomplishment, and I’m a guy who reads creationist literature to expose its flaws. Murray is more repulsive than even Kent Hovind (Hovind should not take that as an accolade, since that’s an awfully low bar).

What happened to 2029?

Ray Kurzweil has been consistent over the years: he has these contrived graphs full of fudged data that tell him that The Singularity will arrive in 2029. 2029 is the magic date. We all just have to hang in there for 12 more years and then presto, immortality, incomprehensible wisdom, the human race rises to a new plane of existence.

Except…

2029 is getting kind of close. The Fudgening has begun!

The new date is 2045. No Rapture of the Nerds until I’m 88 years old. So disappoint.

Kurzweil continues to share his visions for the future, and his latest prediction was made at the most recent SXSW Conference, where he claimed that the Singularity – the moment when technology becomes smarter than humans – will happen by 2045.

Typical. You’ve got a specific prediction, you can see that it’s not coming true, so you start adjusting the details, maybe you change your mind on a few things (but it’s OK if you do it in advance, that way it doesn’t count against you), and you do everything you can to keep your accuracy score up, to fool the gullible.

Yeah, he’s got a score. 86%.

With a little wiggle room given to the timelines the author, inventor, computer scientist, futurist, and director of engineering at Google provides, a full 86 percent of his predictions – including the fall of the Soviet Union, the growth of the internet, and the ability of computers to beat humans at chess – have come to fruition.

Do any of those things count as surprising predictions in any way? They all sound rather mundane to me. The world is going to get warmer, there will be wars, we’ll have substantial economic ups and downs, some famous people will die, some notorious regimes will collapse, oceans rise, empires fall. Generalities do not impress me as indicative of deep insight.

Furthermore, that number is suspicious: you wouldn’t want to say 100%, because nobody would believe that. And you don’t want to say anything near 50%, because that sounds too close to chance. So you pick a number in between…say, somewhere between 75% and 90%. Wait, where did I get that range? That’s what psychics claim.

So, how accurate are psychics on an average? There are very few psychics who are 99% accurate in their predictions. The range in accuracy for the majority of real psychic readings are between 75% and 90%.

He’s using the standard tricks of the con man, ones that skeptics are supposed to be able to recognize and deal with. So how has Kurzweil managed to bamboozle so many people in the tech community?

I’m going to guess that being predisposed to libertarian fantasies and being blinded by your own privilege tends not to make one very skeptical or self-aware. Either that, or Kurzweil is very, very good at fooling people. I’m going to go with the former.

Finally! A perspective on AI I can agree with!

This Kevin Kelly dude has written a summary that I find fully compatible with the biology. Read the whole thing — it’s long, but it starts with a short summary that is easily digested.

Here are the orthodox, and flawed, premises of a lot of AI speculation.

  1. Artificial intelligence is already getting smarter than us, at an exponential rate.
  2. We’ll make AIs into a general purpose intelligence, like our own.
  3. We can make human intelligence in silicon.
  4. Intelligence can be expanded without limit.
  5. Once we have exploding superintelligence it can solve most of our problems.

That’s an accurate summary of the assumptions of the typical tech dudebro. Read a Ray Kurzweil book; check out the YouTube chatter about AI; look at where venture capital money is going; read some SF or watch a movie about AI. These really are the default assumptions that allow people to think AI is a terrible threat that is simultaneously going to lead to the Singularity and SkyNet. I think (hope) that most real AI researchers aren’t sunk into this nonsense, and are probably more aware of the genuine concerns and limitations of the field, just as most biologists roll their eyes at the magic molecular biology we see portrayed on TV.

And here are Kelly’s summary rebuttals:

  1. Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
  2. Humans do not have general purpose minds, and neither will AIs.
  3. Emulation of human thinking in other media will be constrained by cost.
  4. Dimensions of intelligence are not infinite.
  5. Intelligences are only one factor in progress.

My own comments:

  1. The whole concept of IQ is a crime against humanity. It may have once been an interesting, tentative hypothesis (although even in the beginning it was a tool to demean people who weren’t exactly like English/American psychometricians), but it has long outlived its utility and now is only a blunt instrument to hammer people into a simple linear mold. It’s also even more popular with racists nowadays.

  2. The funny thing about this point is that the same people who think IQ is the bee’s knees also think that a huge inventory of attitudes and abilities and potential is hard-coded into us. Their idea of humanity is inflexible and the opposite of general purpose.

  3. Yeah, why? Why would we want a computer that can fall in love, get angry, crave chocolate donuts, have hobbies? We’d have to intentionally shape the computer mind to have similar predilections to the minds of apes with sloppy chemistry. This might be an interesting but entirely non-trivial exercise for computer scientists, but how are you going to get it to pay for itself?

  4. One species on earth has human-like intelligence, and it took 4 billion years (or 500 million, if you’d rather start the clock at the emergence of complex multicellular life) of evolution to get here. Even in our lineage the increase hasn’t been linear, but in short, infrequent steps. Either intelligence beyond a certain point confers no particular advantage, or increasing intelligence is more difficult and has a lot of tradeoffs.

  5. Ah, the ideal of the Vulcan Spock. A lot of people — including a painfully large fraction of the atheist population — have this idea that the best role model is someone emotionless and robot-like, with a calculator-like intelligence. If only we could all weigh all the variables, we’d all come up with the same answer, because values and emotions are never part of the equation.

It’s a longish article at 5,000 words, but in comparison to that 40,000-word abomination on AI from WaitButWhy it’s a reasonable read, and, most importantly and in contrast, it’s actually right.

The gospel according to St Ray

Déjà vu, man. Transhumanism is just Christian theology retranslated. An ex-Christian writes about her easy transition from dropping out of Bible school to adopting Ray Kurzweil’s “bible”, The Age of Spiritual Machines.

Many transhumanists such as Kurzweil contend that they are carrying on the legacy of the Enlightenment – that theirs is a philosophy grounded in reason and empiricism, even if they do lapse occasionally into metaphysical language about “transcendence” and “eternal life”. As I read more about the movement, I learned that most transhumanists are atheists who, if they engage at all with monotheistic faith, defer to the familiar antagonisms between science and religion. “The greatest threat to humanity’s continuing evolution,” writes the transhumanist Simon Young, “is theistic opposition to Superbiology in the name of a belief system based on blind faith in the absence of evidence.”

Yet although few transhumanists would likely admit it, their theories about the future are a secular outgrowth of Christian eschatology. The word transhuman first appeared not in a work of science or technology but in Henry Francis Carey’s 1814 translation of Dante’s Paradiso, the final book of the Divine Comedy. Dante has completed his journey through paradise and is ascending into the spheres of heaven when his human flesh is suddenly transformed. He is vague about the nature of his new body. “Words may not tell of that transhuman change,” he writes.

I’ve never trusted transhumanism. There’s a grain of truth to it — we will change over time, and technology is a force in our lives — but there’s this weird element of dogmatism where they insist that they have seen the future, that it will happen just so, and that if you don’t believe in the Singularity you are anti-science. Or if you don’t believe in Superbiology, whatever the hell that is.

Anyway, read the whole thing. I’m currently at a conference at HHMI, and we’re shortly going to get together to talk about real biology. I don’t think the super kind is going to be anywhere on the agenda.

More money than sense

Take one terrible NY Times pundit who lives on an alien planet of her own, and toss her into the esoteric hothouse world of Silicon Valley, and all you’re going to get is a hot mess: a weird dive into the delusions of very rich smart people with no reality brakes to slow them down. Maureen Dowd talks to Elon Musk and other pretentious luminaries. It’s painful if you prioritize critical thinking.

They are two of the most consequential and intriguing men in Silicon Valley who don’t live there. Hassabis, a co-founder of the mysterious London laboratory DeepMind, had come to Musk’s SpaceX rocket factory, outside Los Angeles, a few years ago. They were in the canteen, talking, as a massive rocket part traversed overhead. Musk explained that his ultimate goal at SpaceX was the most important project in the world: interplanetary colonization.

Hassabis replied that, in fact, he was working on the most important project in the world: developing artificial super-intelligence. Musk countered that this was one reason we needed to colonize Mars—so that we’ll have a bolt-hole if A.I. goes rogue and turns on humanity. Amused, Hassabis said that A.I. would simply follow humans to Mars.

In a world overpopulated with billions of people, where climate change is a looming threat, where all those people are a petri dish for cultivating new diseases, where the majority live in poverty, where in many places clean water is a struggle to find, where the most militarily powerful nation has just elected an incompetent, narcissistic clown to be in charge, two men sit down to talk. One says the most important project in the world is to put a tiny number of people on a barren rock. The other says the most important project is to create more powerful computers that can think on their own.

And then the two of them start arguing over the threat of artificial intelligences enslaving, or liberating, humanity. These intelligences don’t exist, and may not exist, and will definitely not exist in the form these smart guys are imagining. It is the grown-up, over-paid version of two children arguing over who would win in a fight, Darth Vader or Magneto? The Millennium Falcon or the Starship Enterprise? Jesus or Buddha?

And then Ray Kurzweil shows up.

Fuck me.

Dowd just parrots these absurd conversations and doesn’t offer any critical perspectives, and lord help us, the participants certainly don’t. Can we just lock them all in a well-padded room with an assortment of action figures and tell them to get to work to resolve the most important dispute in the universe, which toy is powerfulest?

Or could we at least have one skeptic in this mess to try and focus the discussions on something real?

If everyone from Yong to Zimmer says it’s true, it must be

You must have already read the tragic news: scientists have determined that I am doomed to die by 2072, when I turn 115, if not sooner. This was figured out by analyzing demographic data and seeing that people seem to hit a ceiling around age 115; the mean life expectancy keeps shifting upwards, but the maximum age seems to have reached a plateau. Carl Zimmer gives the clearest explanation of the methodology behind this conclusion, and Ed Yong gives a good description of the phenomenon of death in the very old.

The ceiling is probably hardwired into our biology. As we grow older, we slowly accumulate damage to our DNA and other molecules, which turns the intricate machinery of our cells into a creaky, dysfunctional mess. In most cases, that decline leads to diseases of old age, like cancer, heart disease, or Alzheimer’s. But if people live past their 80s or 90s, their odds of getting such illnesses actually start to fall—perhaps because they have protective genes. Supercentenarians don’t tend to die of major diseases—Jeanne Calment died of natural causes—and many of them are physically independent even at the end of their lives. But they still die, “simply because too many of their bodily functions fail,” says Vijg. “They can no longer continue to live.”

I agree with all that. I think there is an upper bound to how long meat can keep plodding about on Earth before it reaches a point of critical failure. But I’m going to disagree with Yong on one thing: he goes on to explain it in evolutionary terms, with the standard story that there hasn’t been selection for longevity genes, because all the selection has been for genes for vigor in youth, which may actually have the side effect of accelerating mortality.

This is true, as far as it goes. But I think we’re seeing a different phenomenon: a physico-chemical limitation that isn’t going to be avoided, no matter how refined and potent ‘longevity genes’ become.

When organized pieces of matter are stressed or experience wear, their level of organization decreases. You simply can’t avoid that. Expose a piece of metal in a car to prolonged periods of vibration and it will eventually fail, not because it was badly designed, but because its nature and the nature of its activity dictate that it will eventually, inevitably break.

Likewise a soap bubble is ephemeral by its nature. The same fluid properties that enable it to be blown doom it — the film will flow over time, it will tend to thin at the top, and eventually it will pop. There’s no way to suspend the physics of a soap bubble to let it last significantly longer, shy of freezing it and defeating the whole point of a soap bubble.

In people, we have a name for this wear and tear and stress: it’s called “living”. All these different things we do that make it worth existing are also fundamentally damaging — there’s no escaping the emergence of an ultimate point of failure.

115 years sounds like a reasonable best estimate from the current evidence. I’d also point out that this doesn’t mean we won’t find some common critical failure point and a way for medical science to push the limit up a year or five…but every such patch adds another layer of complexity to the system, and represents another potential point of failure. We’re just going to asymptotically approach the upper bound, whatever it is.
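A toy way to picture that asymptotic approach (my own illustration, not a model from the Zimmer or Yong pieces): suppose each successive medical patch buys a smaller lifespan extension than the one before, because every fix adds complexity and leaves more remaining failure points. The ceiling creeps upward but converges instead of running away.

```python
# Toy illustration (mine, invented numbers): if each successive medical
# "patch" extends the practical maximum lifespan by half as much as the
# previous one, the ceiling keeps rising but converges toward an asymptote
# instead of running off toward Kurzweilian immortality.

BASE_LIMIT = 115.0   # years: the apparent current plateau
FIRST_GAIN = 2.0     # years bought by the first hypothetical patch
DECAY = 0.5          # each patch buys half as much as the one before

limit, gain = BASE_LIMIT, FIRST_GAIN
for patch in range(1, 11):
    limit += gain
    print(f"after patch {patch:2d}: practical ceiling ≈ {limit:.2f} years")
    gain *= DECAY

# Geometric series: the ceiling approaches 115 + 2 / (1 - 0.5) = 119 years,
# no matter how many patches you stack on.
```

The particular numbers are invented; the point is only that a series of diminishing fixes converges on a bound rather than abolishing it.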

That’s OK. I’ll take 115 years. It also helps that it’s going to really piss off Aubrey de Grey and Ray Kurzweil.

Deliver us from the fury of the cyborgs and grant us the peace of cyberspace, O Lord

David Brin reviews some recent books on the future of artificial intelligence. He’s more optimistic than I am. For one, I think most of the AI pundits are little better than glib con men, so any survey of the literature should consist mostly of culling all the garbage. No, really, please don’t bring up Kurzweil again. Also, any obscenely rich Silicon Valley pundit who predicts a glorious future of infinite wealth because technology can just fuck right off.

But there’s also some stuff I agree with. People who authoritatively declare that this is how the future will be, and that is how people will respond to it, are not actually being authoritative, because they won’t be there, but are being authoritarian. We set the wheel rolling, and we hope that we aren’t setting it on a path to future destruction, but we don’t get to dictate to future generations how they should deal with it. To announce that we’ve created a disaster and that our grandchildren will react by creating a dystopian nightmare world sells them short, and pretending that they’ll use the tools we have generously given them to create a glorious bright utopia is stealing all the credit. People will be people. Finger-wagging from the distant past will have zero or negative influence.

Across all of those harsh millennia, people could sense that something was wrong. Cruelty and savagery, tyranny and unfairness vastly amplified the already unsupportable misery of disease and grinding poverty. Hence, well-meaning men and women donned priestly robes and… preached!

They lectured and chided. They threatened damnation and offered heavenly rewards. Their intellectual cream concocted incantations of either faith or reason, or moral suasion. From Hindu and Buddhist sutras to polytheistic pantheons to Judeao-Christian-Muslim laws and rituals, we have been urged to behave better by sincere finger-waggers since time immemorial. Until finally, a couple of hundred years ago, some bright guys turned to all the priests and prescribers and asked a simple question:

“How’s that working out for you?”

In fact, while moralistic lecturing might sway normal people a bit toward better behavior, it never affects the worst human predators, parasites and abusers –– just as it won’t divert the most malignant machines. Indeed, moralizing often empowers them, offering ways to rationalize exploiting others.

Beyond artificial intelligence, a better example might be climate change — that’s one monstrous juggernaut we’ve set rolling into the future. The very worst thing we can do is start lecturing posterity about how they should deal with it, since we don’t really know all the consequences that are going to arise, and it’s rather presumptuous for us to create the problem, and then tell our grandchildren how they should fix it. It’s better that we set an example and address the problems that emerge now, do our best to minimize foreseeable consequences, and trust the competence of future generations to cope with their situations, as driven by necessities we have created.

They’re probably not going to thank us for any advice, no matter how well-meaning, and are more likely to curse us for our neglect and laziness and exploitation of the environment. If you really care about the welfare of future generations, you’ll do what you can now, not tell them how they’re supposed to be.

The AI literature comes across as extremely silly, too.

What will happen as we enter the era of human augmentation, artificial intelligence and government-by-algorithm? James Barrat, author of Our Final Invention, said: “Coexisting safely and ethically with intelligent machines is the central challenge of the twenty-first century.”

Jesus. We don’t have these “intelligent machines” yet, and may not — I think AI researchers always exaggerate the imminence of their breakthroughs, and the simplicity of intelligence. So this guy is declaring that the big concern of this century, which is already 1/6th over, is an ethical crisis in dealing with non-existent entities? The comparison with religious authorities is even more apt.

I tell you what. Once we figure out how to coexist safely and ethically with our fellow human beings, then you can pontificate on how to coexist safely and ethically with imaginary androids.