A damn good critique of Charles Murray’s awful oeuvre

When many of us criticize Charles Murray, we tend to focus on his unwarranted extrapolations from correlations; it’s easy to get caught up in the details and point out esoteric statistical flaws that take an advanced degree to understand, and are even more challenging to explain. It’s also easy for the other side to trot out “experts” who are good at burying you in yet more statistical bafflegab to muddy the waters. Nathan J. Robinson takes the opposite approach to explain why Charles Murray is odious, and maybe goes a little too far in pardoning the bad science, but he does refocus our attention on the real problem: Murray’s argument is fundamentally a racist argument, built on racist assumptions, and it can’t be reformed by cleverer statistics.

Robinson drills right down to the core of Murray’s book, and highlights what we should find far more offensive than an abuse of abstract statistical calculations. He distills The Bell Curve down to these three premises.

  1. Black people tend to be dumber than white people, which is probably partly why white people tend to have more money than black people. This is likely to be partly because of genetics, a question that would be valid and useful to investigate.
  2. Black cultural achievements are almost negligible. Western peoples have a superior tendency toward creating “objectively” more “excellent” art and music. Differences in cultural excellence across groups might also have biological roots.
  3. We should return to the conception of equality held by the Founding Fathers, who thought black people were subhumans. A situation in which white people are politically and economically dominant over black people is natural and acceptable.

He backs up these summaries with quotes from Murray and Herrnstein, and he takes Murray’s critics to task as well.

Murray’s opponents occasionally trip up, by arguing against the reality of the difference in test scores rather than against Murray’s formulation of the concept of intelligence. The dubious aspect of The Bell Curve‘s intelligence framework is not that it argues there are ethnic differences in IQ scores, which plenty of sociologists acknowledge. It is that Murray and Herrnstein use IQ, an arbitrary test of a particular set of abilities (arbitrary in the sense that there is no reason why a person’s IQ should matter any more than their eye color, not in the sense that it is uncorrelated with economic outcomes) as a measure of whether someone is smart or dumb in the ordinary language sense. It isn’t, though: the number of high-IQ idiots in our society is staggering. Now, Murray and Herrnstein say that “intelligence” is “just a noun, not an accolade,” generally using the phrase “cognitive ability” in the book as a synonym for “intelligent” or “smart.” But because they say explicitly (1) that “IQ,” “intelligent,” and “smart” mean the same thing, (2) that “smart” can be contrasted with “dumb,” and (3) the ethnic difference in IQ scores means an ethnic difference in intelligence/smartness, it is hard to see how the book can be seen as arguing anything other than that black people tend to be dumber than white people, and Murray and Herrnstein should not have been surprised that their “black people are dumb” book landed them in hot water. (“We didn’t say ‘dumb’! We just said dumber! And only on average! And through most of the book we said ‘lacking cognitive ability’ rather than ‘dumb’!”)

I have to admit, I’m guilty. When one of these wankers pops up to triumphantly announce that these test scores show that black people are inferior, I reflexively focus on the interpretation of the test scores, the overloaded concept of IQ, and the unwarranted expansion of a number into a tool for dismissing people. Maybe, if I were more often the target of such claims, I would be more likely to take offense at the part where he’s saying these human beings are ‘lacking in cognitive ability’, or whatever other euphemism they’re using today.

The problem isn’t that Murray got the math wrong (although bad assumptions make for bad math). The problem is that he abuses math to justify prior racist beliefs, exaggerating minor variations in measurements of arbitrary population groups to warrant bigotry against certain subsets. That ought to be the heart of our objection, that he attaches strong value judgments to numbers he has fished out of a great pool of complexity.

The objection ought also to be that his numbers somehow tend to conveniently support existing racist biases in our society: he consistently twists the interpretations to prop up ideas that would have been welcomed in the antebellum South.

We should be clear on why the Murray-Herrnstein argument was both morally offensive and poor social science. If they had stuck to what is ostensibly the core claim of the book, that IQ (whatever it is) is strongly correlated with one’s economic status, there would have been nothing objectionable about their work. In fact, it would even have been (as Murray himself has pointed out) totally consistent with a left-wing worldview. “IQ predicts economic outcomes” just means “some particular set of mental abilities happen to be well-adapted for doing the things that make you successful in contemporary U.S. capitalist society.” Testing for IQ is no different from testing whether someone can play the guitar or do 1000 jumping jacks or lick their elbow. And “the people who can do those certain valued things are forming a narrow elite at the expense of the underclass” is a conclusion left-wing people would be happy to entertain. After all, it’s no different than saying “people who have the good fortune to be skilled at finance are making a lot of money and thereby exacerbating inequality.” Noam Chomsky goes further and suggests that if we actually managed to determine the traits that predicted success under capitalism, more relevant than “intelligence” would probably be “some combination of greed, cynicism, obsequiousness and subordination, lack of curiosity and independence of mind, self-serving disregard for others, and who knows what else.”

I also learned something new. I read The Bell Curve years ago when it first came out, and it did effectively turn me away from ever wanting to hear another word from Charles Murray. But he has written other books! He also wrote Human Accomplishment: The Pursuit of Excellence in the Arts and Sciences, 800 B.C. to 1950, which Robinson examines to further reveal Murray’s implicit bigotry.

Human Accomplishment is one of the most absurd works of “social science” ever produced. If you want evidence proving Murray a “pseudoscientist,” it is Human Accomplishment rather than The Bell Curve that you should turn to. In it, he attempts to prove using statistics which cultures are objectively the most “excellent” and “accomplished,” demonstrating mathematically the inherent superiority of Western thought throughout the arts and sciences.

Oh god. I can tell what’s coming. Pages and pages of cherry-picking, oodles of selection bias that Murray will use to complain of cultural trends when all his elaborate statistics do is take the measure of the slant of his own brain. Pseudoscientists do this all the time; another example would be Ray Kurzweil, who has done a survey of history in which he selects which bits he wants to plot to support his claim of accelerating technological progress leading to his much-desired Singularity. Murray does the same thing to “prove” his prior assumption that black people “lack cognitive ability”.

How does he do this? By counting “significant” people. (First rule of pseudoscientists: turn your biases into numbers. That way, if anyone disagrees, you can accuse them of being anti-math.)

Murray purports to show that Europeans have produced the most “significant” people in literature, philosophy, art, music, and the sciences, and then posits some theories as to what makes cultures able to produce better versus worse things. The problem that immediately arises, of course, is that there is no actual objective way of determining a person’s “significance.” In order to provide such an “objective” measure, Murray uses (I am not kidding you) the frequency of people’s appearances in encyclopedias and biographical dictionaries. In this way, he says, he has shown their “eminence,” therefore objectively shown their accomplishments in their respective fields. And by then showing which cultures they came from, he can rank each culture by its cultural and scientific worth.

Then it just gets hilariously bad. Murray decides to enumerate accomplishment in music, of all things, by first dismissing everything produced since 1950 (the last half century has failed to produce “an abundance of timeless work”, don’t you know), and then, in his list of great musical accomplishment, does not include any black composers, except Duke Ellington. Robinson provides a brutal takedown.

Before 1950, black people had invented gospel, blues, jazz, R&B, samba, merengue, ragtime, zydeco, mento, calypso, and bomba. During the early 20th century, in the United States alone, the following composers and players were active: Ma Rainey, W.C. Handy, Scott Joplin, Louis Armstrong, Jelly Roll Morton, James P. Johnson, Fats Waller, Count Basie, Cab Calloway, Art Tatum, Charlie Parker, Charles Mingus, Lil Hardin Armstrong, Bessie Smith, Billie Holiday, Sister Rosetta Tharpe, Mahalia Jackson, J. Rosamond Johnson, Ella Fitzgerald, John Lee Hooker, Coleman Hawkins, Leadbelly, Earl Hines, Dizzy Gillespie, Miles Davis, Fats Navarro, Roy Brown, Wynonie Harris, Blind Lemon Jefferson, Blind Willie Johnson, Robert Johnson, Son House, Dinah Washington, Thelonious Monk, Muddy Waters, Art Blakey, Sarah Vaughan, Memphis Slim, Skip James, Louis Jordan, Ruth Brown, Big Jay McNeely, Paul Gayten, and Professor Longhair. (This list is partial.) When we talk about black American music of the early 20th century, we are talking about one of the most astonishing periods of cultural accomplishment in the history of civilization. We are talking about an unparalleled record of invention, the creation of some of the most transcendently moving and original artistic material that has yet emerged from the human mind. The significance of this achievement cannot be overstated. What’s more, it occurred without state sponsorship or the patronage of elites. In fact, it arose organically under conditions of brutal Jim Crow segregation and discrimination, in which black people had access to almost no mainstream institutions or material resources.

Jesus. This ought to be the approach we always take to Charles Murray: not that his calculations and statistics are a bit iffy, but that he can take a look at the music of the 20th century and somehow argue that contributions by the black community were inferior and not even worth mentioning. His biases are screamingly loud.

Unfortunately, while I suffered through The Bell Curve, Human Accomplishment sounds so outrageously stupid that I’m not at all tempted to read it, and I’m a guy who reads creationist literature to expose its flaws. Murray is more repulsive than even Kent Hovind (Hovind should not take that as an accolade; it’s an awfully low bar).

What happened to 2029?

Ray Kurzweil has been consistent over the years: he has these contrived graphs full of fudged data that tell him that The Singularity will arrive in 2029. 2029 is the magic date. We all just have to hang in there for 12 more years and then presto, immortality, incomprehensible wisdom, the human race rises to a new plane of existence.


2029 is getting kind of close. The Fudgening has begun!

The new date is 2045. No Rapture of the Nerds until I’m 88 years old. So disappoint.

Kurzweil continues to share his visions for the future, and his latest prediction was made at the most recent SXSW Conference, where he claimed that the Singularity – the moment when technology becomes smarter than humans – will happen by 2045.

Typical. You’ve got a specific prediction, you can see that it’s not coming true, so you start adjusting the details, maybe you change your mind on a few things (but it’s OK if you do it in advance, that way it doesn’t count against you), and you do everything you can to keep your accuracy score up, to fool the gullible.

Yeah, he’s got a score. 86%.

With a little wiggle room given to the timelines the author, inventor, computer scientist, futurist, and director of engineering at Google provides, a full 86 percent of his predictions – including the fall of the Soviet Union, the growth of the internet, and the ability of computers to beat humans at chess – have come to fruition.

Do any of those things count as surprising predictions in any way? They all sound rather mundane to me. The world is going to get warmer, there will be wars, we’ll have substantial economic ups and downs, some famous people will die, some notorious regimes will collapse, oceans rise, empires fall. Generalities do not impress me as indicative of deep insight.

Furthermore, that number is suspicious: you wouldn’t want to say 100%, because nobody would believe that. And you don’t want to say anything near 50%, because that sounds too close to chance. So you pick a number in between…say, somewhere between 75% and 90%. Wait, where did I get that range? That’s what psychics claim.

So, how accurate are psychics on an average? There are very few psychics who are 99% accurate in their predictions. The range in accuracy for the majority of real psychic readings are between 75% and 90%.

He’s using the standard tricks of the con man, ones that skeptics are supposed to be able to recognize and deal with. So how has Kurzweil managed to bamboozle so many people in the tech community?

I’m going to guess that being predisposed to libertarian fantasies and being blinded by your own privilege tends not to make one very skeptical or self-aware. Either that, or Kurzweil is very, very good at fooling people. I’m going to go with the former.

Finally! A perspective on AI I can agree with!

This Kevin Kelly dude has written a summary that I find fully compatible with the biology. Read the whole thing; it’s long, but it starts with a short summary that is easily digested.

Here are the orthodox, and flawed, premises of a lot of AI speculation.

  1. Artificial intelligence is already getting smarter than us, at an exponential rate.
  2. We’ll make AIs into a general purpose intelligence, like our own.
  3. We can make human intelligence in silicon.
  4. Intelligence can be expanded without limit.
  5. Once we have exploding superintelligence it can solve most of our problems.

That’s an accurate summary of the typical tech dudebro. Read a Ray Kurzweil book; check out the YouTube chatter about AI; look at where venture capital money is going; read some SF or watch a movie about AI. These really are the default assumptions that allow people to think AI is a terrible threat that is simultaneously going to lead to the Singularity and SkyNet. I think (hope) that most real AI researchers aren’t sunk into this nonsense, and are probably more aware of the genuine concerns and limitations of the field, just as most biologists roll their eyes at the magic molecular biology we see portrayed on TV.

And here are Kelly’s summary rebuttals:

  1. Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
  2. Humans do not have general purpose minds, and neither will AIs.
  3. Emulation of human thinking in other media will be constrained by cost.
  4. Dimensions of intelligence are not infinite.
  5. Intelligences are only one factor in progress.

My own comments:

  1. The whole concept of IQ is a crime against humanity. It may have once been an interesting, tentative hypothesis (although even in the beginning it was a tool to demean people who weren’t exactly like English/American psychometricians), but it has long outlived its utility and now is only a blunt instrument to hammer people into a simple linear mold. It’s also even more popular with racists nowadays.

  2. The funny thing about this point is that the same people who think IQ is the bee’s knees also think that a huge inventory of attitudes and abilities and potential is hard-coded into us. Their idea of humanity is inflexible and the opposite of general purpose.

  3. Yeah, why? Why would we want a computer that can fall in love, get angry, crave chocolate donuts, have hobbies? We’d have to intentionally shape the computer mind to have similar predilections to the minds of apes with sloppy chemistry. This might be an interesting but entirely non-trivial exercise for computer scientists, but how are you going to get it to pay for itself?

  4. One species on earth has human-like intelligence, and it took 4 billion years (or 500 million, if you’d rather start the clock at the emergence of complex multicellular life) of evolution to get here. Even in our lineage the increase hasn’t been linear, but in short, infrequent steps. Either intelligence beyond a certain point confers no particular advantage, or increasing intelligence is more difficult and has a lot of tradeoffs.

  5. Ah, the ideal of the Vulcan Spock. A lot of people — including a painfully large fraction of the atheist population — have this idea that the best role model is someone emotionless and robot-like, with a calculator-like intelligence. If only we could all weigh all the variables, we’d all come up with the same answer, because values and emotions are never part of the equation.

It’s a longish article at 5,000 words, but in comparison to that 40,000-word abomination on AI from WaitButWhy it’s a reasonable read, and, most importantly, unlike that one it’s actually right.

The gospel according to St Ray

Deja vu, man. Transhumanism is just Christian theology retranslated. An ex-Christian writes about her easy transition from dropping out of Bible school to adopting Ray Kurzweil’s “bible”, The Age of Spiritual Machines.

Many transhumanists such as Kurzweil contend that they are carrying on the legacy of the Enlightenment – that theirs is a philosophy grounded in reason and empiricism, even if they do lapse occasionally into metaphysical language about “transcendence” and “eternal life”. As I read more about the movement, I learned that most transhumanists are atheists who, if they engage at all with monotheistic faith, defer to the familiar antagonisms between science and religion. “The greatest threat to humanity’s continuing evolution,” writes the transhumanist Simon Young, “is theistic opposition to Superbiology in the name of a belief system based on blind faith in the absence of evidence.”

Yet although few transhumanists would likely admit it, their theories about the future are a secular outgrowth of Christian eschatology. The word transhuman first appeared not in a work of science or technology but in Henry Francis Carey’s 1814 translation of Dante’s Paradiso, the final book of the Divine Comedy. Dante has completed his journey through paradise and is ascending into the spheres of heaven when his human flesh is suddenly transformed. He is vague about the nature of his new body. “Words may not tell of that transhuman change,” he writes.

I’ve never trusted transhumanism. There’s a grain of truth to it — we will change over time, and technology is a force in our lives — but there’s this weird element of dogmatism where they insist that they have seen the future and it will happen just so and if you don’t believe in the Singularity you are anti-science. Or if you don’t believe in Superbiology, whatever the hell that is.

Anyway, read the whole thing. I’m currently at a conference at HHMI, and we’re shortly going to get together to talk about real biology. I don’t think the super kind is going to be anywhere on the agenda.

More money than sense

Take one terrible NY Times pundit who lives on an alien planet of her own, toss her into the esoteric hothouse world of Silicon Valley, and all you’re going to get is a hot mess: a weird dive into the delusions of very rich smart people with no reality brakes to slow them down. Maureen Dowd talks to Elon Musk and other pretentious luminaries. It’s painful if you prioritize critical thinking.

They are two of the most consequential and intriguing men in Silicon Valley who don’t live there. Hassabis, a co-founder of the mysterious London laboratory DeepMind, had come to Musk’s SpaceX rocket factory, outside Los Angeles, a few years ago. They were in the canteen, talking, as a massive rocket part traversed overhead. Musk explained that his ultimate goal at SpaceX was the most important project in the world: interplanetary colonization.

Hassabis replied that, in fact, he was working on the most important project in the world: developing artificial super-intelligence. Musk countered that this was one reason we needed to colonize Mars—so that we’ll have a bolt-hole if A.I. goes rogue and turns on humanity. Amused, Hassabis said that A.I. would simply follow humans to Mars.

In a world overpopulated with billions of people, where climate change is a looming threat, where all those people are a petri dish for cultivating new diseases, where the majority live in poverty, where in many places clean water is a struggle to find, where the most militarily powerful nation has just elected an incompetent, narcissistic clown to be in charge, two men sit down to talk. One says the most important project in the world is to put a tiny number of people on a barren rock. The other says the most important project is to create more powerful computers that can think on their own.

And then the two of them start arguing over the threat of artificial intelligences enslaving, or liberating, humanity. These intelligences don’t exist, and may not exist, and will definitely not exist in the form these smart guys are imagining. It is the grown-up, over-paid version of two children arguing over who would win in a fight, Darth Vader or Magneto? The Millennium Falcon or the Starship Enterprise? Jesus or Buddha?

And then Ray Kurzweil shows up.

Fuck me.

Dowd just parrots these absurd conversations and doesn’t offer any critical perspectives, and lord help us, the participants certainly don’t. Can we just lock them all in a well-padded room with an assortment of action figures and tell them to get to work to resolve the most important dispute in the universe, which toy is powerfulest?

Or could we at least have one skeptic in this mess to try and focus the discussions on something real?

If everyone from Yong to Zimmer says it’s true, it must be

You must have already read the tragic news: scientists have determined that I am doomed to die by 2072, when I turn 115, if not sooner. This was figured out by analyzing demographic data and seeing that people seem to hit a ceiling around age 115; the mean life expectancy keeps shifting upwards, but the maximum age seems to have reached a plateau. Carl Zimmer gives the clearest explanation of the methodology behind this conclusion, and Ed Yong gives a good description of the phenomenon of death in the very old.

The ceiling is probably hardwired into our biology. As we grow older, we slowly accumulate damage to our DNA and other molecules, which turns the intricate machinery of our cells into a creaky, dysfunctional mess. In most cases, that decline leads to diseases of old age, like cancer, heart disease, or Alzheimer’s. But if people live past their 80s or 90s, their odds of getting such illnesses actually start to fall—perhaps because they have protective genes. Supercentenarians don’t tend to die of major diseases—Jeanne Calment died of natural causes—and many of them are physically independent even at the end of their lives. But they still die, “simply because too many of their bodily functions fail,” says Vijg. “They can no longer continue to live.”

I agree with all that. I think there is an upper bound to how long meat can keep plodding about on Earth before it reaches a point of critical failure. But I’m going to disagree with Yong on one thing: he goes on to explain it in evolutionary terms, with the standard story that there hasn’t been selection for longevity genes, because all the selection has been for genes for vigor in youth, which may actually have the side effect of accelerating mortality.

This is true, as far as it goes. But I think we’re seeing a different phenomenon: a physico-chemical limitation that isn’t going to be avoided, no matter how refined and potent ‘longevity genes’ become.

When organized pieces of matter are stressed or experience wear, their level of organization decreases. You simply can’t avoid that. Expose a piece of metal in a car to prolonged periods of vibration and it will eventually fail, not because it was badly designed, but because its nature and the nature of its activity dictates that it will eventually, inevitably break.

Likewise a soap bubble is ephemeral by its nature. The same fluid properties that enable it to be blown doom it — the film will flow over time, it will tend to thin at the top, and eventually it will pop. There’s no way to suspend the physics of a soap bubble to let it last significantly longer, shy of freezing it and defeating the whole point of a soap bubble.

In people, we have a name for this wear and tear and stress: it’s called “living”. All these different things we do that make it worth existing are also fundamentally damaging — there’s no escaping the emergence of an ultimate point of failure.

115 years sounds like a reasonable best estimate from the current evidence. I’d also point out that this does not imply that we won’t find a common critical failure point, and find a way for medical science to push it up a year or five…but every such patch adds another layer of complexity to the system, and represents another potential point of failure. We’re just going to asymptotically approach the upper bound, whatever it is.

That’s OK. I’ll take 115 years. It also helps that it’s going to really piss off Aubrey de Grey and Ray Kurzweil.

Deliver us from the fury of the cyborgs and grant us the peace of cyberspace, O Lord

David Brin reviews some recent books on the future of artificial intelligence. He’s more optimistic than I am. For one, I think most of the AI pundits are little better than glib con men, so any survey of the literature should consist mostly of culling all the garbage. No, really, please don’t bring up Kurzweil again. Also, any obscenely rich Silicon Valley pundit who predicts a glorious future of infinite wealth because technology can just fuck right off.

But there’s also some stuff I agree with. People who authoritatively declare that this is how the future will be, and that is how people will respond to it, are not actually being authoritative, because they won’t be there, but are being authoritarian. We set the wheel rolling, and we hope that we aren’t setting it on a path to future destruction, but we don’t get to dictate to future generations how they should deal with it. To announce that we’ve created a disaster and that our grandchildren will react by creating a dystopian nightmare world sells them short, and pretending that they’ll use the tools we have generously given them to create a glorious bright utopia is stealing all the credit. People will be people. Finger-wagging from the distant past will have zero or negative influence.

Across all of those harsh millennia, people could sense that something was wrong. Cruelty and savagery, tyranny and unfairness vastly amplified the already unsupportable misery of disease and grinding poverty. Hence, well-meaning men and women donned priestly robes and… preached!

They lectured and chided. They threatened damnation and offered heavenly rewards. Their intellectual cream concocted incantations of either faith or reason, or moral suasion. From Hindu and Buddhist sutras to polytheistic pantheons to Judeo-Christian-Muslim laws and rituals, we have been urged to behave better by sincere finger-waggers since time immemorial. Until finally, a couple of hundred years ago, some bright guys turned to all the priests and prescribers and asked a simple question:

“How’s that working out for you?”

In fact, while moralistic lecturing might sway normal people a bit toward better behavior, it never affects the worst human predators, parasites and abusers, just as it won’t divert the most malignant machines. Indeed, moralizing often empowers them, offering ways to rationalize exploiting others.

Beyond artificial intelligence, a better example might be climate change — that’s one monstrous juggernaut we’ve set rolling into the future. The very worst thing we can do is start lecturing posterity about how they should deal with it, since we don’t really know all the consequences that are going to arise, and it’s rather presumptuous for us to create the problem, and then tell our grandchildren how they should fix it. It’s better that we set an example and address the problems that emerge now, do our best to minimize foreseeable consequences, and trust the competence of future generations to cope with their situations, as driven by necessities we have created.

They’re probably not going to thank us for any advice, no matter how well-meaning, and are more likely to curse us for our neglect and laziness and exploitation of the environment. If you really care about the welfare of future generations, you’ll do what you can now, not tell them how they’re supposed to be.

The AI literature comes across as extremely silly, too.

What will happen as we enter the era of human augmentation, artificial intelligence and government-by-algorithm? James Barrat, author of Our Final Invention, said: “Coexisting safely and ethically with intelligent machines is the central challenge of the twenty-first century.”

Jesus. We don’t have these “intelligent machines” yet, and may not — I think AI researchers always exaggerate the imminence of their breakthroughs, and the simplicity of intelligence. So this guy is declaring that the big concern of this century, which is already 1/6th over, is an ethical crisis in dealing with non-existent entities? The comparison with religious authorities is even more apt.

I tell you what. Once we figure out how to coexist safely and ethically with our fellow human beings, then you can pontificate on how to coexist safely and ethically with imaginary androids.

Statistics are not a substitute for taking action

Yesterday it was Ray Kurzweil. Today it is Steven Pinker. What is it with these people trying to reassure us that the world is getting better for the average person? They’re the real world equivalent of the ‘This is fine’ dog.

Look, I agree with them: in many ways, the world is gradually getting better for some of us, and slowly, increasingly more people are acquiring greater advantages. I am personally in a pretty comfortable position, and I’m sure life is even better for oblivious buffoons hired by Google to mumble deepities, or for Harvard professors. Pinker and Kurzweil even make the same trivial argument that it’s all the fault of the news:

News is a misleading way to understand the world. It’s always about events that happened and not about things that didn’t happen. So when there’s a police officer that has not been shot up or city that has not had a violent demonstration, they don’t make the news. As long as violent events don’t fall to zero, there will always be headlines to click on. The data show — since the Better Angels of Our Nature was published — rates of violence continue to go down.


At last, a sensible perspective on aging


The world is full of naive people who think we’re going to be immortal some day soon, in spite of all the evidence that says no (Kurzweil is a prominent example of such techno-optimism, as is Aubrey de Grey). It’s not just bad biology, it’s also bad physics, as Peter Hoffman explains. We’re all made of parts that are constantly being battered by thermal energy as an essential part of their operation, and damage accumulates until…we break down. This is unavoidable.

If this interpretation of the data is correct, then aging is a natural process that can be reduced to nanoscale thermal physics—and not a disease. Up until the 1950s the great strides made in increasing human life expectancy were almost entirely due to the elimination of infectious diseases, a constant risk factor that is not particularly age dependent. As a result, life expectancy (median age at death) increased dramatically, but the maximum life span of humans did not change. An exponentially increasing risk eventually overwhelms any reduction in constant risk. Tinkering with constant risk is helpful, but only to a point: The constant risk is environmental (accidents, infectious disease), but much of the exponentially increasing risk is due to internal wear. Eliminating cancer or Alzheimer’s disease would improve lives, but it would not make us immortal, or even allow us to live significantly longer.

The article points out that we can accurately model mortality with only a few general parameters, and they’re rather fundamental and physics-dependent — we can tweak the biology as much as possible, but the underlying physical properties are going to be untouchable.
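The article doesn’t spell out the equations, but the standard few-parameter model of this kind is the Gompertz–Makeham law: a constant environmental hazard plus an exponentially rising intrinsic one. Here’s a minimal sketch (the parameter values are illustrative, not fitted to any real cohort) of the point in the quote above: slashing the constant risk raises the median age at death dramatically, while the ceiling barely moves.

```python
import math

def survival(t, A, B=3e-5, G=0.085):
    """Gompertz-Makeham survival to age t (years): hazard = A + B*exp(G*t).

    A is the constant (environmental) risk; B*exp(G*t) is the intrinsic,
    exponentially rising risk of internal wear. Values are illustrative.
    """
    return math.exp(-A * t - (B / G) * (math.exp(G * t) - 1.0))

def median_age_at_death(A, B=3e-5, G=0.085):
    """First whole-year age at which survival falls below 50%."""
    t = 0
    while survival(t, A, B, G) > 0.5:
        t += 1
    return t

# Cutting the constant risk (think: curing infectious disease) moves the
# median way up...
print(median_age_at_death(A=1e-2))  # high constant risk: median in the 60s
print(median_age_at_death(A=1e-4))  # low constant risk: median near 90
# ...but the exponential term still flattens everyone near the same ceiling:
print(survival(115, A=1e-4))        # well under 1% reach 115 either way
```

With the exponential term fixed, no amount of tinkering with `A` gets a meaningful fraction of the population past 115, which is exactly the “maximum life span didn’t change” pattern the quote describes.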

I would add, though, that while the mortality curves he shows are inevitable, biology can stretch and contract them, and we do have measurable variation in different species that shows that there is a kind of scaling factor to the curves in biological diversity — it’s not as if every species that lives at the same average temperature has an identical life expectancy! Even within the human species, there are genetic variants that affect longevity, and clearly different life-style choices influence mortality, even though we’re every one of us ticking along at roughly the same 37°C. So please, yes, we can reduce the incidence of heart disease and cancer, and get a longer average lifespan…but even if we were to eradicate those major causes of mortality, we’re all going to get up around the century mark, and then we’re going to plummet off a cliff because of all the accumulated cellular damage and declining physiological efficiency.

By the way, one odd thing happened when I tried to find an illustration to accompany this post: I searched on “aging”, and by a huge margin the photos on the web are of women. I am forced to conclude that only women suffer from the ravages of age; men simply get mature. But at least it’s one topic that women get to dominate!

The delusion of immortality


Imagine all the poor transhumanists who were born in the 19th century. They would have been fantasizing about all the rapid transformations in their society, and blithely extrapolating forward. Why, in a few years, we’ll all have steam boilers surgically implanted in our bellies, and our diet will include a daily lump of coal! Canals will be dug everywhere, and you’ll be able to commute to work in your very own personal battleship! There will be ubiquitous telegraphy, and we’ll have tin hats that you can plug into cords hanging from the ceiling in your local coffeeshop, and get Morse code tapped directly onto your skull!

Alas, they didn’t have a Ray Kurzweil or Aubrey de Grey to con them with absurd exaggerations.
