If everyone from Yong to Zimmer says it’s true, it must be

You must have already read the tragic news: scientists have determined that I am doomed to die by 2072, when I turn 115, if not sooner. This was figured out by analyzing demographic data and seeing that people seem to hit a ceiling around age 115; the mean life expectancy keeps shifting upwards, but the maximum age seems to have reached a plateau. Carl Zimmer gives the clearest explanation of the methodology behind this conclusion, and Ed Yong gives a good description of the phenomenon of death in the very old.

The ceiling is probably hardwired into our biology. As we grow older, we slowly accumulate damage to our DNA and other molecules, which turns the intricate machinery of our cells into a creaky, dysfunctional mess. In most cases, that decline leads to diseases of old age, like cancer, heart disease, or Alzheimer’s. But if people live past their 80s or 90s, their odds of getting such illnesses actually start to fall—perhaps because they have protective genes. Supercentenarians don’t tend to die of major diseases—Jeanne Calment died of natural causes—and many of them are physically independent even at the end of their lives. But they still die, “simply because too many of their bodily functions fail,” says Vijg. “They can no longer continue to live.”

I agree with all that. I think there is an upper bound to how long meat can keep plodding about on Earth before it reaches a point of critical failure. But I’m going to disagree with Yong on one thing: he goes on to explain it in evolutionary terms, with the standard story that there hasn’t been selection for longevity genes, because all the selection has been for genes for vigor in youth, which may actually have the side effect of accelerating mortality.

This is true, as far as it goes. But I think we’re seeing a different phenomenon: a physico-chemical limitation that isn’t going to be avoided, no matter how refined and potent ‘longevity genes’ become.

When organized pieces of matter are stressed or experience wear, their level of organization decreases. You simply can’t avoid that. Expose a piece of metal in a car to prolonged vibration and it will eventually fail, not because it was badly designed, but because its nature and the nature of its activity dictate that it will eventually, inevitably break.

Likewise a soap bubble is ephemeral by its nature. The same fluid properties that enable it to be blown doom it — the film will flow over time, it will tend to thin at the top, and eventually it will pop. There’s no way to suspend the physics of a soap bubble to let it last significantly longer, shy of freezing it and defeating the whole point of a soap bubble.

In people, we have a name for this wear and tear and stress: it’s called “living”. All the different things we do that make existence worthwhile are also fundamentally damaging — there’s no escaping the emergence of an ultimate point of failure.

115 years sounds like a reasonable best estimate from the current evidence. I’d also point out that this does not imply that we won’t identify a common critical failure point and find a way for medical science to push the limit up a year or five…but every such patch adds another layer of complexity to the system, and represents another potential point of failure. We’re just going to asymptotically approach the upper bound, whatever it is.
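Here’s a crude way to see the “every patch is a new point of failure” logic. This is a toy reliability model I cooked up purely for illustration, with completely made-up numbers: treat the body as a chain of components that all have to keep working, and let each medical patch halve the worst failure rate while adding one small new failure mode of its own.

```python
# Toy reliability model: the body as a "series system" in which every
# component must keep working. Each patch halves the worst failure rate
# but introduces one small new failure mode. All numbers are invented.

def expected_lifespan(rates):
    # For independent, exponentially distributed failure times, the system
    # fails at the first component failure, so the rates simply add.
    return 1.0 / sum(rates)

rates = [0.005, 0.003, 0.002]  # hypothetical failure rates, per year
patch_cost = 0.001             # each intervention brings its own small risk

for patch in range(10):
    print(f"after {patch:2d} patches: expected lifespan {expected_lifespan(rates):6.1f} years")
    rates.sort(reverse=True)
    rates[0] /= 2              # fix the current worst failure mode...
    rates.append(patch_cost)   # ...at the cost of one new one
```

Run it and the expected lifespan climbs quickly at first, plateaus, and then starts slipping backwards as the patches themselves become the dominant risk. Real mortality doesn’t follow exponential failure times, of course, but the diminishing returns are the point.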

That’s OK. I’ll take 115 years. It also helps that it’s going to really piss off Aubrey de Grey and Ray Kurzweil.

Deliver us from the fury of the cyborgs and grant us the peace of cyberspace, O Lord

David Brin reviews some recent books on the future of artificial intelligence. He’s more optimistic than I am. For one, I think most of the AI pundits are little better than glib con men, so any survey of the literature should consist mostly of culling all the garbage. No, really, please don’t bring up Kurzweil again. Also, any obscenely rich Silicon Valley pundit who predicts a glorious future of infinite wealth because technology can just fuck right off.

But there’s also some stuff I agree with. People who authoritatively declare that this is how the future will be, and that is how people will respond to it, are not actually being authoritative, because they won’t be there, but are being authoritarian. We set the wheel rolling, and we hope that we aren’t setting it on a path to future destruction, but we don’t get to dictate to future generations how they should deal with it. To announce that we’ve created a disaster and that our grandchildren will react by creating a dystopian nightmare world sells them short, and pretending that they’ll use the tools we have generously given them to create a glorious bright utopia is stealing all the credit. People will be people. Finger-wagging from the distant past will have zero or negative influence.

Across all of those harsh millennia, people could sense that something was wrong. Cruelty and savagery, tyranny and unfairness vastly amplified the already unsupportable misery of disease and grinding poverty. Hence, well-meaning men and women donned priestly robes and… preached!

They lectured and chided. They threatened damnation and offered heavenly rewards. Their intellectual cream concocted incantations of either faith or reason, or moral suasion. From Hindu and Buddhist sutras to polytheistic pantheons to Judeo-Christian-Muslim laws and rituals, we have been urged to behave better by sincere finger-waggers since time immemorial. Until finally, a couple of hundred years ago, some bright guys turned to all the priests and prescribers and asked a simple question:

“How’s that working out for you?”

In fact, while moralistic lecturing might sway normal people a bit toward better behavior, it never affects the worst human predators, parasites and abusers — just as it won’t divert the most malignant machines. Indeed, moralizing often empowers them, offering ways to rationalize exploiting others.

Beyond artificial intelligence, a better example might be climate change — that’s one monstrous juggernaut we’ve set rolling into the future. The very worst thing we can do is start lecturing posterity about how they should deal with it, since we don’t really know all the consequences that are going to arise, and it’s rather presumptuous for us to create the problem, and then tell our grandchildren how they should fix it. It’s better that we set an example and address the problems that emerge now, do our best to minimize foreseeable consequences, and trust the competence of future generations to cope with their situations, as driven by necessities we have created.

They’re probably not going to thank us for any advice, no matter how well-meaning, and are more likely to curse us for our neglect and laziness and exploitation of the environment. If you really care about the welfare of future generations, you’ll do what you can now, not tell them how they’re supposed to be.

The AI literature comes across as extremely silly, too.

What will happen as we enter the era of human augmentation, artificial intelligence and government-by-algorithm? James Barrat, author of Our Final Invention, said: “Coexisting safely and ethically with intelligent machines is the central challenge of the twenty-first century.”

Jesus. We don’t have these “intelligent machines” yet, and may not — I think AI researchers always exaggerate the imminence of their breakthroughs, and the simplicity of intelligence. So this guy is declaring that the big concern of this century, which is already 1/6th over, is an ethical crisis in dealing with non-existent entities? The comparison with religious authorities is even more apt.

I tell you what. Once we figure out how to coexist safely and ethically with our fellow human beings, then you can pontificate on how to coexist safely and ethically with imaginary androids.

Statistics are not a substitute for taking action

Yesterday it was Ray Kurzweil. Today it is Steven Pinker. What is it with these people trying to reassure us that the world is getting better for the average person? They’re the real-world equivalent of the ‘This is fine’ dog.

Look, I agree with them: in many ways, the world is gradually getting better for some of us, and slowly, increasingly more people are acquiring greater advantages. I am personally in a pretty comfortable position, and I’m sure life is even better for oblivious buffoons hired by Google to mumble deepities, or for Harvard professors. Pinker and Kurzweil even make the same trivial argument that it’s all the fault of the news:

News is a misleading way to understand the world. It’s always about events that happened and not about things that didn’t happen. So when there’s a police officer that has not been shot up or a city that has not had a violent demonstration, they don’t make the news. As long as violent events don’t fall to zero, there will always be headlines to click on. The data show — since The Better Angels of Our Nature was published — rates of violence continue to go down.
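Pinker’s point about selective reporting is easy to put numbers on. A quick back-of-the-envelope sketch (my numbers, invented purely for illustration): if violent events arrive roughly like a Poisson process, the chance of at least one headline-worthy event on any given day barely budges even as the underlying rate keeps getting cut.

```python
import math

# Chance of at least one reportable event on a given day, assuming events
# arrive as a Poisson process. The daily rates below are invented.
def p_headline(rate_per_day):
    return 1 - math.exp(-rate_per_day)

for rate in (8, 4, 2, 1):
    print(f"{rate} events/day on average -> a headline on {p_headline(rate):.1%} of days")
```

Cut the underlying rate by a factor of eight and the news still has something grim to report on most days.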


At last, a sensible perspective on aging


The world is full of naive people who think we’re going to be immortal some day soon, in spite of all the evidence that says no (Kurzweil is a prominent example of such techno-optimism, as is Aubrey de Grey). It’s not just bad biology, it’s also bad physics, as Peter Hoffman explains. We’re all made of parts that are constantly being battered by thermal energy as an essential part of their operation, and damage accumulates until…we break down. This is unavoidable.

If this interpretation of the data is correct, then aging is a natural process that can be reduced to nanoscale thermal physics—and not a disease. Up until the 1950s, the great strides made in increasing human life expectancy were almost entirely due to the elimination of infectious diseases, a constant risk factor that is not particularly age-dependent. As a result, life expectancy (median age at death) increased dramatically, but the maximum life span of humans did not change. An exponentially increasing risk eventually overwhelms any reduction in constant risk. Tinkering with constant risk is helpful, but only to a point: The constant risk is environmental (accidents, infectious disease), but much of the exponentially increasing risk is due to internal wear. Eliminating cancer or Alzheimer’s disease would improve lives, but it would not make us immortal, or even allow us to live significantly longer.

The article points out that we can accurately model mortality with only a few general parameters, and they’re rather fundamental and physics-dependent — we can tweak the biology as much as possible, but the underlying physical properties are going to be untouchable.
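For the curious, the standard few-parameter model Hoffman is alluding to is the Gompertz–Makeham law: a constant extrinsic hazard A (accidents, infection) plus an intrinsic hazard that grows exponentially with age, doubling roughly every eight years. Here’s a minimal sketch, with rough illustrative parameter values that aren’t fit to any real dataset, showing why shrinking A moves the median age at death a lot while barely touching the maximum:

```python
import math

# Gompertz-Makeham survival: constant hazard A plus B*exp(gamma*t).
# B and gamma are rough illustrative values, not fits to real data.
def survival(t, A, B=3e-5, gamma=0.085):
    return math.exp(-A * t - (B / gamma) * (math.exp(gamma * t) - 1))

def age_when_survival_falls_to(frac, A):
    t = 0.0
    while survival(t, A) > frac:
        t += 0.1
    return t

for A in (0.01, 0.001, 0.0):  # progressively eliminating extrinsic risk
    median = age_when_survival_falls_to(0.5, A)
    tail = age_when_survival_falls_to(1e-4, A)  # roughly the 1-in-10,000 survivor
    print(f"A = {A:6.4f}: median ~{median:5.1f} yr, last survivors ~{tail:5.1f} yr")
```

Even setting the extrinsic hazard to zero, the exponential term slams the survival curve into the same wall: the median climbs by decades while the tail moves by only a couple of years.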

I would add, though, that while the mortality curves he shows are inevitable, biology can stretch and compress them; the measurable variation among species shows that there’s a kind of scaling factor to the curves — it’s not as if every species living at the same average temperature has identical life expectancies! Even within the human species, there are genetic variants that affect longevity, and clearly different lifestyle choices influence mortality, even though every one of us is ticking along at roughly the same 37°C. So please, yes, we can reduce the incidence of heart disease and cancer, and get a longer average lifespan…but even if we were to eradicate those major causes of mortality, we’re all going to get up around the century mark, and then we’re going to plummet off a cliff because of all the accumulated cellular damage and declining physiological efficiency.


By the way, one odd thing happened when I tried to find an illustration to accompany this post: I searched on “aging”. By a huge margin, the photos on the web illustrate it with women. I am forced to conclude that only women suffer from the ravages of age; men simply get mature. But at least it’s one topic that women get to dominate!

The delusion of immortality


Imagine all the poor transhumanists who were born in the 19th century. They would have been fantasizing about all the rapid transformations in their society, and blithely extrapolating forward. Why, in a few years, we’ll all have steam boilers surgically implanted in our bellies, and our diet will include a daily lump of coal! Canals will be dug everywhere, and you’ll be able to commute to work in your very own personal battleship! There will be ubiquitous telegraphy, and we’ll have tin hats that you can plug into cords hanging from the ceiling in your local coffeeshop, and get Morse code tapped directly onto your skull!

Alas, they didn’t have a Ray Kurzweil or Aubrey de Grey to con them with absurd exaggerations.


Reconstructing a brain


Every once in a while, I get some glib story from believers in the Singularity and transhumanism that all we have to do to upload a brain into a computer is make lots of really thin sections and reconstruct every single cell and every single connection, put that data into a machine with a sufficiently robust simulator that executes all the things that a living brain does, and presto! You’ve got a virtual simulation of the person! I’ve explained before how that overly trivializes and reduces the problem to an absurd degree, but guess what? Real scientists, not the ridiculous acolytes of Ray Kurzweil, have been working at this problem realistically. The results are interesting, but also reveal why this work has a long, long way to go.

In a paper from Jeff Lichtman’s group with many authors, they revealed the results of taking many ultrathin sections of a tiny dot of tissue from mouse cortex, scanning them, and then making 3-D reconstructions. There was a time in my life when I was doing this sort of thing: long hours at the ultramicrotome, using glass knives to slice sequential sections from tissue embedded in an epoxy block, and then collecting them on delicate copper grids, a few at a time, to put on the electron microscope. One of the very cool things about this paper was reading about all the ways they automated this tedious process. It was impressive that they managed to get a complete record of 1500 µm³ of the brain, with a complete map of all the cells and synapses.
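To appreciate the gulf between this result and whole-brain “uploading”, just do the arithmetic on volume. A back-of-the-envelope sketch, using the 1500 µm³ figure from the paper and round, textbook-ish numbers for brain volumes (so treat the outputs as order-of-magnitude only):

```python
# Scale-up from the reconstructed speck to whole brains, order of magnitude.
um3_per_cm3 = 1e12                 # (10,000 um per cm) cubed

reconstructed = 1500               # um^3, the imaged bit of mouse cortex
mouse_brain = 0.5 * um3_per_cm3    # ~0.5 cm^3, a round estimate
human_brain = 1200 * um3_per_cm3   # ~1200 cm^3, a round estimate

print(f"mouse brain: ~{mouse_brain / reconstructed:.0e}x the imaged volume")
print(f"human brain: ~{human_brain / reconstructed:.0e}x the imaged volume")
```

That’s a scale-up of roughly 10⁸ just to finish one mouse brain, and nearly 10¹² for a human one, before you even ask what the wiring diagram means.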


Reality constrains the possibilities

Gary Marcus, the psychologist who wrote that most excellent book, Kluge: The Haphazard Construction of the Human Mind, has written a nice essay that tears into that most annoying concept that some skeptics and atheists love: that without a definitive disproof, we’re incapable of dismissing certain especially vague ideas. It’s a mindset that effectively promotes foundation-free ideas — by providing an escape hatch from criticism, it allows kooks and delusional thinkers, who are not necessarily stupid at all, to shape their claims to specifically avoid that limited version of scientific inquiry.

Marcus goes after two representatives of this fuzzy-thinking concept. Jürgen Schmidhuber is an acolyte of Kurzweil who argues for a “computational theology” that claims that there is no evidence against his idea, therefore the universe could be a giant software engine written by a great god-programmer. David Eagleman is a neuroscientist who has gotten some press for Possibilianism, the idea that because the universe is so vast, we should acknowledge that there could be all kinds of weird possibilities out there — even god-like beings. “Could be” is not a synonym for “is”, however, and science actually demands a little more rigor.

Some people love to claim that an absence of a single definitive test against an idea means that it is perfectly reasonable to continue believing in it. Marcus will have none of that.

In particular, Eagleman, who drapes himself in science by declaring to “have devoted my life to scientific pursuit,” might think of each extant religion as an experiment. Followers of many religions have looked for direct evidence of their beliefs, but (by Eagleman’s own assessment) systematically come up dry. And, crucially, statisticians have shown decisively that a collection of failed efforts weighs more heavily than any single failed effort on its own. The same thing happened, of course, when scientists looked for phlogiston, and cold fusion, too. Nobody has proven cold fusion doesn’t exist, but most scientists would assign a low probability to it because so many attempts at replicating the original have failed. Any agnostic is free to believe that his favorite religion has not yet been completely disproven. But anyone who wishes to bring science into the argument must acknowledge that the evidence thus far is weak, especially when it is combined statistically, in the fashion of a meta-analysis. To emphasize the qualitative conclusion (X has not been absolutely proven to be false) while ignoring the collective weight of the quantitative data (i.e., that most evidence points away from X) is a fallacy, akin to holding out a belief in flying reindeer on the grounds that there could yet be sleighs that we have not yet seen.
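Marcus’s meta-analysis point is just likelihoods multiplying. A toy Bayesian sketch, with the prior and error rates invented purely for illustration: suppose any single test of a claim is weak, so a real effect would still fail half the time. One failure tells you almost nothing; ten failures are devastating.

```python
# Toy Bayesian accumulation of failed tests. All numbers invented.
prior = 0.5              # start generously: 50/50 the claim is true
p_fail_if_true = 0.50    # a weak test misses a real effect half the time
p_fail_if_false = 0.95   # a nonexistent effect almost always fails

posterior = prior
for n in range(1, 11):
    # Bayes' rule after observing one more failed test
    num = p_fail_if_true * posterior
    posterior = num / (num + p_fail_if_false * (1 - posterior))
    print(f"after {n:2d} failed tests: P(claim true) = {posterior:.4f}")
```

Each individual failure only multiplies the odds by about 0.53, but ten of them drop a 50/50 prior to under one percent. That’s the collective weight Marcus is talking about.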

That’s why I’m an atheist. Not just because there is no evidence for any god, but because all the available evidence points towards natural processes and undirected causes for the entirety of space and time. I wish people could get that into their heads. When we atheist-scientists go off to meetings and stand up for an hour talking about something or other, we generally aren’t reciting a religious litany and saying there’s no evidence for each assertion; rather, we go talk about cool stuff in science, how the world actually works, what the universe really looks like…and our explanations are sufficient without quoting a single Bible verse.

Will Smith must be stopped

He has a new movie coming out this summer, After Earth. It looks awful, but then, that’s what I’ve come to expect from Will Smith’s sci-fi outings.

Jebus. Anyone remember that abomination, I, Robot? How about I Am Legend? I steer clear of these movies with a high concept and a big name star, because usually what you find is that the story is a concoction by committee with an agenda solely to recoup the costs and make lots of money…so we get buzzwords and nods to high-minded causes and the usual action-adventure pap. Just looking at the trailer, I’m getting pissed off: it’s supposed to be a pro-environmentalism movie, and what’s it about? A guy running around in the wilderness fighting off the hostile wildlife.

Anyway, I got one of those generic invitations to help reassure the world that it’s a good science movie. Here’s part of what I was sent:

On May 31st, Columbia Pictures is releasing what is perhaps the biggest movie of the summer, After Earth, starring Will Smith, directed by M. Night Shyamalan.

No. Just no. Shyamalan is a hack. Why do people keep handing him big money and big projects?

There are a lot of science parallels to this film, and I write to see if you or a colleague might be interested in interviewing one of After Earth’s top filmmakers and or a scientist associated herein.

Famous futurist Ray Kurzweil

Jesus fuck. Kurzweil is a consultant? Pill-popping techno-geek with an immortality fetish and no understanding of biology at all is the consultant on a movie with a supposed environmental message? WHY?

explored with Will, his son Jaden Smith, and Elon Musk, how science fact meets science fiction in After Earth, and this can be seen here https://www.youtube.com/watch?v=RocpHuJWolc. As well, XPRIZE has teamed up with Sony to launch an unprecedented robotics challenge (information attached). What’s more, NASA plans to disseminate a lesson plan to teachers based on the scientific implications of After Earth, as seen here http://www.lifeafterearthscience.com/.

OK, I checked out the lesson plan. It’s not bad, but it has nothing to do with the movie — it’s all about biodiversity and cycles and climate change and that sort of thing, by a respectable author of biology textbooks. It’s a merkin to cover the toxic crap that will be in the movie.

In After Earth, earth has devolved, in a sense, to a more primordial state, forcing mankind to leave. One thousand years after this exodus, the planet has built up defense mechanisms so as to prevent the return of its previous human inhabitants. It might be said that nature reacted this way because it perceived humans as a threat to its survival.

“Devolved”? “Primordial state”? Look at the trailer. It’s a lush planet thick with plant and animal life, nothing to force people out. Except, of course, the bizarre hint that there are rapid — really rapid — weather changes (I won’t call it “climate”), in which you can be running through a temperate forest and suddenly a tree will freeze. Yeah, right. As for the teleological rationale, just gag it, goofballs.

Given the backing behind it, the extravagantly expensive Will Smith, the fact that he’s using it as a vehicle to give his son star billing, the horrible director, and the hints of bad science in the trailer, I’m going to call this one right now: it’s going to suck. It will be shiny and glossy and have lots of CGI, but it will suck hard.

I saw Iron Man 3 last night, and let me just say…I am so tired of SF movies that resolve all of their conflicts with a big battle with the baddies, preferably featuring huge explosions and impossible physics. This one is going to up the ante with idiot biology added to the profit-making mix.

They asked if I wanted to interview any of the scientists or writers involved. I don’t think so.

Although a conversation with Ray Kurzweil could be…fun.

Skeptech: help me!

Miri is justifiably enthused about Skeptech, which has just announced its schedule. It’s full of cool stuff and lots of interesting people — you should go if you’re anywhere near the Twin Cities. It’s free, it runs 5-7 April, and I’ll be there the whole weekend.

But I have a sad admission. I’m on the schedule. Look at my name. Look at my topic. TBD. Oh, sure, I’m in good company: Maggie Koerth-Baker is also TBD. But I have to fix that, and I’m planning to do that this week, since I’m staying home for Spring Break. So help me out, people! What should I talk about?

I’m also working up my Seattle talk, which is slowly congealing. I’m going to talk about scientific and atheistic ethics there, and the message isn’t hopeful: I’m going to discuss our woeful failures, and suggest that morality ain’t gonna be found in a test tube. But there’ll also be some optimism for how broadening our foundations to encompass humanist values can compensate.

Now I could do that talk at Skeptech, too, which would simplify things. But I’ve also been considering some other possibilities. Let me bounce a few ideas around here, you can tell me what sucks and what sounds fun.

  • A realistic look at transhumanism. What biology and the evidence of evolutionary history say about it (with some swipes at that clueless hack, Kurzweil, but also some talk about the neglect of developmental ideas by most transhumanists).

  • Science and the internet. What scientists really ought to do with blogs, social media, open source publishing — where we’re going wrong, where we’re falling down on the job, where we’re succeeding.

  • The coming apocalypse. It’s not likely to be a sudden catastrophe, and it will make a lousy movie. It will be death by a thousand little cuts…but that means a thousand little band-aids might be the best strategy for staving it off. (A related panel is already on the schedule.)

  • The biology century. The 19th century was all about chemistry; the 20th was physics. The 21st will see a surge of biological innovation. What will the equivalent of the atom bomb be? What will be our flying car?

  • Or something completely different.

As you can tell, I looked at the schedule and noticed a dearth of science talks so far (Jen McCreight is also TBD; maybe she’ll help fill the gap), so I’m leaning sciencey, sort of science-fictioney even. If you’re going, or even if you aren’t, tell me what you think would be interesting and relevant.

What’s the matter with TED?

I enjoy many TED talks. I especially enjoy them because I only watch them when someone else recommends one to me — I’ve got filters in place. The one time I tried to sit down and go through a couple of random TED talks, I was terribly disappointed.

Carl Zimmer explains the problem with TED.

The problem, I think, lies in TED's basic format. In effect, you're meant to feel as if you're receiving a revelation. TED speakers tend to open up their talks like sales pitches, trying to arouse your interest in what they are about to say. They are promising to rock your world, even if they're only talking about mushrooms.

So the talks have to feel new, and they have to sound as if they have huge implications. A speaker can achieve these goals in the 18 minutes afforded by TED, but there isn't much time left over to actually make a case — to present a coherent argument, to offer persuasive evidence, to address the questions that any skeptical audience should ask. In the best TED talks, it just so happens that the speaker is the sort of person you can trust to deliver a talk that comports with the actual science. But the system can easily be gamed.

In some cases, people get invited to talk about science thanks to their sudden appearance in the news, accompanied by flashy headlines. Exhibit A: Felisa Wolfe-Simon, who claimed in late 2010 to have discovered bacteria that could live on arsenic and promised that the discovery would change textbooks forever. When challenged by scientific critics, she announced to reporters like myself that she would only discuss her work in peer-reviewed journals. Three months later, she was talking at TED.

The problem can get even more serious in TED's new franchise, TEDx, which is popping up in cities around the world. Again, some TEDx talks are great. Caltech physicist (and DtU editor) Sean Carroll talking about cosmology? Whatever you've got, I'll take. But some guy ranting about his grand unified theory that he promises will be a source of unlimited energy to fuel the planet? Well, just see how far you can get through this TEDx talk before you get loaded into an ambulance with an aneurysm.

So there’s the problem: audiences pay a shocking amount of money to attend a TED session, and what they expect is an epiphany delivered every 20 minutes. That’s not how science works. You know that every excellent talk at TED is backed by 10 or 20 years of incremental work, distilled down to just the conclusion. Most of the bad talks at TED are people trying to distort the methodical approach of science into a flash of genius, and failing. Some of the bad talks are simply cranks babbling; the example Zimmer gives is a perfect illustration of that. Cranks are really good at making grandiose claims, and in a setting in which no data has to be shown and no questions can be asked, pseudoscience shines (and by the way, what is it with kooks and swirling donuts?)

Another odd connection: I wonder whether this tendency to inflate the baby steps of science into grand, world-changing leaps contributes to, or is fueled by, Kurzweilian transhumanism and its exaggerated sense of progress in science.