John Wilkins talks about AI and the transhumanist agenda. I think we agree that a lot of the artificial intelligence and brain-uploading bozos are living in a fantasy.
Here’s what bugs me about a lot of AI PR: its boosters keep going on about building human-like intelligences. The Turing Test, after all, is explicitly about creating a program that mimics human responses.
But here’s what I am: I am a conglomeration of cells evolved into a functionally specialized organic network. I have a brain that is sloshing around in chemical signals that represent demands from all of my other organs; that brain’s whole purpose is tangled into processing complex electrical inputs from the environment. The “I” that I am writing about is a thin veneer, an illusion of awareness layered on top of a structure that is mostly dedicated to keeping the chemical makeup of the giant meat robot constant, running the machinery of the circulatory, pulmonary, digestive, and reproductive systems.
When I woke up this morning, what I had to deal with were biological impulses. Gotta pee, thirsty, gotta drink: ooh, ankle feels a little creaky this morning, what’s with the mild backache? Hair’s a mess, gotta wash my face, get the crud out of my eyes. Then there were the external demands: cat wants her breakfast, email is beeping, oh, look, laundry needs to be sorted and put away. Internal obligations: gotta write up the notes from yesterday’s long meeting, gotta blog.
You can’t generate a human-like intelligence without all the human-like stimuli. An AI is a chip in a box on a rack. You aren’t going to get something like us out of it without also somehow simulating all the messy, sloppy, gooey biological bits, and why should you? We’ve already got lots of human-like intelligences walking about, and they come ready-equipped with all the organic concerns tacked on.
I don’t have a problem with the idea of artificial intelligence; I think it’s achievable, actually…but I do think it’s damned silly to assume that the only kind of intelligence that counts is the fuzzy, survival-oriented kind that wants to reproduce itself and that is trying to do a dozen things at once, poorly. An AI should evolve to do what a chip in a box on a rack needs to do, and that isn’t going to be what sloshy three-pound blobs of meat stewing in chemicals spewed by gonads want to do.
Wilkins also talks about the silly business of trying to upload yourself into a computer. It can’t be done; we’re not just a pattern of electrical signals, but the whole meaty sausage of guts and nerves and bones and muscle. He makes the point that you can’t possibly build a perfect simulacrum of anything: if you did replace cells with little nanomachines, they would lack the properties of biological cells and have other properties of little machines, so the whole endeavor is made futile by the physical impossibility of achieving perfect identity.
I agree, but I’ll add that if we could make perfect replicas of cells and organs in software, what would be the point? You would have to replicate the shortcomings as well as the parts I like, or it wouldn’t be me. Really, who needs another schlubby middle-aged guy with creaking joints and a fragile heart and a brain that needs caffeine to jolt it awake? If I am going to be uploaded into a cybernetic body, it will be 6½ feet tall, slim and muscular, clear-eyed and always energetic and alert…oh, wait. And how will that be “me”? If we replace the candle that sputters and flickers and makes warm red light dancing on the table with an LED that glows brightly and intensely and fixedly, is the illumination still the same?
Note that I’m not judging which is better, but simply that they are different. If your goal is to upload yourself into a machine, it won’t be “yourself” if you change everything about it.