This Kevin Kelly dude has written a summary that I find fully compatible with the biology. Read the whole thing — it’s long, but it starts with a short summary that is easily digested.
Here are the orthodox, and flawed, premises of a lot of AI speculation.
- Artificial intelligence is already getting smarter than us, at an exponential rate.
- We’ll make AIs into a general purpose intelligence, like our own.
- We can make human intelligence in silicon.
- Intelligence can be expanded without limit.
- Once we have exploding superintelligence it can solve most of our problems.
That’s an accurate summary of the typical tech dudebro. Read a Ray Kurzweil book; check out the YouTube chatter about AI; look at where venture capital money is going; read some SF or watch a movie about AI. These really are the default assumptions that allow people to think AI is a terrible threat that is simultaneously going to lead to the Singularity and SkyNet. I think (hope) that most real AI researchers aren’t sunk into this nonsense, and are probably more aware of the genuine concerns and limitations of the field, just as most biologists roll their eyes at the magic molecular biology we see portrayed on TV.
And here are Kelly’s summary rebuttals:
- Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
- Humans do not have general purpose minds, and neither will AIs.
- Emulation of human thinking in other media will be constrained by cost.
- Dimensions of intelligence are not infinite.
- Intelligences are only one factor in progress.
My own comments, taking those points in order:
The whole concept of IQ is a crime against humanity. It may once have been an interesting, tentative hypothesis (although even at the beginning it was a tool to demean people who weren’t exactly like English/American psychometricians), but it has long outlived its utility and is now only a blunt instrument for hammering people into a simple linear mold. It’s also more popular than ever with racists.
The funny thing about this point is that the same people who think IQ is the bee’s knees also think that a huge inventory of attitudes and abilities and potential is hard-coded into us. Their idea of humanity is inflexible and the opposite of general purpose.
Yeah, why? Why would we want a computer that can fall in love, get angry, crave chocolate donuts, have hobbies? We’d have to intentionally shape the computer mind to have similar predilections to the minds of apes with sloppy chemistry. This might be an interesting but entirely non-trivial exercise for computer scientists, but how are you going to get it to pay for itself?
One species on earth has human-like intelligence, and it took 4 billion years (or 500 million, if you’d rather start the clock at the emergence of complex multicellular life) of evolution to get here. Even in our lineage the increase hasn’t been linear, but in short, infrequent steps. Either intelligence beyond a certain point confers no particular advantage, or increasing intelligence is more difficult and has a lot of tradeoffs.
Ah, the ideal of the Vulcan Spock. A lot of people — including a painfully large fraction of the atheist population — have this idea that the best role model is someone emotionless and robot-like, with a calculator-like intelligence. If only we could all weigh all the variables, we’d all come up with the same answer, because values and emotions are never part of the equation.
It’s a longish article at 5,000 words, but in comparison to that 40,000-word abomination on AI from WaitButWhy it’s a reasonable read, and more importantly, in contrast to that one, it’s actually right.