Today’s exciting game will be played with quotes from SoftBank CEO Masayoshi Son, given at the Mobile World Congress in Barcelona. A tech CEO? This will be a target-rich environment. You can expect a flurry of ambitious exaggerations from this one!
Players at home, you know what to do: get your buzzers ready, slap that big red button, and be prepared to give a succinct summary of what exactly was wrong with the statement. If you are chosen, you stand a chance to win fabulous prizes.
Are you ready? Brace yourselves, here it comes:
In 30 years, the singularity
Whoa! That was quick! The switchboard lit up like a Christmas tree with that one. Too easy?
OK, first answer is from Ronald in Ohio, who takes exception to the 30-year claim. No, I’m sorry, Ronald, you do not win a prize. That number is actually correct. As we all know, the singularity is always 30 years away.
Our next caller is Darlene in Seattle, who asks, “What the heck is a singularity?” and — judges, what is your call on that? — the judges say yes! That is a damn good question! It pierces right to the heart of the issue! The singularity is a quasi-mystical boojum invoked in place of the idea of “heaven”, an idea that makes many technocrats uncomfortable because it is too unsciencey.
But poor Son, we didn’t even finish his quote. Here’s the rest:
will happen and artificial intelligence in all the smart devices and robots will exceed human intelligence.
Ouch! Our board lit up so bright that the room lights flickered and dimmed! Let’s take…caller #1274. Vonda in Florida, what’s your criticism?
“Thanks for taking my call, PZ. I’ve been trying to get through for years, and this is my first time on.”
Great, Vonda. And the flaw you spotted is…?
“Well, there are a couple: one is that he can’t define ‘human intelligence’, and another is that he can’t possibly define it as a single scalar on one axis, such that you could speak of one intelligence exceeding another.”
Excellent, Vonda! Judges? Yes, the judges agree! Let’s move on with this juicy speech.
Just to give you a hint, Son is about to try to answer Vonda’s question:
Son says that by 2047, a single computer chip will have an IQ of 10,000 — far surpassing the most intelligent people in the world.
Yikes! The responses are pouring in —
Dmitri in Siberia: “…absurd reductionism. You can’t assign intelligence a single number…”
Kim in Korea: “…what kind of IQ test can generate scores that high…”
Jim in Manitoba: “…if you can measure the IQ of a computer, tell me what the IQ of a Dell Windows 10 machine is right now…”
Rudy in New South Wales: “…God won’t let a computer get that smart…”
Andrea in New York: “…IQ tests are designed to test human minds…”
OK! Except for Rudy, you all win!
I’m going to let Son complete his thought. Don’t buzz in on this one, gang, we’re just going to let him finish digging that hole already.
Where the greatest geniuses of the human race have had IQs of about 200, Son says, within 30 years a single computer chip will have an IQ of 10,000. “What should we call it,” he asks. “Superintelligence. That is an intelligence beyond people’s imagination [no matter] how smart they are. But in 30 years I believe this is going to become a reality.”
I know. It’s embarrassing. The man is a CEO and he doesn’t understand what IQ is, and thinks that sticking a “super” prefix on something makes it clever or informative. Maybe he’s just hoping that if he lives another 30 years, he might learn something.
Let’s go on. This one is for scoring:
Son built this prediction by comparing the number of neurons in a brain to the number of transistors.
Uh-oh. The Big Board is on fire. Literally on fire. Hold those calls!
He builds the comparison by pointing out that both systems are binary, and work by turning on and off.
Oh, christ, we’ve got a thousand enraged neuroscientists trying to get through. Watch out! Those cables are shorting out! Get the studio audience out of here!
According to his predictions, the number of transistors in a computer chip will surpass the number of neurons in a human brain by 2018. He is using 30 billion as the number of neurons, which is lower than the 86 billion that is estimated right now, but Son says he isn’t worried about being exactly right on that number.
Oh god. He actually said he isn’t worried about being exactly right on the number? With this audience? Cut the power. Cut the power! Call emergency services!
Wait, what’s that loud rumbling sound I’m hearing from the bowels of the building? The generators? GET OU…
Corey Fisher says
Transistor counts will make computers smarter than us? Well, that sounds reasonable. Let’s see what supercomputers look like right now…
http://www.anandtech.com/show/6421/inside-the-titan-supercomputer-299k-amd-x86-cores-and-186k-nvidia-gpu-cores
Oh, okay. 45 trillion transistors from putting it all together. Wow, the Singularity is here already! We’ll have to start pulling out our brains and smushing multiple brains’ worth of neurons into a single head to keep up.
slithey tove (twas brillig (stevem)) says
Mathematicians say the singularity is well defined in maths. Imagine plotting the curve where the Y value is the reciprocal of the X value: at X = 1, Y = 1; at X = 1/2, Y = 2; … as X approaches 0, Y goes vertical, and at X = 0, Y is a “singularity”. A lot of theoretical physics includes “tricks” to get values by integrating along circles looping around singularities of the field equations.
Moore (of Moore’s Law) was noting the history of transistor density per chip divided by cost, and simple extrapolation produces a singularity-like curve if you fail to consider the possibility of a sigmoid curve, where growth starts out looking exponential, then gradually slows down and flattens out. I suspect computer growth is more sigmoidal than purely exponential. One must consider the complete set of factors involved, and not just the first couple of orders.
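To illustrate with entirely made-up numbers (a sketch of the curve-fitting point only, not real transistor data): an exponential fit and a logistic (sigmoid) fit with the same starting value and growth rate are nearly indistinguishable early on, then wildly different by the 30-year mark.

```python
import math

def exponential(t, p0=1.0, r=0.5):
    """Naive extrapolation: growth never slows."""
    return p0 * math.exp(r * t)

def logistic(t, p0=1.0, cap=1000.0, r=0.5):
    """Sigmoid growth: same start and rate, but with a carrying capacity."""
    return cap / (1.0 + (cap / p0 - 1.0) * math.exp(-r * t))

# Early on the two curves are nearly indistinguishable; by t = 30 the
# naive extrapolation has exploded while the sigmoid has flattened at cap.
for t in (0, 5, 10, 20, 30):
    print(f"t={t:2d}  exponential={exponential(t):12.1f}  logistic={logistic(t):8.1f}")
```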
*breath* /pedantry
*gulp*
Fun game, “spot the error”. Hope the station fixes its difficulties soon. Standing by.
dhabecker says
What! Wait! A Game? Why didn’t I get notified?
Awww, it’s just a joke; took a while for my neurons to kick in; stupid little buggers.
A Happy Atheist at play; good stuff.
Siobhan says
-9,001
killyosaur says
http://www.thebestschools.org/magazine/limits-of-modern-ai/ Great article about how AI may never achieve what we think of as “Human Intelligence” or even intelligence. All the same complaints as mentioned by PZ, just a bit more historical context :)
rietpluim says
Geez, that argument is so bad, even Rudy from New South Wales wins!
birgerjohansson says
While I cannot rule out a future series of breakthroughs in the understanding of human cognition and its neural correlate, we are at present like a Victorian researcher wondering if he (or she) can use the kerosene lamp to power a prototype laser.
springa73 says
I always thought that a single neuron was a lot more complex than a single transistor – for example, that it could send and receive many signals simultaneously.
If artificial intelligence is possible, I don’t think it will happen until we know a lot more about biological intelligence. Or maybe not – we could end up creating a type of intelligence in a machine that works quite differently than biological intelligence. In any case, though, I wouldn’t trust any prediction of exactly how long it will take.
jaybee says
There isn’t 30 years of scaling left, in addition to all the other problems with his argument.
Lithography scaling has been slowing down, and is very close to hitting a wall that could be shifted only by moving to a new, as yet undeveloped, technology. Upgrades to the production lines get more and more expensive each generation. Also, leakage current is growing dramatically as the oxide layer gets thinner and thinner. And after a few more shrinks, we’ll be at the point where quantum effects start to dominate, making the circuits too unreliable to use, even with error correction techniques.
whywhywhy says
#4
I always thought it was equal to the poor schmuck who is using it.
Pierce R. Butler says
jaybee @ # 9: … we’ll be at the point where quantum effects start to dominate…
Right in synch with all those predictions of quantum computing in thirty years, too!
LykeX says
I don’t get it. Why would we put advanced AI in an assembly line robot or a smart phone? Is he supposing that AIs will somehow multiply and spread?
A Masked Avenger says
That part isn’t that farfetched. Eventually we’ll be using code that is evolved to work (literally, as in genetic algorithms), code we don’t fully understand; we just know it works. That code would need to be copied around, because we wouldn’t be able to extract the key bits to use in, say, a mobile app.
So an AI that diagnoses your car, or that figures out the route you’ll enjoy the most (as opposed to the shortest, the quickest, or the one with the fewest turns), is likely to live all sorts of weird places. (Or, which is mostly equivalent, clients will live all sorts of weird places that act as the AI’s avatars. Think Siri.)
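To make “evolved to work” concrete, here is a minimal genetic-algorithm sketch; the target bit pattern and the fitness function are invented purely for illustration. The surviving genome is selected, not designed, which is the sense in which evolved code just works without anyone knowing why.

```python
import random

BITS = 8
TARGET = 0b10110101  # stand-in for "behaviour that happens to work"

def fitness(genome: int) -> int:
    """Score a genome by how many bits match the target behaviour."""
    return BITS - bin(genome ^ TARGET).count("1")

def mutate(genome: int) -> int:
    """Flip a single random bit."""
    return genome ^ (1 << random.randrange(BITS))

# Evolve a tiny population: keep the fittest half, refill with mutants.
population = [random.getrandbits(BITS) for _ in range(8)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == BITS:
        break
    population = population[:4] + [mutate(g) for g in population[:4]]

print(f"generation {generation}: best genome {population[0]:08b}")
# Nobody wrote that bit pattern by hand; to reuse it elsewhere, you copy
# the whole genome around, just as described above.
```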
jensmith says
Son says, within 30 years, a single computer chip will have an IQ of 10,000. “What should we call it,” he asks.
I believe it’s called a “Multivac”
colinday says
An IQ of 10,000? What was Data’s IQ? Q claimed an IQ of only 2,005.
Mortal Q (about 2:50 in)
Terska says
What he really means is “give me your money suckers!”
This machine will be so smart it will be called a Trump.
richardelguru says
“all the smart devices and robots will exceed human intelligence”
For some humans I suspect they already do…..
wzrd1 says
So many errors in such a small conceptualization of what a neuron does.
A transistor is either on or off; it has a single input, either base current or gate voltage. One does not modulate firing; it fires or it does not.
How many different neurotransmitters are there again? Neuromodulators? How precisely do they influence neural firing?
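For a concrete contrast, here is a toy leaky integrate-and-fire model, itself only a cartoon of a real neuron, with arbitrary parameters: graded input accumulates and leaks away over time, so the firing *rate* varies continuously with input strength, something a lone on/off switch cannot express.

```python
def lif_spikes(input_current, steps=100, leak=0.9, threshold=1.0):
    """Toy leaky integrate-and-fire neuron; returns spike count over `steps`.

    Each step, the membrane potential decays by `leak` and accumulates the
    graded input; a spike fires, and the potential resets, at threshold.
    """
    potential, spikes = 0.0, 0
    for _ in range(steps):
        potential = potential * leak + input_current
        if potential >= threshold:
            spikes += 1
            potential = 0.0  # reset after firing
    return spikes

# One cell, three graded inputs, three different firing rates. Real neurons
# layer thousands of synapses, neurotransmitters, and neuromodulators on top.
for current in (0.05, 0.15, 0.40):
    print(f"input={current:.2f}: {lif_spikes(current)} spikes in 100 steps")
```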
But, the singularity shares one thing with gainful fusion for energy production: they’ve always been 30 years in the future and shall always remain so.
Thus far, modeled successfully to some point after proton decay has been long forgotten.
unclefrogy says
As I read this and the comments, a thought occurred.
People want super-smart robots (machines) and other devices so that the machines can do stuff for them, stuff that they would otherwise have to do for themselves: the perfect slave, without any human rights or unions or political opinions.
Then I thought of “the singularity”: does that thinking also pertain to it?
Yes, in this way: the singularity means that we can upload our minds into a machine, thereby letting a machine take over our living while we just exist as thought.
Neither speaks very well for our self-esteem, and both seem to be about supporting our laziness, in this case at the expense of machines.
The latter seems the laziest of all: not even willing to make the effort to live a normal physical life, preferring an effortless machine life instead.
The idea is as ridiculous as heaven and hell anyway.
uncle frogy
wzrd1 says
Well, I have a robot vacuum cleaner to do the light clean-up of day-to-day dust.
But, it’s far from very bright. I have pocket calculators with more processor power.
But, if a machine ever became “bright” enough to approach any type of sentience, I’d have major qualms over slavery.
Here’s the fun part, how would one know when a machine did reach that level of capability? It’d be substantially different from us by its very nature. Would we be intelligent enough to recognize a new intelligent form of machine life?
John Morales says
wzrd1:
You should read up on Alan Turing.
You ever (officially) own a dog?
(If so, did you ever doubt it was sentient? If so, did you ever have major qualms over slavery?)
chigau (ever-elliptical) says
無
unclefrogy says
Dogs have the distinct characteristic of liking the company of humans, so much so that it is thought that they domesticated themselves.
They also share our mammalian heritage: the need to eat, sleep, process bodily waste, and reproduce sexually.
No machine I am aware of does any of those things. Its awareness is confined by what inputs it is given. I think it would be very hard to recognize whether it was self-aware as we know it, unless we wrote that into its instruction set.
I heard a report the other day on NPR about training bees to do complex actions.
So what is sentience, and how do we recognize it when it is going to be so alien to us?
uncle frogy
blf says
No, otherwise transistor-based analogue amplifiers would rather be a problem. When used in digital circuits, a transistor is “on” or “off”, a configuration often known as a “switch”.
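In textbook bipolar-transistor terms (a generic sketch, not specific to any chip under discussion): in the forward-active region the device amplifies a continuous input, and only when driven into saturation or cutoff does it act as the binary switch digital logic relies on.

```latex
% Forward-active region (analogue amplifier): collector current follows
% base current continuously, scaled by the current gain \beta
I_C = \beta \, I_B

% Saturation ("on") and cutoff ("off"): the two states digital logic uses
V_{CE} \approx V_{CE(\mathrm{sat})} \ \text{(on)}, \qquad I_C \approx 0 \ \text{(off)}
```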
alkisvonidas says
Given how easy these people think “intelligence” is, they must deem the “greatest geniuses” in history to have been essentially morons.
Instant intelligence, just add chips! There’s nothing to it! Neural architecture, that’s irrelevant! Learning, training, experience, trial and error, that’s for wussies! Captain, I’ve calculated our chances of success to 16 significant digits. Fascinating! Live long* and prosper.
*well, at least another 30 years.
machintelligence says
It will hire a lawyer and take you to court. You have been warned. Govern yourselves accordingly.
howardappel says
Thanks for the link to the TBS article. It was most enlightening and enjoyable to read, even if I don’t understand 99% of the math.
Also, I used to post on your site long ago and then, for some reason, stopped visiting. My loss. I truly missed the Friday Cephalopod feature, as well as your insightful posts. Keep up the good fight and RESIST.
wzrd1 says
@John Morales, nope; beyond the US legal sense of the term, I’ve never owned a dog. Any dogs in our household were partners and companions.
Now, we have a cat, and we well know that *he* owns us. He orders us to turn on the sink so that he can get a drink of water, orders his dinner, orders us in no uncertain terms when he wants grooming, etc. But he’s actually still a partner. He also keeps down the population of infiltrating anole lizards and the occasional mouse.
But then, the cat’s breed is Russian Blue. Our previous, now deceased cat was a Bengal.
No animal that we’ve accepted into our house is a slave, but a partner. Protection is always voluntarily mutual.
When one treats any creature with any degree of sentience with the respect due a sentient creature, loyalty is mutually earned. :)
That does give me qualms at times, typically mealtime, but I am an obligate omnivore. And my income prohibits supplementation that would support a totally vegetarian diet.
Now, your question would’ve challenged me if you asked about chickens, even if I don’t currently have a hen house. ;)
Chickens, I question their sentience, and I would require massive evidence of sentience in a domestic turkey. The latter is such a dull creature that I watched one try to eat the light off of a man’s cigarette, only to burn its tongue, then try to pick up the still-burning ember and attempt to eat it, three times.
wzrd1 says
@uncle frogy, that was my entire point: a lack of context.
I would share no common context with an AI. I have a body; it has none. I have no purpose in my design; it does. I have organic needs; it does not. I have organic drives and instincts; it would have different drives and no instincts.
Just establishing meaningful communications would be problematic.
As in: we’re in a discussion on a topic of interest, I have to go to the bathroom, and the AI may very well fail to comprehend that the very real need not to shit my pants takes priority over continuing the conversation, let alone the need for sleep, meals, or even to be with one’s family.
It’d actually be a bit easier with multiple interacting AIs, as the concept of a friend might occur to them.
Otherwise, we’d be, potentially, an outside context problem.
wzrd1 says
@blf, re: the transistor amplifier vs. switch correction.
Thanks; it was late, and I was trained during the analogue-to-digital transition.
I see a transistor as both, depending upon how one biases the device.
One can use a knife as a pry bar; rarely in technology does one find a device that does both equally well, depending upon its support.
Well, unless one is speaking of levers… ;)
wzrd1 says
@alkisvonidas, yeah, rookie errors.
Just add CPU and memory, it’ll work.
Folks, go out and buy the motherboard of your choice. Pick up as much memory (RAM) as the bloody things can take. Pick up some power supplies. Get a ground wire and tie together the grounds of all the power supplies and motherboards (or get cases and strap the cases together) to avoid ground-loop electrocution, if your grand design gets absurdly large.
Link the USB ports (or any other bus or port of your choice) for bidirectional communications.
Light the lot up and wait.
I’ll wait for it to communicate intelligently or even to actually communicate.
Processing power and memory are nothing without a program. A program is nothing without the ability to perform its intended function. Communications are nothing without the ability to communicate intelligently, be it human communication or inter-process communication.
I have a cluster of computers, but it acts as a cluster only when processing certain chosen videos or when compiling programs. The rest of the time, they’re individual computers, each devoted to its own general and specific tasks.
Back in school, I had a few friends who were not in my main group of friends. My largest group of friends were in our LEP (Learning Enrichment Program); the school incessantly offered me that carrot, to little delight of mine given the direction they chose. I elected a different path, one that had major challenges, repeated career changes, even major technology changes. The LEP was for those who, by the school district’s own strict guidelines, achieved certain scores. I fell below the cutoff due to a lack of turning in homework, but still did well on tests, typically scoring 99%; if I fell to 95%, I was upset and re-studied.
A bit more background: back in 7th grade, I took standardized tests. Due to some irregularities, I was scheduled for a retest, and one part was administered via a metered, overglorified filmstrip-reading machine.
The first thing I did was set it to a line at a time. The second thing I did was turn it up to full speed.
I aced the test; that, along with my adjustments and reading rates, was noted, as was the fact that I had previously been severely dyslexic.
So, hence the pressure.
I had already chosen a technical career path.
At a technical level, I fully comprehend theory. I also fully comprehend the technical practice.
While theory still tempts me to go and get my degree, I still prefer the far more complex task of comprehending what is going on, how it is going on, and why, and then addressing it.
Now I’m in information security, where I’ve examined an after-the-fact attack data set, gone down to the packet level, worked out what was going on, captured the files being transferred, noted the method, and prepared a final report before our CIRT did.
What the CIRT finally managed, five years later, was to find the source of the incursion: something outside the reach of both our tool sets, finally gathered in by a new set of tools.
In the real world, that means I know how AND, OR, and XOR work, in circuitry terms and in logical terms. I do Karnaugh maps in my head, and Resultant Set of Policy in an enterprise environment, also in my head.
I express myself poorly online, but well in person.
And if I rub you the wrong way, consider: I think differently. Even *I* am worried about interacting with an AI that would obviously think differently, when I myself am, to be generous, problematic to parse at times…
Seriously, do you think *you* would do better?
I’ve dealt with a dozen different cultures. I’ve dealt with pure logic circuits, even to the point of programming a computer with flip switches. I’ve dealt with networks and do IP netmasks in my head, and while I’m infamously bad with basic mathematics, I know sets and subsets and their logic.
If we’re having problems (even alongside my significantly different experience base), how in the hell do you think that you’ll communicate with an entirely foreign-governed mental process and manage to communicate intelligently?
When even *I*, who work at this each and every day on the job, and also every day at home in my lab, can only hope to communicate with such an entirely foreign entity?
Figuring out common ground comes first. That’s common with humans, and it’ll be common there too, but only *after* you establish common protocols, and likely intermixed ones, as it may not have emotional contexts, or may have entirely different ones.
Why is that so hard to figure out?
wzrd1 says
Oh wait, I suspect one thing.
Trying to figure out a newborn’s cries of discomfort.
Something that this speaker found discomfiting in the extreme, until he found Momma, who knew the sounds and gestures.
Now, we know those few things; an AI, not at all.
John Morales says
wzrd1, I appreciate your response @28, and clearly you got my point.
In passing, I draw your attention to the distinction between sentience and sapience, because it’s clearly the latter you use as a determinant.