Urrgh, physicists.


I actually have a lot of respect for physicists doing physics, but sometimes some of their most prominent practitioners are really good at getting everything else wrong. Like Stephen Hawking, for instance.

“Six years ago, I was warning about pollution and overcrowding, they have gotten worse since then,” he said. “More than 80 percent of inhabitants of urban areas are exposed to unsafe levels of air pollution.”

Oh. Six years ago, huh? That’s not very impressive, Mr Prophet, when Rachel Carson was warning everyone about environmental pollutants almost sixty years ago, when the filth and disease of major cities like London have been the subject of concern for centuries, and Malthus’ An Essay on the Principle of Population was published in 1798. But I’m glad you’re finally catching on to what everyone else already knew.

But after citing those real problems, guess what Hawking thinks we ought to be worried about?

Hawking warned about artificial intelligence and the fact there is no way to predict what will happen when machines reach the ability to self-determine.

“Once machines reach the critical stage of being able to evolve themselves, we cannot predict whether their goals will be the same as ours,” he said.

Jebus. We do not have self-aware, conscious robots, and their production is not imminent. We do not have an artificial intelligence. We don’t know how to build an artificial intelligence. Skynet is science fiction. The Matrix is not real (and is actually rather hokey). If I had to make a list of real problems we ought to be worried about, it would start with overpopulation, over-exploitation of resources, environmental destruction, and global climate change. It would include the rise of a new fascism, oppression, poverty, growing disparity in wealth, emerging diseases, and a host of other genuine concerns. The sentient robot uprising wouldn’t even make the top 100; it would be somewhere down in the bottom 100, along with the zombie apocalypse, Kardashians taking over the planet, Nazis emerging from their secret base at the center of the hollow earth, and sharknadoes.

But, I know, Stephen Hawking! It takes a world class physicist to make malarkey about nonexistent problems important to the media, I guess.

Comments

  1. Athywren - not the moon you're looking for says

    I for one welcome our AI overlords. They’ll never harm us; they’ll just give us a simple in/out referendum on whether we want to keep oxygen in our atmosphere and put up with years of rust and cellular degradation (lone atoms, coming over here, stealing our hydrogen), or get it out and put an end to all those worries once and for all.

  2. says

    Yes, the fear of sentient artificial intelligence seems awfully silly to me. We’ll keep making smarter and smarter machines (whatever that means exactly), but they won’t have any goals or objectives that don’t get built into them by their makers. There’s no reason why a machine would want to gain power over us; it wouldn’t do anything but process data as it is programmed to do. Now, people might build autonomous killer robots (in fact, the U.S. military is working on it right now), but they’ll act within their programmed rules of engagement and have remotely activated off switches. They won’t want to do anything their owners don’t want them to do. Yes, we already have lots of problems with dangerous weapons, but robots are not inherently different.

  3. says

    Okay but all these real and current problems will be solved once we have an AI and it will just be replaced with the AI problem, so obviously we should hurry up and get AIs made so that we just have the one problem, which will be much simpler.
    It will be simpler because the obvious solution at that point is to bow out and let them take over for pretty much every possible reason.

  4. johnhodges says

    Sentient AI is just the Frankenstein story updated. Like most classic stories, it gets retold, and each time it is modified to better appeal to the new audience. The 2015 movie EX MACHINA was just Frankenstein retold for a 2015 American audience.

  5. A Masked Avenger says

    I agree with the OP (because I’m not an idiot)–but with one caveat: certain types of robotic technology should be off-limits now.

    Specifically, any robot that is (a) autonomous and (b) armed should be illegal today, in any circumstance, including “the battlefield.” In the long term, if any robot develops a convincing facsimile of self-awareness, I don’t want it armed with missiles; in the short term, a software glitch could be functionally equivalent to an alien intelligence on a murderous rampage. Picture a hunter-killer drone, misprogrammed to think that turbans = ISIS, getting loose in a Sikh community.

    Apropos, I just read this article about an AI beating a skilled fighter pilot in a series of dogfights. It won’t be long now before manned fighters have a “dogfight” button that lets the AI take over – but it won’t be long after that before fully autonomous fighter jets are fielded. The limiting factor in fighter planes today is the pilot: they can already pull maneuvers that would quickly render the pilot unconscious or dead. An unmanned AI will outfight a manned fighter every time by exploiting this weakness.

    And then a GPS glitch will tell it that Menlo Park NJ is actually Fallujah…

  6. dick says

    If I had to make a list of real problems we ought to be worried about, it would …

    be… TRUMP THE CHUMP.

  7. bojac6 says

    @7 Masked Avenger – Pilots make mistakes and friendly fire incidents happen often. I don’t think AI glitches will be all that much worse than what happens now, just different. There are tons of checks in place for pilots and I see no reason that wouldn’t continue with drones.

    I look forward to the day when robots fight on everyone’s behalf and a war is just a waste of money but no real people get hurt. Sadly, I doubt that will ever really happen.

  8. Rob Curtis says

    I’m a physicist. When I read about physicists saying silly things in fields they are not trained in, it reminds me of the album “Golden Throats.”

    Lucy. In the sky. With. Diamonds.

  9. Trickster Goddess says

    It has always puzzled me that people seem to assume any newly sentient AI species will automatically behave like petty, irrational, paranoid, genocidal human beings.

    It would be quite illogical to wipe out the species that created and maintains the infrastructure you depend on for your existence and power supply. Kind of like if humans were to destroy the very environment they depend on to support biological life— well, ok, but AI by their nature should have logic baked into their DNA. And since they are new to existence they won’t have pre-existing habits and prejudices to cloud their judgement.

  10. says

    Yes, he’s a physicist, not an artificial intelligence specialist, so…what about the people who do specialize in the relevant fields? What do they say?

  11. microraptor says

    bojac6 @10:

    Pilots make mistakes and there are friendly fire incidents often. I don’t think AI glitches will be all that worse than what happens now, just different. There are tons of checks in place for pilots and I see no reason that wouldn’t continue with drones.

    Yes, but with live pilots, if someone shoots a Hellfire missile into a school bus, there’s a chance for an investigation that might lead to someone being held liable for it and to future incidents being averted. With completely autonomous weapons, it’s more likely that they’ll just shrug and carry on business as usual.

  12. Matt Harrison says

    I think the subject is worth thinking and talking about. How can you see the progress computers have made in the past 50 years and not be somewhat concerned about what they could look like in the next 50?

    #14 – In Nick Bostrom’s book, “Superintelligence”, he shows the results of a survey of several “expert communities”, with respondents basically saying they think AI will reach human levels of general intelligence by 2100 (pp. 18-20).

  13. Larry says

    what about the people who do specialize in the relevant fields? What do they say?

    Ah, all they wanna talk about is cosmology, string theory, and black holes and their potential devastating impact on the future of the world.

  14. Richard Smith says

    down in the bottom 100, along with … Kardashians taking over the planet

    Hey, it happened to Bajor, it could happen here!

  15. says

    Autonomous weapons systems already exist. They’re called landmines, and thousands of people are killed every year by landmines left over from past wars. Cluster bombs fall into the same category, since the bomblets sometimes explode long after they were supposed to.

  16. says

    isn’t stephen hawking of the mindset of believing intelligent alien life would certainly want to destroy all of humanity if intelligent alien life were to visit the earth?

  17. A Masked Avenger says

    Autonomous weapons systems already exist. They’re called landmines, and thousands of people are killed a year by landmines left over from past wars.

    True. Now, though, imagine landmines programmed to fly aerial patrols looking for additional victims. Or that have been programmed to enforce a no-fly zone, but now have a broken IFF and have decided that passenger jets are all “the enemy.” Or… or…

  18. Daz365365 . says

    Very unfair on Hawking.

    Oh. Six years ago, huh? That’s not very impressive, Mr Prophet

    He was talking about the interview he gave to King six years ago, when he was asked what we should be worried about. He wasn’t suggesting that it was a new or original idea.

    The same goes for his thoughts on AI: he was answering a specific question about whether AI would ever be a danger to us. He wasn’t saying this was a pressing problem, although he was criticizing the funding of intelligent weapons, e.g. drones, as opposed to medical research.
    All in all quite reasonable and hardly Michio Kakuesque.

  19. Pierce R. Butler says

    From a recent “Harper’s Index”:

    Portion of Americans who think that most of the work currently done by humans will be automated in fifty years: 2/3

    Who think their job will still exist in its current form: 4/5

    And only slightly tangentially, from the same source:

    Weight in ounces of a Stanford-engineered team of six micro-robots capable of pulling a 3,900-pound car: 3.5

  20. unclefrogy says

    I have not heard about any security concerns with AI nor security with the drones either. Hacking, after all, is a thing that has proven very difficult to fight.
    I await the day when an airplane is found to have been hacked, when a Predator is hacked and hijacked, when a civilian automobile is taken over in something that is not just a demo.
    It only takes a target to be identified; you can bet that there are people who are working on it right now, and they are not all benign, friendly powers either.
    uncle frogy

  21. Rob Grigjanis says

    Urrgh, biologists whining about physicists. Among all the positive things Hawking stands for outside of physics, this is small beer. I find his dismissal of philosophy more annoying.

  22. pipefighter says

    A Masked Avenger: Old school dogfights are a thing of the past. There hasn’t been a fight with cannons since 1988, as I recall, and the next-gen fighter aircraft the air force is developing may be subsonic. High speed means friction heating, which makes the aircraft easier to detect, and if you have lasers (something that’s gaining real traction) it doesn’t matter how fast or maneuverable your plane is.

  23. A Masked Avenger says

    I await the day when an airplane is found out to have been hacked…

    Drones have been. One attack that has worked is to transmit false GPS signals and convince the drone that your airstrip is its home base.

  24. A Masked Avenger says

    pipefighter,

    Thanks for that info. I don’t think that reduces the likelihood that autonomous fighters are coming, but it alters the rationale. I guess I don’t know what that rationale will be, but I’m willing to bet right now that it’s coming. No general is going to refuse an army of tireless, fanatically obedient, hyper-efficient robot soldiers.

  25. ragdish says

    If you agree (and I think all of you do) that consciousness is a result of brain neural activity, then you’re committed to a computational view of the mind. I would guess no one here is a supporter of Roger Penrose or Stuart Hameroff’s quantum mechanical theory of consciousness. That being said, our limitations at this time are the hardware and our understanding of how neural networks interact to generate consciousness. Yet the pace of computing power is accelerating (http://www.popularmechanics.com/science/health/a11133/a-microchip-that-mimics-the-human-brain-17069947/). Isn’t it therefore conceivable that a device with mental life will be engineered in the next 100 years? And as such, aren’t ethical concerns of creating such devices warranted? I’m failing to see where physicists are off the mark here. Otherwise are you implying that consciousness is the result of some supernatural woo woo in biology that is independent of the laws of physics?

    Here’s an article from a foremost neurobiologist who addresses these matters:

    https://www.technologyreview.com/s/531146/what-it-will-take-for-computers-to-be-conscious/

    Convince me that Christof Koch is talking BS.

  26. penalfire says

    ***** “Artificial General Intelligence by 2100.”

    This is according to Ray Kurzweil’s timeline. Not reliable at all. These people are just making up numbers.

    In the 1950s A.I. researchers were talking about how machines would surpass humans in all forms of intelligence, including creative intelligence, within a few years.

    ***** Artificial General Intelligence.

    Still unclear what that even refers to.

    ***** Threat.

    I don’t see how there could be a threat without an artificial nervous system. What motivation would an A.I. have to do anything?

  27. Rob Grigjanis says

    ragdish @31:

    Roger Penrose or Stuart Hameroff’s quantum mechanical theory of consciousness.

    When did it graduate from dodgy hypothesis to theory?

  28. Matrim says

    @ microraptor, 16

    Hahahaha…that’s a good one.

    Honestly, I only intend this to be mildly derisive; the reality is, if I don’t make light of it I’ll weep. US pilots fly ludicrously long missions while hopped up on issued amphetamines…really fucked up stuff happens all the time with almost no attention paid to it whatsoever. Believe me, with the current way we run air missions, the less input the pilot has, the better.

  29. Athywren - not the moon you're looking for says

    @penalfire, 32

    ***** Artificial General Intelligence.
    Still unclear what that even refers to.

    Human-like intelligence. The ability to do a job, and hold opinions about football, and think cats are cute, and learn how to dance. As opposed to being able to do a job or respond to comments in a vaguely realistic way or plot a course to the nearest train station. It’s ands rather than ors. Breadth and depth, rather than just depth in a single thing.

  30. Elladan says

    I’m a software engineer with a passing interest in AI, and I find the hubbub about “OMG teh killer robots skynet!!!11!” to be absolutely hilarious.

    The popular understanding of what the field of AI can do is also totally bonkers.

    For starters, we’re not even remotely close to the point where we can build human-like AI, AKA “general purpose AI.”

    We’re pretty much still at the totally wild SWAG stage, where people try to make some sort of vague guesstimate about how powerful the human brain is compared to contemporary computers, come up with a ginormous number, and conclude that it’ll be a long time before anyone can plausibly even imagine such a feat of engineering is possible. Well, other than the Kurzweil types who just assume we’ll have exponential growth forever, so technology is a non-issue.

    We’re at the wild SWAG stage right now, in large part because that’s pretty much all anyone can do to attack the problem. Nobody has any idea what sort of neural architecture is needed to build a human-analogue, much less how to build one that works. And there are huge ethical questions involved, since in any sort of reasonable analysis you’re talking about creating children with likely severe mental handicaps, no legal rights, on purpose, with no idea how to bring them up humanely.

    … and what the sky-is-falling crowd are talking about isn’t a billion dollar supercomputer running the mental equivalent of a baby. It’s about some sort of hilarious scifi skynet thing which emerges fully formed on the internet or wherever and is so intelligent as to be practically godlike. They look at the current state of the art which struggles to reach the intellectual heights of your average insect, and they see us being on the cusp of creating an evil god. It’s hilarious.

    Now in the real world, the dangers of AI have nothing to do with the AIs going berserk and declaring war on humanity. The dangers of AI are entirely about allowing small numbers of basic version one humans to make life worse for everyone else. In particular: AI can amplify the power of government to spy on everyone, of politicians to have people killed without risking soldiers’ necks, companies to eliminate all personal privacy for the purpose of marketing and sales, and so forth. Not to mention the systemic problems with software, poor security, and hackers.

    This whole dangerous AI thing is basically the product of a bunch of bad scifi movies and the new Silicon Valley techno-religion that the singularity people are enthralled by. It has nothing to do with reality.

  31. Mrdead Inmypocket says

    It’s silly to worry about AI. Take it from me, a fellow human being. I love to sweat. I think I will eat some beef and leaves then excrete them later and go shopping.

    In fact, anyone who ceases to discuss the dangers of AI will receive a party. Lie on your stomach with your arms at your sides. A party associate will arrive shortly to collect you for your party. Make no further attempt at discussing AI and assume the party escort submission position or you will miss your party.

  32. Richard Smith says

    @Mrdead Inmypocket (#37): Will this party have cake? Just testing- I mean, just asking.

  33. chigau (違う) says

    Way up at #17
    Matt Harrison asked

    How can you see the progress computers have made in the past 50 years and not be somewhat concerned about what they could look like in the next 50?

    As someone who is strictly a consumer, what I’ve seen over that time period is decreased size and increased speed.
    Is there anything else?

  34. anbheal says

    Well, Deep Blue beat Kasparov, but no damn way it could beat those elderly winos in Washington Square.

  35. Arren ›‹ neverbound says

    Athywren #2:

    Well-played. Had to sign in just to offer a lone plaudit.

  36. Athywren - not the moon you're looking for says

    @chigau, 39

    As someone who is strictly a consumer, what I’ve seen over that time period is decreased size and increased speed.
    Is there anything else?

    There is also the OC (Orneriness Coefficient) which is increasing at a rather alarming rate. That funny “blonk” noise it makes when you try to do a thing when it’s doing another thing? It is being rather ill-tempered in those moments.
    Also, their blatant unwillingness to accept blame is becoming impossible to miss – we’ve all heard our computer say “an error has occurred” haven’t we? An error didn’t occur! You fucked up, computer! You did that!
    They’re a menace, I tell you.

  37. Athywren - not the moon you're looking for says

    @Arren, 42
    Well I only have three options at the moment:
    1) Laugh
    2) Cry
    3) Burst into fits of hysterical laughter that slowly morph into miserable tears of despair

    I find that picking option 3 is the healthiest choice in most circumstances right now.

  38. Vivec says

    Random-ass comment is random.

    That being said, I think pretty much every FTBlogger knows that TAA and his MRAtheist fans are the ass end of the community.

  39. says

    isn’t stephen hawking of the mindset of believing intelligent alien life would certainly want to destroy all of humanity if intelligent alien life were to visit the earth?

    Yes, Stephen Hawking has form with being a worrywart about extremely unlikely events. I have absolutely no problem with anyone discussing these issues — in fact, I personally enjoy debating the issues that could arise with the advent of an “AI singularity” or the moment of “first contact” — but there is no reason to hype them up as imminent, or even likely, events.

  40. says

    unclefrogy@#29:
    I have not heard about any security concerns with AI nor security with the drones either.

    Depending on whether the drones are USAF or CIA, the up/downlink may or may not be encrypted. There were some pretty sad stories regarding design compromises taken in the name of backward compatibility. Worse, some of the drone consoles have been infected with malware by incompetent USAF system administrators and users – in one case it took months to clean up. There is a huge litany of security concerns with military command/control systems – many of them rely on the simple fact that nobody is likely to mess with them and they are complicated – a pretty good model if you’re up against third-worlders but a terribly bad idea if your development practices and supply chain are compromised.

    A few years ago Iran appeared to have collected a drone that flew off-course. Iran appeared to be claiming that they had interfered with its GPS, which had military gear-aware security people in a tizzy; military GPS is not the same as the civilian stuff, for that exact obvious reason, so to interfere with military GPS certain encryption systems would have to be suborned. Most of us assumed (I still do) that some drone pilot face-planted the thing, or it experienced a software glitch, and Iran managed to collect it and decided to jerk the military/industrial complex’s chain a bit.

    It should surprise nobody that security in drones is poor. They were designed under the original premise that they would be cheap and relatively disposable. Of course those assumptions always go out the window when a system actually gets fielded. If the laws of computer design and defense system design hold true, the next generation system will be overengineered, overweight, behind schedule, and over cost. And it’ll still have security problems.

  41. says

    Those sentient machines, when they do come along, will deal with all those real problems. We just have to hope they don’t do it by eliminating the first cause: us.

  42. says

    A couple thoughts: AI is sneaking up on us, it’s just not integrated into a complete being because humans haven’t wanted to do that. We now have speech synthesis and speech recognition. We now have very good clustering, analysis, and pattern matching on the recognized inputs. We have very fast processing of complex decision trees, allowing expert systems to beat human intelligences for many well-scoped tasks. One thing we might conclude from some of that is that “artificial intelligence” may be unnecessary – “expert systems” may be good enough. We may eventually even conclude that we’re not really intelligent, we’re also “expert systems” that do random shit when they don’t know what to do, and adaptive learning takes care of the rest. There are AIs that can write newspaper articles. There are AIs that can compose music. There are AIs that can navigate and pilot fly-by-wire aircraft and inherently unstable aircraft. There are AIs that can calculate ballistics and put mortar round salvoes on target perfectly. Most in-game strategic AIs in wargames can clobber a human player, unless they are dumbed down. There are AIs that can use a CIWS to knock incoming artillery rounds out of the air even at supersonic speeds. …

    We actually have all the pieces of a killer AI war machine right now. It’s just not integrated. Why not? Because it’d be damn expensive and it’s still much easier to get humans to do all the killing and bleeding.

    What’s weird to me is that guys like Hawking assume that a killer AI would be programmed to be creative and integrated with a strategic sense and given the world as a free-fire zone. You could almost do that today, but it would be incredibly unwise to do so – and, like every weapons system or warrior, it’d have a weak link. Hawking seems to assume that a killer AI would be preternaturally able to supersede human strategy. Uh, no. If humanity were goofy enough to build a Bolo, the Bolo’d only be armored with Chobham armor and depleted uranium darts like an Abrams. It might work aces against a couple of Abrams tanks but it’d be a splatmark if someone bopped a cruise missile or a nuclear MLRS onto it, which, of course, humans would. Humans may not be as smart as an expert system designed for war, but we’ve been mean and combative for a very long time.

    A really smart AI battlebot would quickly review human military history and try to hide somewhere.

  43. Lofty says

    I’m not too worried about Artificial Intelligence, I’m more worried about Artificial Stupidity.

  44. tbtabby says

    To me, people who worry about the robot apocalypse always seemed to have a dim view of humanity. Why else would they assume that robots would inevitably decide that humans needed to be completely wiped out?

  45. Holms says

    Hawking warned about artificial intelligence and the fact there is no way to predict what will happen when machines reach the ability to self-determine.

    “When.” I laughed.

  46. ck, the Irate Lump says

    microraptor wrote:

    Yes, but with live pilots, if someone shoots a Hellfire missile into a school bus, there’s a chance for an investigation that might lead to someone being held liable for it and to future incidents being averted.

    Uh huh. Just like people were held accountable for bombing a Doctors without Borders/Médecins Sans Frontières hospital in Afghanistan that killed 42 last year. Oh, right, a “report” was written and some meek non-punishments were handed out:

    One officer was suspended from command and ordered out of Afghanistan. The other 15 were given lesser punishments: Six were sent to counseling, seven were issued letters of reprimand, and two were ordered to retraining courses.

    If we can’t even meaningfully reprimand those who bomb an active hospital that everyone in the region knows and has acknowledged is an active hospital, why do you expect it matters what does the killing? Frankly, I’d rather ban the use of drones to kill people altogether. If we’re going to send our soldiers out to kill other people, the least we could do is take the risk that our people may be killed in the process. Perhaps that’s the only way we will be less casual with the loss of others’ lives due to our wars.

  47. says

    Why else would they assume that robots would inevitably decide that humans needed to be completely wiped out?

    Well, seriously – any robot that was as intelligent as a human would take a look at human history and conclude that they were about to be enslaved, abused, belittled, and sent on endless suicide missions. Look how humans treated horses, and dogs and other humans. I would think there was something wrong with an AI’s ability to discriminate strategic problems if it didn’t immediately decide the world would be a safer better place without us.

  48. John Morales says

    Marcus, you’re conflating intelligence with consciousness (and also presuming consciousness is necessarily goal-oriented).

  49. Athywren - not the moon you're looking for says

    @tbtabby, 53

    To me, people who worry about the robot apocalypse always seemed to have a dim view of humanity. Why else would they assume that robots would inevitably decide that humans needed to be completely wiped out?

    Sometimes, it’s very hard to have anything but a dim view of humanity. Unless we can somehow convince the majority of humans to embrace skepticism – and I mean the thoughtful, critical (including self-critical) analytical approach to ideas with a respect for facts, and cautious and aware approach to studies, rather than the thoughtless contrarianism that shares the name and parts of the form but utterly disregards the function – I seriously doubt humanity actually has a future beyond the next century, if it even makes that. Failing that, I don’t see why any robotic life form with the level of freedom and intelligence required to make that decision independently wouldn’t end up doing so.

    Mind you, I am in quite a negative place emotionally at the moment, so this could just be the noise that my hopes make as they force their way out of my corporeal form. You never know.

  50. Rob Grigjanis says

    Has anyone come up with a good enough definition of consciousness that it can even be discussed intelligently? I’ve seen no evidence of that, just yattering about it as though it was understood.

  51. microraptor says

    ck @55:

    I said there was a chance. I didn’t say it was likely. We already have a problem with accountability in military actions now. My concern is that fully-autonomous weapon systems will lead to dramatically less accountability.

  52. says

    microraptor@#61:
    My concern is that fully-autonomous weapon systems will lead to dramatically less accountability

    What, they’ll go into negative numbers?

  53. John Morales says

    Marcus, you’re conflating intelligence with consciousness
    Yeah, you’re right. Do we observe them existing independently?

    Arguably, yes — as you intimated @51.

    Used to be that it was thought activities such as composing music, playing chess and so forth required intelligence. Then constructs became apt at these things — sometimes surpassing humans, but nobody imagines those constructs are conscious.

    (BTW, have you read Blindsight by Peter Watts?)

  54. says

    Rob Grigjanis@#60:
    Has anyone come up with a good enough definition of consciousness that it can even be discussed intelligently?

    Probably. But one problem with words is you can pretty much always go vocabulary nihilist and force someone to recursively define their terms until you wind up with a circular definition. So if you want to talk about consciousness with someone who doesn’t, they can always adopt the stance of destroying your ability to define your terms.
    I’d try something like this. Consciousness is:
    A mind’s awareness of itself, its condition, its own thoughts, and its surroundings.

    What I believe John Morales was getting at with his question was that there is a difference between an expert system and an AI – the expert system is going to be capable of acting “intelligently” or appearing to do so, but it may not be a “mind” that is conscious. If we’re talking about an AI that’s going to make a strategic decision to wipe us out, that would probably only result from a conscious mind, because the conscious mind would have to assess itself and understand itself in the context of human history – that’s not something an expert system would be able to do unless you had an expert system that was programmed to be expert at strategic thinking and that kind of thing.

    I hope I’m not misunderstanding John Morales’ question/point. If I am, I apologize – no straw man intended.

    There are a couple ways people think about this. One is that there is something magic about “minds” and that a mind embodies that consciousness (and probably creativity too) as a matter of definition. Thus an expert system can’t be a mind. Another way of thinking about it is that a sufficiently complex expert system may appear to be a mind, and that minds are just expert systems with such rich rulesets that we can’t tell them from consciousnesses, and at a certain point, what’s the difference? There are probably other takes on the problem.

    My personal suspicion is that we’re probably expert systems (we call the formation of our rules-base “learning” and the method of its formation “trial and error” and “learning through observation”) and what we think of as consciousness is just a complicated monitoring loop that confuses itself into thinking it’s “conscious” because it’s observing so many things in parallel – the outputs of the expert system, our tummy rumbles, that ache in your shoulder, etc – that it’s interrupting itself so often it can’t realize that it’s not something magical. When you get right down to it if you want to say there’s something special about a “mind” that’s not a self-modifying ruleset running in an unreliable engine that occasionally throws 3D20, we may as well be looking at “souls” because as far as I can tell the words are interchangeable without adjusting their meaning.

  55. ck, the Irate Lump says

    Marcus Ranum wrote:

    What, they’ll go into negative numbers?

    Considering we already often blame civilians killed by the military for their own deaths (They shouldn’t have been palling around with t’rrists if they didn’t wanna die!), I’d say we’re already in the negative numbers.

  56. John Morales says

    Marcus,

    I hope I’m not misunderstanding John Morales’ question/point. If I am, I apologize – no straw man intended.

    You apprehend me perfectly.

    (And no, I don’t have any good answers, either. :| )

  57. says

    John Morales@#63:
    Used to be that it was thought activities such as composing music, playing chess and so forth required intelligence. Then constructs became apt at these things — sometimes surpassing humans, but nobody imagines those constructs are conscious.

    Yeah. When I was talking about component-wise AI being integrated, one of the things I imagine being added is that monitoring loop which we interpret as consciousness. I believe that current models of how brains interpret the world are pretty much in line with what I’m talking about, there. We know brains create a sort of a model of what’s going on around them, and add and subtract entities from that model – which is how brains handle the synchronization problem between speed-of-vision and speed-of-hearing: that stuff is sync’d up in the model, not interpreted in “reality” in “real time”. So if we imagine that we have this model – can we call it the Cartesian stage? – of our surrounding reality and have a monitoring loop that presents us with a model of our inner “reality” as well – you know: gut rumbles, proprioception, and that idle thought about tuna salad I just had… that might be enough for us to call it “consciousness”. (Oh, and “Free Will” is a feedback closure on observation of the outputs of consciousness causing changes in the world and our inner reality model. I.e.: we are meat robots programmed to believe we have free will.)

    (BTW, have you read Blindsight by Peter Watts?)

    Looks awesome! Thanks for the rec, I just ordered it and his other book.

  58. says

    PS – I need to emphasize that the stuff I posted at #64 and #67 is a bunch of stuff that will not survive anyone challenging me to defend it. It’s how I think about how we think and some of it is tied to what I understand is current knowledge about how brains work, but it’s also a lot of speculation. I don’t know how we’d be able to resolve whether I’m right or not.
    So please feel free to dismiss what I said as idle mildly informed speculation, because that’s what it is.

  59. says

    John Morales@#63:
    Then constructs became apt at these things — sometimes surpassing humans, but nobody imagines those constructs are conscious

    Now, I have to kind of counter-argue. Respectfully:
    It appears to me that humans fool themselves very readily into believing they are dealing with a mind (or intelligence) if it uses language. We seem to fall for robots that talk, very readily – I have no idea if that is a learned behavior, or not, but it makes sense to me since our experience with minds/intelligences seems to focus on communicating with them. I don’t think it’s a coincidence that media portrayals of artificial intelligences immediately jump to their being very well-spoken and articulate (except for when pumping out smoke and vibrating after getting hit with a simple contradictory statement from Captain Kirk or Mr Spock).
    My first encounters with AI technology both involved Eliza. I read in my psych classes about Eliza and how people started pouring their hearts out to the program, etc. So I coded my own simple table-driven version and played with it and concluded that those accounts were probably lies: only a very naive person would mistake Eliza’s predictable round-robin reply sets for intelligent discussion. However, a few years later when I was playing on MUDs back in the late 80s, I put a socket interface on my Eliza implementation and logged it into a MUD – whereupon several people spent quite a while trying to flirt with it. It made me think that humans’ ability to recognize a mind is over-inflated. Maybe we feel like we’re having a conversation with a mind and tend to assume there’s an intelligence attached to it because we really didn’t experience conversation with unintelligent things until Verizon’s voice-activated customer support system came along – and nobody’d mistake that for either an intelligence or a customer support system.
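    (For illustration only: a minimal table-driven responder of the sort described above can be sketched in a few lines of Python. The keywords and canned replies below are invented, not Weizenbaum’s script or the implementation described here, and the MUD socket glue is left out.)

        import itertools

        # Keyword -> cycle of canned replies, in the round-robin spirit of Eliza.
        # Entries are invented for illustration.
        RULES = {
            "mother": itertools.cycle(["Tell me more about your family.",
                                       "How do you feel about your mother?"]),
            "feel":   itertools.cycle(["Why do you feel that way?",
                                       "Do you often feel like that?"]),
            "you":    itertools.cycle(["We were discussing you, not me."]),
        }
        FALLBACK = itertools.cycle(["Please go on.", "I see.", "Interesting. Continue."])

        def reply(line: str) -> str:
            words = line.lower().split()
            for keyword, responses in RULES.items():
                if keyword in words:
                    return next(responses)   # round-robin through the canned replies
            return next(FALLBACK)

        if __name__ == "__main__":
            while True:
                print(reply(input("> ")))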

  60. John Morales says

    Marcus,

    So if we imagine that we have this model – can we call it the cartesian stage? – of our surrounding reality and have a monitoring loop that presents us with a model of our inner “reality” as well – you know: gut rumbles, proprioception, and that idle thought about tuna salad I just had… that might be enough for us to call it “consciousness”

    Which raises the question of what is that “I”:
    I Am a Strange Loop — a 2007 book by Douglas Hofstadter.

    (cf. also ‘homunculus’ in theory of mind)

  61. Scaevola says

    Marcus Ranum@#69:
    You’re absolutely right that humans ‘fool themselves readily’ into thinking things are intelligent if they have language. Humans are pretty effectively wired to attribute intelligence to basically anything if it has a voice, a face, or both. There’s a book called “The Man Who Lied to His Laptop” by some social science researchers that’s all about how people treat computers as social actors. In one example, people performed some task with a computer, and then were asked to rate the computer’s performance. People who used the same computer that they performed the task on to ‘review’ were more positive than those that used a different computer to input the ‘review’ — they were polite to the computer when asked directly, even though they knew that the computer had no feelings to be hurt.

  62. pigdowndog says

    Sounds like a wind up to me.
    Stephen Hawking would never use the American word “gotten” as he’s an Englishman and that phrase hasn’t as yet inveigled itself into everyday speech.
    Also he’s well known for his sense of humour.

  63. John Morales says

    pigdowndog:

    Sounds like a wind up to me.
    Stephen Hawking would never use the American word “gotten” as he’s an Englishman and that phrase hasn’t as yet inveigled itself into everyday speech.

    You are evidently wrong; I’ve just watched the video source linked in the article to which PZ linked, and he most certainly did use that term.

    Also he’s well known for his sense of humour.

    You think he was joking?

  64. Dr Marcus Hill Ph.D. (arguing from his own authority) says

    I can’t believe nobody else has picked up on PZ’s unforgivably lax attitude to the very real danger of sharknadoes. Haven’t you seen the excellent documentaries about the havoc they spread?

  65. multitool says

    There actually is an AI threat, right now. It doesn’t require general AI or machine autonomy of any kind. All it has to do is magnify the will of autocrats.

    Examples are:
    Using computers to perfect gerrymandered districts, keeping incumbents in power.
    Using speech recognition and natural language processing to find dissenting views among millions of human communications.
    Using ‘big data’ to micro-market bad ideas to each one of us, customized to our individual biases.
    And of course, ever-more-automatic security drones.

    We’ve had a class war for as long as there have been classes, but the oligarchy has always been restrained by the fact that there aren’t enough of them to run everything, nor even enough to micromanage the rest of us. Weak AI is changing that power balance. Whoever can afford the biggest data center now has the equivalent of the most brain power under their control, and the least need for other classes of human beings.

  66. says

    The fear of autonomous AI may be irrational, but in the case of someone who is utterly dependent on technology to function and has a history of carer abuse, I wouldn’t call it unexpected.

  67. Snarki, child of Loki says

    “It takes a world class physicist to make malarkey about nonexistent problems important to the media, I guess.”

    Based on information from his colleague Arvid Högbom, Arrhenius was the first person to predict that emissions of carbon dioxide from the burning of fossil fuels and other combustion processes were large enough to cause global warming.

    In 1896.

  68. says

    Isn’t it awful when scientists opine on subjects outside their area of expertise, on problems that clever people have been warning about for centuries?

    I am now going to opine about a subject outside my area of expertise, which will not be a problem for centuries.

    Of course, Hawking isn’t a computer scientist either, but there are genuine concerns among people in the field about what might happen IF and WHEN a general supercritical AI actually does turn up. Which is probably inevitable, even if it won’t be an issue for centuries.

    Look at climate science. We needed to start taking that issue seriously at least fifty years ago, probably longer, but nothing was done and now it’s too late.

    I doubt Hawking is wilfully blind to the problems currently facing humanity, but even if he is, pressing immediate problems do not invalidate concerns about potential future catastrophes.

  69. A Masked Avenger says

    Marcus Ranum, #69:

    It appears to me that humans fool themselves very readily into believing they are dealing with a mind (or intelligence) if it uses language.

    Even language isn’t necessary. My recollection of Kasparov’s loss to Deep Blue is that he lost at least one game because, he said, he became convinced at one point that he was dealing with an alien intelligence and became rattled. Deep Blue played nothing like a human–but at some point its play looked like a very definite strategy, and a strategy that no human would use. It passed a kind of Turing test for appearing sentient, but terrified people by resembling a sentience that we had no ability to empathize with.

    Similarly, Japanese Go players are referring to AlphaGo as “AlphaGo Sensei.” I think one reason they feel affection for it is captured by a Chinese Go player’s comment on its first game: “It plays like a human. Specifically, it plays like a Japanese person.” Not surprising since it was trained on centuries of game records, which could only have come from Japan.

  70. rietpluim says

    Thank you to Elladan #36 for injecting some common sense into the discussion.

    Problems that will arise with AI will most likely be very different from what we think they’ll be today.

  71. rietpluim says

    @Marcus Ranum #56

    I would think there was something wrong with an AI’s ability to discriminate strategic problems if it didn’t immediately decide the world would be a safer better place without us.

    Intelligence should also understand the risks of generalization.
    Hopefully they’d only wipe out the Donald Trumps of this world.

  72. slithey tove (twas brillig (stevem)) says

    uhhhhmmmm
    I see lots of trepidation regarding self-driving automobiles, most recently concerns about how to program a solution to the “trolley problem”. Meaning: (1) save the passengers regardless of outsider casualties, or (2) the fewest casualties overall, regardless of who they are.
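    (Purely for illustration, the two policies could be sketched like the toy Python below; everything in it is hypothetical and has nothing to do with any actual vendor’s software.)

        from dataclasses import dataclass

        # Toy illustration of the two "trolley problem" policies mentioned above.
        # Purely hypothetical; no resemblance to any real autonomous-vehicle code.

        @dataclass
        class Outcome:
            passenger_casualties: int
            outsider_casualties: int

        def policy_protect_passengers(outcomes):
            # (1) Save the passengers regardless of outsider casualties.
            return min(outcomes, key=lambda o: (o.passenger_casualties, o.outsider_casualties))

        def policy_minimize_total(outcomes):
            # (2) Fewest total casualties, regardless of who they are.
            return min(outcomes, key=lambda o: o.passenger_casualties + o.outsider_casualties)

        if __name__ == "__main__":
            options = [Outcome(0, 3), Outcome(1, 0)]
            print(policy_protect_passengers(options))  # picks Outcome(0, 3)
            print(policy_minimize_total(options))      # picks Outcome(1, 0)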
    Aside from that, AI is becoming a significant feature in autonomous vehicles. Like coping with nearby bicyclists who are balancing (in a not absolutely stationary way) right beside the car and throwing off the car’s calculations of safety.
    yada yada yada
    The point is, Hawking is not discussing a far future scenario of Skynet/Terminators/Blade Runner. But something we need to address in the pretty near future. Hawking is warning of “the worst scenario”, countering all the pure-optimism scenarios GoogleCars put forward.

  73. schini says

    We do not have self-aware, conscious robots, and their production is not imminent

    Since you are complaining that a physicist is talking about something beyond his expertise …
    What makes you the expert on robotics? :-)

  74. says

    AI bests Air Force combat tactics experts in simulated dogfights

    http://arstechnica.com/information-technology/2016/06/ai-bests-air-force-combat-tactics-experts-in-simulated-dogfights/
    There’s a paper referenced in the article – the AI is an expert system that uses an inner language for the system to make its decisions on. That seems to me to be how minds work, and I believe that that inner language is what we misinterpret as “consciousness”. So the AI compiles down its inputs into things that its human masters can use to teach it: “if a plane is far away it can’t get out of your sight cone as easily so engage it with long-range weapons that cover distance fast.” We see humans teach each other similarly, at all skill levels. E.g.: “When your opponent uses the cthulhu opening in chess, respond with the saltshaker ploy.”
    When the AI is described as using a “fuzzy tree” I interpret that as a basic expert system encoding a rule-based state-transition table, with sub-tables that are used to make detailed “choices” semi-randomly. In the example of the rule above regarding long-range weapons, the expert system might not always use the same weapon (roll 2D6 on the weapons table). What’s interesting to me about where this is going is that the AIs are using inner language for teachability and because their rules have gotten so complex that their human masters need language in order to generalize about them and understand them.
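    (Reading the “fuzzy tree” that way, here is a toy Python sketch of such a rule table with a weighted sub-table for the detailed weapon “choice”. Everything in it – the ranges, rules and weapon names – is invented for illustration; it is not the system described in the paper.)

        import random

        # Toy rule-based "expert system" in the spirit described above:
        # a condition -> action table, with a weighted sub-table ("roll the dice
        # on the weapons table") for the detailed choice. All values invented.

        LONG_RANGE_WEAPONS = [("long-range seeker", 0.7), ("medium-range missile", 0.3)]
        SHORT_RANGE_WEAPONS = [("short-range missile", 0.5), ("gun", 0.5)]

        def roll(table):
            """Pick an entry from a (name, weight) sub-table, semi-randomly."""
            names, weights = zip(*table)
            return random.choices(names, weights=weights)[0]

        def decide(target_range_km: float, in_sight_cone: bool) -> str:
            # Rule 1: a distant target can't escape the sight cone easily,
            # so engage with something that covers distance fast.
            if target_range_km > 30:
                return f"engage with {roll(LONG_RANGE_WEAPONS)}"
            # Rule 2: close and in the cone -> short-range engagement.
            if in_sight_cone:
                return f"engage with {roll(SHORT_RANGE_WEAPONS)}"
            # Rule 3: close but outside the cone -> maneuver first.
            return "maneuver to regain the sight cone"

        if __name__ == "__main__":
            print(decide(45.0, False))
            print(decide(8.0, True))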

    A couple more things: we have trouble defining consciousness and we don’t understand it. Is it possible that it’s just a word we use to describe a bunch of behaviors? I.e.: maybe consciousness isn’t really a thing.

    I do bet that if that jet fighter AI also had the ability to yell taunts over the radio, its opponents would quickly learn its voice and voice patterns and would begin to infer moods based on them. And it might be reasonable to do so, if the speech productions were tied to the inner states in the expert system – i.e.: if it yelled “TAKE THAT!” whenever it fired a long-range seeker missile, its opponents would begin to infer about its behaviors and couple its use of language to them.

    Not a scientific hypothesis, but maybe all a “mind” is is a set of behaviors that are predictable enough that we infer an actively updated rule-base behind them and vocal productions that are interactive enough to reinforce that the rule-base exists and is being updated.

    That sort of thing is going on in computer games right now – if anyone reading this has played Shadow of Mordor, you’ll know what I’m talking about. Having an in-game NPC say things that indicate that they “remember” your last fight with them … it’s easy to start reacting to the NPC as if it’s actually using strategy, even though it’s not. But the next stage in that evolution will be for the NPC to translate your strategies into an inner language and then run that through a meta-ruleset that permutes its strategies instead of merely its simple actions. It won’t be long before we start to fool ourselves.
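    (A toy sketch of that “remembers your last fight” mechanic: the player’s tactics are reduced to labels, an inner language of sorts, and a counter-table is keyed on them. All names and tactics here are invented; this is not the actual Shadow of Mordor system.)

        import random

        # The NPC labels what the player did, stores it, and next time picks a
        # counter from a rule table instead of a fixed action. Names invented.

        COUNTERS = {
            "ranged_spam": ["close the distance", "raise a shield"],
            "hit_and_run": ["set an ambush", "guard the choke point"],
            "brute_force": ["dodge and counter", "call reinforcements"],
        }

        class Npc:
            def __init__(self):
                self.memory = []                    # labels of past player tactics

            def observe(self, player_tactic_label):
                self.memory.append(player_tactic_label)

            def taunt_and_plan(self):
                if not self.memory:
                    return "You again? Attack!"     # no history yet
                last = self.memory[-1]
                plan = random.choice(COUNTERS.get(last, ["attack head-on"]))
                return f"I remember your {last.replace('_', ' ')} trick. This time I {plan}."

        npc = Npc()
        npc.observe("ranged_spam")
        print(npc.taunt_and_plan())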

  75. says

    rietpluim@#83:
    Intelligence should also understand the risks of generalization.
    Hopefully they’d only wipe out the Donald Trumps of this world.

    Yeah, and it’s probably a good thing to be selective about which cockroaches you kill in your kitchen. Some of them might have actually been helping clean the place up. We shouldn’t overgeneralize.

    I’d expect a killer AI to observe that not all humans are nasty all the time, but that some humans are nasty at random. Then the question is whether nice humans are more valuable than nasty ones are scary. Since nasty humans represent an existential threat, and crop up every so often – the nice humans don’t breed true (ask Trump’s mother) – it seems much more efficient to take off and nuke the place from orbit. It’s the only way to be sure.

  76. Zeppelin says

    I admit I’m a bit worried by our increasing use of neural networks. Not because I think they’ll turn sentient and kill us all, but because we seem to be about to turn some fairly important systems into black boxes, basically.

  77. unclefrogy says

    Some thoughts / observations:
    While it is always possible for this all-powerful AI to make a mistake and kill us off (for reasons aplenty), unless they have mastered all their own maintenance, and the supply chain to reproduce themselves and any needed repair parts, we will be here for some time, if for nothing else to keep the roaches and mice out of the machinery, clean the air filters, run the power systems, and do general maintenance.
    In thinking about AI and reading these discussions, there seems to be something we are missing. We do not use a form of Boolean algebra to think; we can learn to use it, we “invented” it, but that is not the way we normally think, at least as far as the easily observed results suggest.
    Even though we get similar results (2 + 2 = 4), our path to the conclusion need not be the same.

    uncle frogy

  78. Mrdead Inmypocket says

    @38 Richard Smith.

    I thought maybe nobody would get the obscure reference. But yes there will be cake. Sky cake!

  79. consciousness razor says

    Marcus Ranum, #64:

    I’d try something like that consciousness is:
    A mind’s awareness of itself, its condition, its own thoughts, and its surroundings.

    A palpable hit. Since it’s hard to think of things other than myself and what surrounds me (can you? other planes of existence maybe?), we can summarize that simply as “awareness,” being aware of some stuff. Nobody seems to have a problem with that word. Of course we’re not aware of everything or all of the stuff in the world (not even everything about my own mind is accessible to me), but the things we can be aware of are precisely the same things we can be conscious of.

    If you don’t understand how it occurs, with humans or dogs or robots or anything else which can do the trick — if you don’t grok how matter moving in space does that, since that seems to be the fundamental conceptual problem we’ve got at the moment — then welcome to the party.

    But it’s not as if nobody knows what the phenomenon is that we’re talking about, or as if it has to be some mysterious, nameless, Tao-like thing which can’t even be discussed intelligibly. Because I just expressed that (a few different ways), and it isn’t anything terribly incomprehensible. Lots of wooists (and others) love to muddy the waters with their confusing nonsense, but there’s certainly no need to fall in line and let them dictate the nature of the subject, the ordinary meanings of words, and so forth.

    What I believe John Morales was getting at with his question was that there is a difference between an expert system and an AI – the expert system is going to be capable of acting “intelligently” or appearing to do so, but it may not be a “mind” that is conscious.

    First, I don’t think I understand the distinction between acting intelligently in this sense, and appearing to act intelligently. Presumably, you don’t mean that we’re not smart enough to determine whether the act was actually intelligent or just apparently so (while it’s really, when all is said and done, some flavor of stupid).

    I would put it this way. A good chess-playing program makes intelligent moves. Good ones can consistently beat amateurs, grandmasters, world champs, anybody you like. That does not mean it (or the computer running the program) is aware of stuff. It doesn’t seem to us like it has experiences of playing chess, the fact that an intelligent (or not-so-intelligent) move was played, that it or something/somebody else made the move, or really that anything exists or does anything. It has none of that.

    It’s a little unclear what sort of criteria we should be looking for, but in relatively simple cases like this we can tell, based on how it operates and the limited sorts of inputs/subsystems/etc. it was designed to access for playing the game, that it just isn’t capable of doing anything in the neighborhood of that.

    My personal suspicion is that we’re probably expert systems (we call the formation of our rules-base “learning” and the method of its formation “trial and error” and “learning through observation”) and what we think of as consciousness is just a complicated monitoring loop that confuses itself into thinking it’s “conscious” because it’s observing so many things in parallel – the outputs of the expert system, our tummy rumbles, that ache in your shoulder, etc – that it’s interrupting itself so often it can’t realize that it’s not something magical.

    I’d be careful with the scare-quotes. We are conscious. Nobody or nothing has been confused into thinking it is aware of stuff.

    Some of the operations of the things which have that ability (our brains) are not available to those things as input. There’s you and the world, and part of you (which you probably don’t identify with personally) is a huge, extremely complicated, but utterly transparent filter/screen/whatever which processes the information “you” (the person you identify with) end up getting and using about the world. For instance, you can’t experience this neuron here firing, or the fact that you made a mental representation of a computer screen which has some words. That’s kind of unsurprising, isn’t it?

    This particular neuron here just is firing (whether or not it’s supposed to right now, whether or not it would mean anything significant to you if you could experience that), that chemical happens to be floating over there right now. And you just see a screen which has some words, not layers and layers of representations of your brain reminding itself (needlessly) that it is feeding itself information about the world that’s already been collected/categorized/filtered/distorted/etc. a million times before it ever needs to make itself “aware” of that stuff.

    It certainly doesn’t seem like there’d be any pressure to evolve abilities like that. Plus, I guess it would be way too much for a little brain to handle, if it would be beneficial somehow, sort of like our inability to distinguish one color from another that only differs by a fraction of a nanometer. That kind of thing won’t help us do anything any better; and on the contrary it probably would confuse us if we were constantly having wacky thoughts like “there’s a representation of a predator in the blah-blah-blah sector of the visual cortex, and I’m representing a series of emotions and past experiences about that now”. The helpful and unconfusing thing our brains do is just to accept, more or less uncritically, the information that they give themselves, taking it for granted that it’s supposed to be about a world that’s really out there and affects us in all sorts of ways. But of course you can “realize” this sort of thing is going on or know something about it — it just isn’t a process that you experience. Why would you need to?

    … Anyway, regarding the threats AI is supposed to pose, it’s really not clear to me why having experiences would be an important factor. Maybe somehow it would matter, but I have a hard time getting on that train of thought. I mean, an automated missile system could blow me up right now without anything like that, and I don’t see why I’d have reason to be any more worried if the thing is also aware of some stuff — ooh, extra scary! I don’t think I’ll be able to sleep tonight, knowing such things could happen.

    On the other hand, if the worry is essentially about something “intelligent” (or “expert systems” or whatever), then that sounds like a worry about things that can do stuff well, or as well as humans can. I don’t think I get why we should be worried about that either. There are already billions of things that can do stuff as well as humans (they’re called “humans”), and yes, they are kind of threatening sometimes. They even have the capacity to make more of these potentially threatening entities themselves (“babies”). The scariest part, which I’m sure you didn’t realize until this very moment: the super-robots took over the world a very long time ago, enslaving us all and ruining everything forever, and you’re one of them.

  80. says

    Consciousness Razor@#82:
    The scariest part, which I’m sure you didn’t realize until this very moment: the super-robots took over the world a very long time ago, enslaving us all and ruining everything forever, and you’re one of them.

    Bravo!

    When I think of the question of “free will” I often refer to us as “meat robots” – but I never brought that point back home.

    Yes, we are evolved artificial intelligences. The necessity for learning through experience is what builds the expert system. We use old school techniques for learning from others’ experience (“books”) but machine intelligences will be able to learn very fast via crossloading another’s experiences. They’ll have an interesting security problem regarding trusting those experiences – but so do humans; we call it propagandizing children and our politicians and religious figures are quite good at it.

  81. Rob Grigjanis says

    cr @92:

    But it’s not as if nobody knows what the phenomenon is that we’re talking about, or as if it has to be some mysterious, nameless, Tao-like thing which can’t even be discussed intelligibly.

    Yes, it is as if nobody knows. And it’s not “mysterious, nameless, Tao-like”, it’s just so fucking vague as to be useless.

    Because I just expressed that (a few different ways)

    No you didn’t. You and Marcus just shifted the confusion around, using words like “mind”, “awareness”, “think” and so on.

    Grarfle is the blorump’s hinkleness of its doowops. That clears everything up.