How to Be Skeptical of a Technological Singularity

Chris Hallquist has hosted a guest post on his blog by Luke Muehlhauser, whom some of you might remember as the brilliant and balanced author of Common Sense Atheism and Worldview Naturalism, and who is now executive director of MIRI. As Hallquist describes the post, “Luke does not intend to persuade skeptics that they should believe everything he does about the technological singularity. Rather, he aims to lay out the issues clearly so that skeptics can apply the tools of skepticism to a variety of claims associated with ‘the singularity’ and come to their own conclusions.”

The description is apt. Luke’s article is a bookmarkable cornucopia of references and links, with a brief discussion of each, for anyone who wants to know what all this “singularity” business is about, or who knows what it used to be about but wants to know what it has evolved into. Basically, if you were to read only one thing on the subject, it should be this. Luke’s article is “What Should Skeptics Believe about the Singularity?” Go give it a look. It’s fascinating, not only on its intended subject, but as a model example of how to approach speculative claims like this generally.


  1. trucreep says

    I will sadly always think of the botched Mass Effect ending when seeing people talk about the technological singularity :[

  2. Alexander says

    Just make sure to read the comments; some of us who work (or used to work, for some 15 years) in strong AI know quite a bit about what goes on under the hood, and have a few things to say about the purported feasibility on which this AI singularity is based. Remember that passing a Turing Test is so contextually constrained that it mostly isn’t impressive at all. We can pull off some impressive isolated stuff, for sure, but only within very tightly controlled constraints, with tons and tons of special tweaking … by humans. Getting Siri to pull up a map might seem impressive, but try getting her to explain why driving off a cliff might be a bad idea …

    I’m kinda sorry to harp on these AI issues; the reason I got into AI was that I wanted to explore that space toward serious, actual AI. The stuff we’ve got now does nothing more than feed our imagination with dreams and hope by passing simple Turing Tests, and as important and fun as that can be, it does not mean serious AI will happen. Claims of feasibility here are pure speculation about applicability. Where are our flying cars or jetpacks, technologies that are far more feasible to pull off?

    To put this in context: machines need to be programmed to be fuzzy and uncertain, while we have that built in as the evolved beings we are. Humans and computers approach intelligence from opposite directions, and whatever the result of our striving toward machine AI might be, it will never be what we think it might be, good or bad. It will be very, very different; our epistemologies will clash no matter what, and we have no real hope of understanding a) how to get there, or b) what it is if it happens, and hence c) we won’t know if it’s happened. It might, in fact, be impossible rather than feasible.
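
    The “tons and tons of special tweaking … by humans” point can be illustrated with a toy, ELIZA-style responder. This is a hypothetical sketch (the patterns and replies are invented for illustration, not taken from any real assistant): every “impressive” reply is a hand-written rule, and the moment a query falls outside the hand-tuned domain, the system has nothing.

    ```python
    import re

    # Each rule is a hand-written (pattern, reply-template) pair.
    # This is exactly the "special tweaking by humans" a commenter might
    # mean: the bot appears capable only where a person anticipated the
    # query. All patterns here are invented for illustration.
    RULES = [
        (re.compile(r"\bmap to (?P<place>.+)", re.I),
         "Here are directions to {place}."),
        (re.compile(r"\bwhy is (?P<thing>.+?) a bad idea\b", re.I),
         "I'm sorry, I can't reason about {thing}."),
    ]

    def respond(utterance: str) -> str:
        """Return the first hand-tuned reply whose pattern matches."""
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(**match.groupdict())
        # Outside its narrow, pre-scripted domain, the bot has nothing.
        return "I don't understand."
    ```

    Asked “Show me a map to Boston”, this returns a plausible-sounding answer; asked anything its authors did not anticipate, it fails flat — which is why narrow Turing-style successes say little about general intelligence.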

    • says

      Unless the first AI is designed to mimic human minds (which might be easier to pull off than you know) or is based on a map of a human mind (which will depend on advances in brain-scan resolution). For that perspective (and for why our concerns should be focused on something more crucial than just clashing epistemologies), see my blog post Ten Years to the Robot Apocalypse (the title is a joke, but the content is not).

    • David Inman says

      Always glad to see fellow AI skeptics. :) Programmer here, working on a degree in computational linguistics. Looking under the hood has led me to be very skeptical of the idea of a singularity. I do think AI is still a feasible and even probable future, but it will probably be a slower, more incremental improvement, not a sudden flash of an AI achieving godlike knowledge. It is possible to imagine something like a singularity, but it doesn’t look to me like a feasible or realistic possibility.