John Oliver on the state of AI


He uses the current buzz around the use of AI in ChatGPT and Google and Bing search engines to look more broadly at the current state of AI and where it might be headed.

I had a list of four things that always seemed to be ten years away: AI, fully self-driving cars, sustainable fusion energy, and quantum computers. It is not that there have been no advances in these areas. Each field has advanced considerably, but the delivery date for the fulfillment of all they promised keeps moving back as new difficulties are encountered.

Recently I have been wondering whether AI has advanced far enough to be removed from the list. This judgment depends, of course, on what criteria one uses. The highest criterion for AI, that it achieves sentience (similar to HAL in 2001: A Space Odyssey), has not been reached yet, though we are getting close to the point where we might not be sure whether it has been reached (by passing some version of the Turing test) or not.

I think I will keep it on the list for now.

Comments

  1. JM says

    I’m not sure how much of a real improvement things like ChatGPT are. To a large extent they are designed to mimic intelligence without being intelligent. A lot of what they do is search the internet and craft sensible-seeming responses around what they find. All of them would fail a Turing test done by a reasonably clever person. Simple things like asking them the same question multiple times, feeding them obviously incorrect information, or asking them questions about opinions can easily produce nonsense responses. Asking them technical questions will often produce incorrect responses rather than saying they don’t know.

  2. xohjoh2n says

    @1:

    All of them would fail a Turing test done by a reasonably clever person.

    But that’s just the No True Scotsman’s Turing test -- whenever a machine passes, you can always say “but a *cleverer* person would have been able to tell the difference.”

    produce nonsense responses […] produce incorrect responses rather than saying they don’t know

    And of course a human is quite capable of those too -- some excel in them!

    (Which just reinforces the point that the Turing test is a really bad test -- it’s not telling you anything about what you really want to know, and helps obscure the fact that it’s doing that.)

  3. Deepak Shetty says

    Recently I have been wondering whether AI has advanced far enough to be removed from the list.

    It depends on how you define it. A general-purpose AI? Anything remotely resembling what we see in sci-fi? Then no.

    @JM @1

    All of them would fail a Turing test done by a reasonably clever person.

    So what ? American conservatives would fail most intelligence tests and be proud of it (Freedom!)

  4. Holms says

    Regarding “the current state of AI and where it might be headed”: we’re in for a royal shitshow due to the increasing potential for ‘deep fake’ faces and voices to convincingly impersonate people. The ability to impersonate is already able to fool people not familiar with the telltales of deepfakes, and it is only going to get better from here.

  5. Marcus Ranum says

    we’re in for a royal shitshow due to the increasing potential for ‘deep fake’ faces and voices to convincingly impersonate people.

    Perhaps. Or people might adjust their assessment of the truth-value of certain forms of media. Forgeries have been a longstanding problem. The first photographs were manipulated (e.g., Nadar). Autotuned vocals are literally fake, yet everyone seems to accept dancing around while lip-syncing to a prerecorded track as a form of performance art. Deep fakes? People will learn they’re fake, and they will learn quickly.

    Amusingly, I know someone who was recently shocked to learn that most “influencers” use filters and photoshop. I asked how that fits on a continuum of knowing the same people have also had extensive plastic surgery.

    My expectation is that we’re going to briefly be deluged with “influencers” who don’t exist. So fucking what? Kim Kardashian barely exists either, as far as I am concerned. It’s all multiple layers of fakery.

    Here is a thought: we see far fewer pictures of bigfoot or flying hubcaps, because the population has come to realize that such fakes are easy. People don’t fall for them so much anymore. They’re comfortable seeing people murdered, blown up, or flying on broomsticks, and now they grow up knowing that’s not real. Deepfakes? We live in one.

  6. Pickled Tink says

    These programs are NOT AI. They are procedural engines specifically designed to generate output that parses as close to human in order to appear as such, backed up by deliberately misleading, hyperbolic, and breathless venture-capitalist PR, and by equally breathless and incredibly credulous reporting from news media who no longer put any effort into investigating a story beyond regurgitating a press release and getting a few quotes.

    I suggest you download Dwarf Fortress and run world generation in it. That is an incredibly sophisticated procedural engine that generates an entire world, from its geology and climate all the way up to its history, with politics, wars, and even trade that, during generation, is tracked down to individual items in a meaningful way (check the developer logs on the site for the 15th and 22nd of May 2019 for a detailed breakdown of tracking a bug). It is a remarkably capable and believable engine, but it is not AI.

    The people making these procedural engines are not going about it the right way to create AI. It’s a dead end simply because their focus is on generating believable output, not on generating something that can evaluate information and, for want of a better term, consider a query before generating actual output. This is because the Venture Capitalist mindset demands marketable results for a return on investment.

    I guess the best analogy would be “We want to have Bees. Bees make Hives, so we shall 3D print a Hive so well that many people would believe Bees made it.” In this case, they are making dumb procedural engines that (badly) impersonate what you would get from an AI. It’s a dead end, and while useful for getting rid of human workers, it doesn’t have any of the factors needed for an actual intelligence, and they do not actually appear to be working on that aspect, just on becoming ever more capable of sophisticated fakery.

  7. sonofrojblake says

    they are designed to mimic intelligence without being intelligent

    Like… oh, the list of things I could put here for satirical effect. Politicians. Stephen Fry. Jordan Peterson. Ach, write your own punchline.

    All of them would fail a Turing test done by a reasonably clever person

    Interesting. The Turing test does not specify that the human tester needs to be reasonably clever. If a computer can convince a dumb person it’s conscious… doesn’t it pass?

    @Pickled Tink, 7:

    These programs are NOT AI.

    This comes down to definitions, and the definition of AI seems to me to be “whatever we can’t do yet”. Computers can’t produce a sonnet… oh, hang on, they can, but it’s not AI. They can’t play chess… oh, hang on, they can, but it’s not AI. They can’t beat the world champion, though… oh, hang on, they can, but it’s not AI. They can’t recognise voices and understand spoken commands… oh, hang on, they can, but it’s not AI. They can’t go on a human quiz show and win… oh, hang on, they can, but it’s not AI. They can’t drive a car… oh, hang on, they can’t… but when they can, that won’t be AI either.

    It reminds me of the God-of-the-gaps thing. Slowly we’re whittling away the things that only humans can do better, or at all, and every time the machines start doing something better, or at all, we wave our hands and say “but that’s not AI”.

    An impersonation that’s convincing enough in its domain is a functional replacement for whatever it’s impersonating. That it’s an impersonation is irrelevant.

  8. No Respect says

    Marcus Ranum:

    Perhaps. Or people might adjust their assessment of the truth-value of certain forms of media. […] Deep fakes? People will learn they’re fake and they will learn quickly.

    [Looks at credulous MAGAs and laughs] As if that will matter to those who will believe whatever confirms their worldview.

    Deepfakes? We live in one.

    Oh, nihilists are so adorable. And pathetic. And in need of euthanasia. Wait, does that mean that I’m a nihilist then? No, I lack the “adorable” part. Phew.

    Failedabortionofrojblakeswife:

    An impersonation that’s convincing enough in its domain is a functional replacement for whatever it’s impersonating. That it’s an impersonation is irrelevant.

    Wrong and False. Roj Blake must be the most disappointed father in the world.

  9. Marcus Ranum says

    Oh, nihilists are so adorable.

    Hyperbole fail. Which is funny since you appear to be dealing in hyperbole.

    I’m not saying we live in a simulation; it’s that we are constantly being bombarded with manipulated and outright fake information. It’s called “marketing.”

    Now you can go back to being “witty.”

  10. sonofrojblake says

    Wrong and False.

    Well, there’s certainly nothing I can say that can stand as a rebuttal against that kind of well-argued, comprehensively evidenced position.

  11. Holms says

    #6 Marcus
    Fakery that hides skin blemishes is not even close to what I was talking about.

  12. Deepak Shetty says

    @Pickled Tink

    These programs are NOT AI. They are procedural engines specifically designed to generate output

    You are perhaps limiting your description to ChatGPT. But in general, for some of the problems AI currently solves, the code you write is not procedural -- the model is not procedural (even if the way you trained that model, or the code underlying it, is procedural; but then you may as well say everything is 1s and 0s). An easier example to understand is photo recognition. Forget about the coding language -- just try to sketch out an algorithm that, given a digital representation of something, say the picture of a cat, classifies whether a cat is present in that representation. What does a procedural definition of “cat in a photo” even look like (this example is straight from an introductory AI course)? You really can’t specify a procedural algorithm. Now look at convolutional neural networks, which can learn such a classifier remarkably well if you have sufficient amounts of supervised data -- they don’t resemble anything like a procedural engine. The debate, though, is whether such things are “intelligent” as we understand that term -- and the answer is obviously not. But then, ask the hard determinists here: if all of us just follow our programming, what really is intelligence, and why would that not apply to such algorithms?
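
    The distinction being drawn here, rules a programmer writes down versus behavior learned from labeled examples, can be sketched in miniature. The toy Python below is illustrative only: a real image classifier would be a convolutional network trained on large labeled datasets, not a single perceptron, and the “cat vs. not-cat” framing is just a stand-in for two clusters of 2-D points. The function names are hypothetical. The point is that the decision rule is never written procedurally; it emerges from the data.

```python
import random

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """Learn weights w and bias b so that sign(w.x + b) predicts the label.

    The decision rule is never written down by the programmer; it emerges
    from the labeled examples, which is the sense in which a trained model
    is not a "procedural engine".
    """
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Stand-in for "cat" vs. "not-cat": class 1 clusters near (1, 1), class 0 near (0, 0).
random.seed(0)
samples = [(random.gauss(1, 0.1), random.gauss(1, 0.1)) for _ in range(20)] + \
          [(random.gauss(0, 0.1), random.gauss(0, 0.1)) for _ in range(20)]
labels = [1] * 20 + [0] * 20
w, b = train_perceptron(samples, labels)
accuracy = sum(predict(w, b, x) == y for x, y in zip(samples, labels)) / len(labels)
```

    The perceptron is not intelligent, but its classification behavior comes from data rather than from a procedure anyone specified, which is the distinction at issue.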

  13. JM says

    @8 sonofrojblake and @3 xohjoh2n:

    Interesting. The Turing test does not specify that the human tester needs to be reasonably clever. If a computer can convince a dumb person it’s conscious… doesn’t it pass?

    The issue with applying the Turing test to these chatbots is that, to a certain extent, they are designed to cheat at the Turing test. They are not general AIs being put through one specific test. They are designed to fake human conversation, and they do it by drawing upon the huge amount of human-generated text that already exists on the internet.

    @8 sonofrojblake:

    This comes down to definitions, and the definition of AI seems to me to be “whatever we can’t do yet”.

    There is an element of that to the problem. At the same time, part of why the target keeps getting moved is that people find more advanced ways to solve problems without AI. For a long time chess was the target of AI, but eventually increasing computational speed reached the point where a straightforward algorithmic solution can beat all but the best humans, and no human can beat dedicated hardware.
    The grail of AI is a general intelligence: one that can be fed the basic rules of chess and some example games and work out how to play well on its own. The sort of chat AIs being looked at right now are not hard-programmed math processors, but neither are they general AIs.
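
    The “straightforward algorithmic solution” referred to here is, at its core, brute-force game-tree search. A minimal Python sketch of minimax for a toy game makes the point; single-pile Nim is chosen here (my stand-in, not from the comment) because its tree is small enough to search exhaustively. Real chess engines add alpha-beta pruning, handcrafted evaluation functions, and enormous compute on top of exactly this skeleton:

```python
def minimax(stones, maximizing):
    """Exhaustively search the game tree for single-pile Nim.

    Players alternately remove 1-3 stones; taking the last stone wins.
    Returns +1 if the maximizing player can force a win from this
    position, -1 otherwise. No notion of "intelligence" is involved:
    the program simply enumerates every possible continuation.
    """
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    values = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(values) if maximizing else min(values)

def best_move(stones):
    """Pick the move that minimax proves best for the player to move."""
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: minimax(stones - take, False))
```

    For example, from 5 stones the search proves that taking 1 (leaving the opponent a losing position of 4) forces a win. The “algorithmic” character is plain: the program plays perfectly without anything resembling understanding.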

  14. grahamjones says

    @16 JM
    “The grail of AI is a general intelligence: one that can be fed the basic rules of chess and some example games and work out how to play well on its own.”

    AlphaZero achieved this (though it doesn’t need any example games -- it makes its own).
