New on OnlySky: The end of the road for AI


I have a new column today on OnlySky, a follow-up to my last one. It’s about a massive, unappreciated obstacle that looms ahead for AI technology, and what, if anything, can be done about it.

AI has been wildly successful, in both the good and the bad senses. Chatbots and artbots are flooding the internet with synthetically generated text and images, often drowning out the contributions of human beings. While their output is frequently flawed, the creators of these bots insist that they’re going to keep improving, becoming more creative and less error-prone, and that it’s only a matter of time before they leave human beings in the dust.

However, this may not be true. AI may be about to hit a hard stop, and its very proliferation may be the cause of its downfall.

Read the excerpt below, then click through to see the full piece. This column is free to read, but paid members of OnlySky get some extra perks, like a subscriber-only newsletter:

Of course, these bots aren’t flawless. For all their talents, they sometimes generate garbled text, or false factual claims, or weirdly melted and deformed images. Their creators dismiss these as inevitable early bugs in a technology that’s still maturing and improving. They promise that with more training data, AI will keep getting better, until it can not only match human performance but surpass it.

But there’s a problem: the internet is no longer pristine. It’s been polluted by immense quantities of text and images generated by these AIs. There’s no reliable way to screen this material out, which means that later generations of AIs will be trained on data created by earlier generations of AIs. Because of this, AIs are no longer learning how to be more human; they’re learning how to be more like AI.

You can think of this as the AI version of inbreeding—and it’s a problem for the same reason that inbreeding is harmful in nature.

Continue reading on OnlySky…
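
For the curious, here’s the “inbreeding” dynamic in miniature. This is my own illustrative sketch, not something from the column: a one-dimensional Gaussian stands in for the training distribution, each “generation” of the model is fitted only to samples produced by the generation before it, and the sample size and generation count are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(42)

    # Generation 0: "human-made" data drawn from the true distribution.
    data = rng.normal(loc=0.0, scale=1.0, size=50)

    for gen in range(1, 101):
        # Fit a simple model (a mean and a spread) to whatever data exists now.
        mu, sigma = data.mean(), data.std()
        # The next generation trains only on the current model's output:
        # synthetic data fully replaces the original human-made data.
        data = rng.normal(loc=mu, scale=sigma, size=50)
        if gen % 20 == 0:
            print(f"generation {gen:3d}: fitted spread = {sigma:.3f}")

Run it and the fitted spread tends to drift toward zero: each generation keeps mostly the previous one’s average behavior and loses the rarer variation in its tails. That progressive narrowing is a toy version of what researchers call model collapse, and it’s the statistical analogue of inbreeding.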

Comments

  1. John Morales says

    “AI may be about to hit a hard stop”?

    Nah.

    “Because of this, AIs are no longer learning how to be more human; they’re learning how to be more like AI.”

    They’re not supposed to be human-like!

  2. Snowberry says

    @John Morales #1: “They’re not supposed to be human-like!”

    In order to be truly useful, in most cases, AIs need to be better than the vast majority of humans (or better than any human, but that’s usually a bonus and not a requirement) and unaffected by conditions which would degrade human performance. For uses which the general public rarely sees or interacts with directly, there’s no need for them to be remotely human-like; in some cases, that would be rather pointless and/or inefficient anyway. But for the ones which the general public sees and can interact with directly, a lot of them *will* need to be better at acting human than the vast majority of humans.

    Of course, there are people who say “just use humans for public-facing purposes, even if it’s less efficient somehow!”, but if you’ve been paying attention to the history of industry over the past couple of centuries, that’s not very realistic.

    • John Morales says

      “But for the ones which the general public sees and can interact with directly, a lot of them *will* need to be better at acting human than the vast majority of humans.”

      Why? There’s no good reason for it.

      I see nothing wrong with it obviously being an artificial intelligence and inhuman rather than pretending to be a human. In fact, I think transparency should be a priority, myself.

      (I no more expect an AI to act human than I expect an aeroplane to act like a bird)

      I note part of the impetus for my comment was the conceit (cf. Data in Star Trek) whereby AIs supposedly seek to become more human. It’s a trope, but a silly one.

      A smaller part is the suspicion that perhaps we humans are basically the same sort of thing, just better at it. For now.

      • Snowberry says

        Oh, hey, did you know that there are current experiments with self-flying planes, intended for short-range cargo flights? That’s not a public-facing use, so it doesn’t need to be humanlike. And if, someday, we get self-flying passenger flights, that’s still not a public-facing use, because the people who interact with the plane’s mind aren’t the passengers but the airport staff. The plane can make pre-recorded announcements, obviously, but it’s not as though the passengers are supposed to respond to the pilot.

        Now when it comes to things like personal assistants, hospitality, mental health services, and maybe search and rescue, I assume that there are some people who would rather just deal with a cold, unfeeling device than with a person or a very good facsimile of one, much as supermarket self-checkout scanners are a boon for people who want to minimize the need to interact with a cashier. But I suspect they’d be a minority. Though at least you can set your pocket assistant’s personality to “Clippy”…

        The thing which most people don’t talk much about, for obvious reasons, is sexbots. Technically they sort of exist already, in the form of interactive voice AIs inside a doll’s head. That’s eventually going to lead to intimate companionship robots, which will eventually be able to literally make him (or her) a sandwich, which will in turn lead to robot housekeepers who are considered part of the family à la Rosey from The Jetsons. Though admittedly, that housekeeper will have been “descended” as much from non-human-like service bots like Roombas as from sexbots, since those will have been developed in parallel. But you might be surprised how much sextech indirectly affects the development and adoption of a lot of other things…

        Now whether any or all of this is a “good idea” is a matter of personal value judgement, but it’s likely coming regardless. The usual caveats apply: humans have to avoid self-extinction, and there have to be no unexpected “hard barriers” to achieving these things. (The current batch of AIs poisoning their own training data is potentially a “soft barrier” which could eventually slow progress to a crawl, but that just means development would proceed differently from that point on.)

  3. John Morales says

    Your initial claim was “But for the ones which the general public sees and can interact with directly, a lot of them *will* need to be better at acting human than the vast majority of humans.”

    Your justification upon being challenged on that is “I assume that there are some people who would rather just deal with a cold, unfeeling device than with a person or a very good facsimile of one, much as supermarket self-checkout scanners are a boon for people who want to minimize the need to interact with a cashier. But I suspect they’d be a minority.”

    What you initially claimed was a need now becomes an assumption that only a minority can cope with machines that do not try to be better at acting human than the vast majority of humans.

    (BTW, https://en.wikipedia.org/wiki/Teledildonics )

    • Snowberry says

      I’m not claiming that most people can’t cope with “unfeeling machines” (I even gave the caveat that a minority would prefer dealing with them), but implying that most people would prefer not to. And of course the obvious response would be “just keep using humans”, but that would be fighting against the entire history of industrialization. If an industry can find a way to expand while using fewer employees, it will. Even, sometimes, if the short-term costs are high and the long-term benefits are uncertain.

      I know what teledildonics is. Are you implying that sexbots will never take off, in favor of people having sex in virtual worlds instead? Because there’s a lot of audience overlap between those two things, and there are also currently some early experiments in using AI to run NPCs (non-player characters) in virtual sex worlds, so…

      • John Morales says

        “I know what teledildonics is. Are you implying that sexbots will never take off, in favor of people having sex in virtual worlds instead?”

        No. I’m implying humans don’t care all that much whether what’s stimulating their bits and their senses is a machine. Sex dolls are already a thing.

        Your more substantive claim, that you are “implying that most people would prefer not to”, is a tad different from your initial claim, and yet I still don’t concur.

        Consider:
        When ATMs came along, did people freak out that there was no human attendant?
        When self-checkouts came along, did people freak out that there was no human attendant?
        When Siri/Alexa came along, did people freak out that there was no human attendant?

        We humans are very adaptable — even some of us oldies.

  4. sonofrojblake says

    The central idea of the piece is flawed, in that it rests on the assumption that in order to be successful, AIs have to keep getting better. They don’t. All that has to happen is that the environment around them becomes less friendly to human creativity, and friendlier to AI. And since there’s a LOT of money tied up in making sure AI “works”, would you bet against that happening?

    I recently watched a video describing how self-driving cars are going to destroy cities. Yes, it’s a clickbaity title, but I had a car journey to make (irony!), so I listened to it as I drove. And it’s depressingly persuasive. https://youtu.be/040ejWnFkj0?si=YkE8vIt6MEojX0Qc

    While it is focused on autonomous vehicles, I believe the argument can be extended somewhat to the application of machine “intelligence” to other areas. Check it out, and start working out how to resist.

    Me? Ten years ago I expected and wanted to own a self-driving electric car by now. Five years ago I gave up on the self-driving part arriving any time soon, and as more and more of my friends made their first moves into EV ownership, my attitude to them hardened. Now I’m set against ever getting into, much less owning, any vehicle that describes itself as “autonomous”, and I’m making plans to ensure I have a mainly ICE-powered car for the rest of my life. I strongly believe everyone should be doing this, because neither of these technologies is anywhere near ready, and the desperation of the businesses behind them to force them on us prematurely is already causing problems that are only going to get worse.

    • John Morales says

      https://slate.com/technology/2024/12/waymo-self-driving-cars-autonomous-vehicles-ai-austin-texas.html

      I went into the app and requested a car to complete the rest of my journey. The app slowly loaded. “It looks like you have the same car,” the agent said. “I’m seeing a blue backpack behind the driver’s seat.” He was using the camera inside the car to see my things. I breathed a sigh of relief. Had I not gotten the same car, my backpack hopefully would have turned up in Waymo’s lost and found. That wasn’t a sure thing, though. “You got lucky this time,” the customer service representative informed me. Next time, I’d heed the car’s reminder to take all bags, phones, and wallets before exiting.
