

Émile P. Torres just pointed out the existence of this 5 year old video.

Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn discuss with Max Tegmark (moderator) what likely outcomes might be if we succeed in building human-level AGI, and also what we would like to happen.

It’s 10 right-leaning white men dressed in black suits who have a history of stirring up fear for their own profit (or, in the case of Tallinn for instance, of dismissing credible concerns about climate change for his own profit) clumsily sharing too few microphones to make up some science fiction shit. The panel is titled Superintelligence: Science or Fiction? | Elon Musk & Other Great Minds. I’m done already after seeing the title and lineup, but I’ve always wanted to witness hell, so I watched a little of it. Very little of it.

I made it to the 2:52 mark before I said, “aww, hell no, fuck this” and bailed out. Years of dealing with creationists has given me a high tolerance for bad bullshit, but this was too much for me. How far can you get?

Comments

  1. birgerjohansson says

If I want rubbish then I can watch a Japanese film with rock music, alien invasions, a demon and some goddamn zombies*, and that film is far more enjoyable than the fantasies these guys make up.
    Besides, might makes right so if the AIs win, more power to them.

    *that film exists for real.

  2. Reginald Selkirk says

    Adjusted Gross Income?
    I am guessing “AGI” means Artificial Intelligence. Perhaps the G is for “General”?

  3. jo1storm says

    Curse you! Why are you doing this to me?! I watched it when it came out. Don’t remind me of my wasted time. I want that hour back. Pretty please?

The grift is basically: there are some jobs that only humans can do right now because of their intelligence. If we invent artificial intelligence then we can program it the way we want. Any way we want. That allows removing all ethical boundaries and, for the low low LOW price of hardware which is getting cheaper by the day, owning an unlimited number of very obedient slaves.

    The idea makes their peepees hard and makes their mouths start salivating. Basically, they can’t wait until it is possible. They would like it very much.

  4. Dunc says

    If we invent artificial intelligence then we can program it the way we want.

One of the defining characteristics of intelligence is that you can’t program it the way you want, because it is able to come up with its own ideas. That’s kind of the whole point.

  5. Rich Woods says

    @Dunc #9:

    Shh! These are Great Minds! Don’t interrupt them while they are telling us how the future is going to be. Who knows what stray, transient thought might thus go unexpressed, dooming humanity to an early and painful destruction?

  6. says

    Elon is currently trying to exploit the turmoil in Iran in order to sell his satellite dishes. I’m sure he thinks this is good PR that’ll make him look like a generous champion of freedom. Bleh.

  7. jo1storm says

    @Dunc “One of the defining characteristics of intelligence is that you can’t program it the way you want, because it is able to come up with its own ideas. That’s kind of the whole point.”

    Yeah, but you can fiddle with the starting assumptions and “guide” the learning process. Don’t ask me how is programming AGI supposed to work. The image I got from that segment is Robert Picardo’s Star Trek Voyager holographic Doctor with his ethics subroutines removed.

    That’s the reason I described it as grift.

  8. PaulBC says

    jo1storm@8 It’s not an original idea, and sounds like Westworld the way you describe it (note: I have never watched that show so I’m going by description).

    I have nothing against futurist speculation, but what gets me is when self-proclaimed geniuses spitball ideas that have been done to death in science fiction and act like they’re saying something original.

  9. says

    I’ve worked for years with automated machinery. I have ZERO fears of Skynet because I know underneath that Terminator’s skin is the same consumer grade crap every machine I’ve ever worked on had and, trust me, fails on a regular basis. The same Keyence sensors. The same MAC valves. The same FESTO garbage that fails like clockwork at 1000 hours.

    Think about it like this. The meat machine humans work fine for around 60-80 years before they start to break down. Have you ever owned a car that lasted that long? A phone? A computer? Our most basic tech is only good for a decade or two. The more complicated, the less reliable. Somehow biology and evolution solved this problem.

    Any human who thinks they are going to be a god in the future has no idea how anything works. Machines are human made constructs and will never be as good as the meat machines we already have. Wake me up when they invent an AI that can read my chicken scratch.

  10. says

Also, some of you may be familiar with Conspiracy Catz. He’s a YouTuber and a flerf debunker. He’s also a science teacher in England and is incredibly funny. In his most recent video, he used AI to write and illustrate a short story. The results are HILARIOUS.

Granted, this is consumer-grade AI he’s using, but it illustrates the point. The most dangerous thing we can do with AI these days is assume it’s functional. We use a very basic form of AI every day in the form of autocorrect and autofill in our Google searches. It’s just barely reliable enough to find you a recipe for chicken marsala.

All this AI apocalypse, singularity nonsense is a lot further down the road than any of these numbskulls think. To anyone reading this in 2022: don’t worry, we’ll all be dead long before any of that happens, IF it ever happens.

  11. robro says

I recently read this article in Scientific American: Artificial Intelligence Needs Both Pragmatists and Blue-Sky Visionaries. While the author, an emeritus professor of computer science at the University of Maryland, says AI needs both, he’s really saying we need the pragmatists and basic researchers. We have plenty of “blue-sky visionaries” hyping the promises of AI with demo-ware to get investment money. Meanwhile, more of the fundamental research is going into corporations, so it doesn’t get peer-reviewed. In my current job, I stumbled into the AI matrix a couple of years ago, and so far there’s a lot of work compiling and analyzing data, building systems, a few POCs, but we still don’t know how it will work, or have specific use cases to apply it to. But there’s a lot of hope that the voodoo will be super helpful.

  12. birgerjohansson says

    Robro @ 17
I can think of two individuals who got their blue-sky speculations right (I do not mention Saint Arthur of Clarke, he is taken for granted).

    The first is Freeman Dyson; his The Sun, The Genome and the Internet from the 1990s was 100% on the money.

The other is the author and polymath Stanislaw Lem. His 1970s prediction of a software failure mode in Ananke precisely describes the failures of the first two prototypes of JAS 39 Gripen 15-20 years later.
Cut off behind the Iron Curtain, he invented the concept of nanotech independently of Feynman, and he predicted the concept of simulated reality (aka phantomatics), both in his 1966 Summa Technologiae.
Re. conspiracy theories as explanations, it is fun to revisit his The Chain of Chance from the late 1970s.

  13. birgerjohansson says

Addendum: Lem was misogynistic, having grown up as a secular Jew in the very conservative Catholic Poland of 1920-1939. But he got the science right.

  14. PaulBC says

    Ray Ceeya@15 I agree we’re unlikely to see any of this in the foreseeable future, but I disagree with your reasoning.

The constraints on self-repairing human beings and mechanical constructs are very different. The big difference is that while you really can’t “upload” human consciousness into a computer (and there are no reasonable plans for changing this), an android gives you an opportunity to decouple hardware and software. When individual parts degrade, you just rebuild the body and transfer the software. Hardware may always be unreliable, but there are already techniques (error correction, checksums) for maintaining information integrity.

    I think (assuming much smarter AI than we have now or may have any time soon) that parts unreliability is a non-issue. You use cheap and potentially faulty parts with a lot of redundancy.

  15. Jazzlet says

    Ray Ceeya @15
Basic technology does last, e.g. I still use a hand-powered mincer (grinder) that was made before I was born, sixty-some years ago. It’s when technology goes beyond the basic that it doesn’t usually last so long, but that can be a choice by the manufacturer, as in built-in obsolescence.

  16. unclefrogy says

    the title alone turned me off
    “Superintelligence: Science or Fiction? | Elon Musk & Other Great Minds”
right, let’s pick a bunch of winners and players from Las Vegas for economic analysis
one of the things that has always bothered me about all the speculation about the future, save in good fiction writing, is the assumptions that are made without any of the caveats one finds in fiction.
Civilization and history will continue as they are right now, when even a cursory look at the last few tens of thousands of years of history will show that things change repeatedly. For the last few hundred years we have been living in a time where the emphasis is on money; technology is just another way to make money, and the technology we make and use is heavily influenced by “the market”. What we do is culture, and that is made up of agreements between human beings; there is very little that is written in stone.
If we ever develop “super intelligent machines” that can repair, maintain and modify themselves, we are safe. If we get the future these panelists envision, they will likely turn much of the earth into an open-pit slag heap.

  17. snarkrates says

    One reason I don’t think we need to worry about an AI apocalypse just yet is because there really isn’t much point in developing an AI that thinks like humans. Humans can already do that. What we need are AIs that think like we cannot–this is where AIs have pulled off their greatest coups–like beating a human at Go.

    We have a pretty good “general intelligence” at least on those rare occasions where we use it. What we need are AIs tailored to the tasks we do not do well.

  18. PaulBC says

    snarkrates@24 I’m not seriously worried about an AI apocalypse in the foreseeable future. On the other hand, I don’t see how machines that don’t think like humans are any less dangerous.

The economic incentive for AI comes down to automation in the broadest sense. It’ll be a while till we get to fully automated supply chains, but there is nothing in principle preventing everything from mining to manufacturing to distribution being done by computers. This includes the production of the same autonomous machines that carry out these operations, so in principle, you could have a large-scale von Neumann replicator in the form of a global supply chain.

    In practice, we still employ people (more than we need most likely) because labor doesn’t appear to be all that expensive when you can shift it around the world. There’s not a lot of reason to go for pure automation as long as there are people, and there are political constraints as well (the situation becomes very different if you want to exploit off-world resources).

    A lot of “knowledge workers” are vulnerable to losing their jobs to automation, often more so than service workers such as plumbers who need human experience and flexible thinking in uncontrolled environments, unlike, say, a software developer who may get by doing one thing well with standardized equipment. (I consider it a failure of software engineering that I’m still employable, TBH, but eventually that will change.)

    So imagine a dystopia consisting not of human-equivalent AI but simply reliance on a massive infrastructure that is driven by computers. Are humans still in control? We probably don’t mind too much if distribution of goods is automated. How about marketing? We think that’s something humans do, but we are already targeted (often clumsily) by algorithms with no human intelligence at all.

    So there’s this huge system that is catering to the needs of humans, some of which it may be creating itself through targeted advertising. Assuming the continuing existence of capitalism (which is a poor fit to this scenario), you also have nominal human “owners” of capital who effectively just have special privileges of resource allocation but provide no actual value. Even without anything like human intelligence, is there a point at which such a system will interpret human beings as parasitic and attempt to optimize us out of the loop?

    I mean, not a new idea, nor even a likely idea. My point is just that we have plenty to worry about even if we dismiss the idea of a computer passing the Turing test.

  19. says

    @21 Jazzlet
    Right?!? You see my point. After thousands of years we perfected the electric stove. I’m extending your kitchen analogy. We have functioning microwaves. The refrigerator is AMAZING. This is the level of reliability any android/robot would have to achieve before it could replace a human. Also androids are overrated. Once I see a set of robotic hands and a pair of robotic eyes change out the batteries in a TV remote, then I might be concerned.

  20. unclefrogy says

    Even without anything like human intelligence, is there a point at which such a system will interpret human beings as parasitic and attempt to optimize us out of the loop?

Here is one of the things that is usually glossed over: what is the point of the machine existing at all?
Humans, being biological, have a desire to survive and reproduce. We also have great curiosity about what life is, what existence is. We have great attraction and affection for our fellows; in a word, we are driven by emotions which have their roots in biology.
Why would machines, if they achieve sentience, have any of that?
It sounds like our ideas and fears about the machines are like our ideas and fears about extra-terrestrial intelligence: mostly projections, “monsters of the Id”.

  21. PaulBC says

    unclefrogy@28

What is the point of the machine existing at all? Humans, being biological, have a desire to survive and reproduce.

    Sure. It would need some kind of drive to be a significant threat, but for purposes of dystopian fiction, that’s not hard. The original point of the global automated system would be to provide for humans, but it would still have some need to optimize efficiency and reduce waste. It just has to reach the conclusion that humans are the primary cause of the waste. That’s its “point”.

    Early SF writers were obviously aware of too-literal androids and Asimov introduced the “laws of robotics”. But this is built on a rather silly premise that catering to humans could be embedded so deeply into the architecture that it would be impossible to change. It’s just software, and easily modified whether through malice or chance.

    Fantasies aside, the question of liability is real. As soon as computers are making decisions that affect life and death (as with self-driving cars) we are living in a very different world. Take it to its logical conclusion, and you can’t rule out SkyNet. The computers don’t have to pass the Turing test or resemble human intelligence in any way. They just have to be in control of production and driven by some optimization goals that may have unintended consequences.

  22. John Morales says

    PaulBC,

    Take it to its logical conclusion, and you can’t rule out SkyNet.

    Fantasies aside no less and no more, take it to its logical conclusion, and you can’t rule out the Culture.

  23. Jim Balter says

I have a lot of issues with Chalmers and his property dualism and panprotopsychism, but he isn’t right-leaning, doesn’t have a history of stirring up fear for his own profit, and isn’t even wearing a suit. (Actually, few of these guys are wearing suits.)

  24. unclefrogy says

@30 no, absolutely no need for “super intelligence”; just dumb black boxes in charge of things that are critical is enough to get us all killed.

  25. PaulBC says

    JM@31

    Fantasies aside no less and no more, take it to its logical conclusion, and you can’t rule out the Culture.

    And I do not.

  26. unclefrogy says

    @31
yes, culture has proven itself capable of some colossal mistakes in the past, and we are living through the effects of some of those mistakes today, which are bad enough that the current culture may not even survive, much less change.

  27. Dunc says

    PaulBC, @ #25:

    [I]magine a dystopia consisting not of human-equivalent AI but simply reliance on a massive infrastructure that is driven by computers. Are humans still in control? We probably don’t mind too much if distribution of goods is automated. How about marketing? We think that’s something humans do, but we are already targeted (often clumsily) by algorithms with no human intelligence at all.

    So there’s this huge system that is catering to the needs of humans, some of which it may be creating itself through targeted advertising. Assuming the continuing existence of capitalism (which is a poor fit to this scenario), you also have nominal human “owners” of capital who effectively just have special privileges of resource allocation but provide no actual value.

    There is an argument that this has already happened, but instead of computers, it’s run by “slow AIs” we call “corporations” and “markets”.

  28. macallan says

    @2

If I want rubbish then I can watch a Japanese film with rock music, alien invasions, a demon and some goddamn zombies*, and that film is far more enjoyable than the fantasies these guys make up.

    That’s … about 50% of all anime, isn’t it?