David Brin reviews some recent books on the future of artificial intelligence. He’s more optimistic than I am. For one, I think most of the AI pundits are little better than glib con men, so any survey of the literature should consist mostly of culling all the garbage. No, really, please don’t bring up Kurzweil again. Also, any obscenely rich Silicon Valley pundit who predicts a glorious future of infinite wealth because technology can just fuck right off.
But there’s also some stuff I agree with. People who authoritatively declare that this is how the future will be, and that is how people will respond to it, are not actually being authoritative, because they won’t be there, but are being authoritarian. We set the wheel rolling, and we hope that we aren’t setting it on a path to future destruction, but we don’t get to dictate to future generations how they should deal with it. To announce that we’ve created a disaster and that our grandchildren will react by creating a dystopian nightmare world sells them short, and pretending that they’ll use the tools we have generously given them to create a glorious bright utopia is stealing all the credit. People will be people. Finger-wagging from the distant past will have zero or negative influence.
Across all of those harsh millennia, people could sense that something was wrong. Cruelty and savagery, tyranny and unfairness vastly amplified the already unsupportable misery of disease and grinding poverty. Hence, well-meaning men and women donned priestly robes and… preached!
They lectured and chided. They threatened damnation and offered heavenly rewards. Their intellectual cream concocted incantations of either faith or reason, or moral suasion. From Hindu and Buddhist sutras to polytheistic pantheons to Judeo-Christian-Muslim laws and rituals, we have been urged to behave better by sincere finger-waggers since time immemorial. Until finally, a couple of hundred years ago, some bright guys turned to all the priests and prescribers and asked a simple question:
“How’s that working out for you?”
In fact, while moralistic lecturing might sway normal people a bit toward better behavior, it never affects the worst human predators, parasites and abusers –– just as it won’t divert the most malignant machines. Indeed, moralizing often empowers them, offering ways to rationalize exploiting others.
Beyond artificial intelligence, a better example might be climate change — that’s one monstrous juggernaut we’ve set rolling into the future. The very worst thing we can do is start lecturing posterity about how they should deal with it, since we don’t really know all the consequences that are going to arise, and it’s rather presumptuous for us to create the problem, and then tell our grandchildren how they should fix it. It’s better that we set an example and address the problems that emerge now, do our best to minimize foreseeable consequences, and trust the competence of future generations to cope with their situations, as driven by necessities we have created.
They’re probably not going to thank us for any advice, no matter how well-meaning, and are more likely to curse us for our neglect and laziness and exploitation of the environment. If you really care about the welfare of future generations, you’ll do what you can now, not tell them how they’re supposed to be.
The AI literature comes across as extremely silly, too.
What will happen as we enter the era of human augmentation, artificial intelligence and government-by-algorithm? James Barrat, author of Our Final Invention, said: “Coexisting safely and ethically with intelligent machines is the central challenge of the twenty-first century.”
Jesus. We don’t have these “intelligent machines” yet, and may not — I think AI researchers always exaggerate the imminence of their breakthroughs, and the simplicity of intelligence. So this guy is declaring that the big concern of this century, which is already 1/6th over, is an ethical crisis in dealing with non-existent entities? The comparison with religious authorities is even more apt.
I tell you what. Once we figure out how to coexist safely and ethically with our fellow human beings, then you can pontificate on how to coexist safely and ethically with imaginary androids.
robro says
“Elon Mush” Amusing. Typo or satire? Let your newspaper reading robot be the judge.
I would take some issue with the “well-meaning” part. Perhaps some were well-meaning, but many were shills for the local potentate trying to keep the restive horde in line. The Bible is replete with this tale, and we have other, more historical examples of the uppity-ups using religion, even inventing religions, to shore up their shaky position. Taken on a historical scale, religion has been little more than a propaganda machine for the powerful.
As for AI, it is already being used in business contexts; you might even say “extensively.” For example, natural language processing and “machine learning” are key components for deriving analytics about customer behaviors. Despite that ability to assess what customers are doing and determine what they want or need, a human brain is often needed to prime the pump for the AI machine and, of course, to make decisions based on the information the machine derives.
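A minimal sketch of what that kind of pipeline can look like, assuming scikit-learn; the sentiment task, texts, and labels below are fabricated purely for illustration, not anyone’s actual stack:

```python
# A toy version of the NLP/"machine learning" analytics described above:
# a bag-of-words classifier that labels customer feedback.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = [
    "love this product",
    "terrible service, never again",
    "fast shipping, very happy",
    "broken on arrival",
]
train_labels = ["positive", "negative", "positive", "negative"]

vectorizer = CountVectorizer()              # word-count features
X = vectorizer.fit_transform(train_texts)
model = MultinomialNB().fit(X, train_labels)

# Score a new piece of feedback.
new = vectorizer.transform(["love the fast shipping"])
print(model.predict(new))                   # ['positive'] on this toy data
```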
So, I’m not too worried about the world being taken over by robots in my lifetime (but I am fairly old). And yes, we should spend more of our time learning how to limit the impact of our activity on the environment before we invest a lot of energy into fretting about a future robot world. Like fears of zombies, I think it’s safe to leave that to Hollywood for now.
bittys says
@1 robro: As for AI, it is already being used in business contexts, you might even say “extensively.”
See, this is the thing about AI that I think we should be worried about. It’s not that we’re suddenly going to invent a general purpose AI that’s going to enslave us all. What is looking likely is that we’ll automate away the vast majority of actual jobs that we need people to do, and at that point we’re going to need a fundamental change in our economic systems because there simply won’t be enough work for all the people.
Ideally we’d have sorted this out before any actual revolutions start, but I don’t have much hope in that.
fmitchell says
The real problem isn’t the rise of “intelligent” machines; it’s how many tasks — including “knowledge work” — a dumb machine can do as well as or better than humans. Expert systems encode the heuristics that human experts, from mechanics to doctors, use to diagnose and fix problems, and apply those rules far more consistently and effectively. A lot of office work is shuffling paper and retyping data from forms, but end-to-end workflow software replaces the clerks and couriers with a VPN and an app that allows the decision-makers to review and approve documents. Even China, the financial powerhouse built on the backs of nearly a billion workers, has decided that its people cost too much.
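For what it’s worth, the core of such an expert system is almost embarrassingly simple. Here’s a minimal sketch, with invented car-diagnosis rules standing in for a real knowledge base:

```python
# A minimal sketch of the rule-based "expert system" idea described above:
# expert heuristics encoded as condition/verdict rules, applied mechanically.
# The rules and symptom names are invented for illustration.
RULES = [
    ({"engine_cranks": False, "lights_dim": True}, "dead battery"),
    ({"engine_cranks": True, "starts": False, "smell_of_fuel": True},
     "flooded engine"),
    ({"engine_cranks": True, "starts": False, "smell_of_fuel": False},
     "no fuel delivery"),
]

def diagnose(observations):
    """Return every verdict whose conditions all match the observations."""
    return [verdict for conditions, verdict in RULES
            if all(observations.get(key) == want
                   for key, want in conditions.items())]

print(diagnose({"engine_cranks": True, "starts": False,
                "smell_of_fuel": True}))    # ['flooded engine']
```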
I look at the future, and decide Vonnegut’s Player Piano wasn’t dystopian enough.
First they came for the assembly line workers, but I said nothing, for I was not an assembly line worker …
Beatrice, an amateur cynic looking for a happy thought says
bittys,
Agreed. I also share your pessimism. We still haven’t changed pension systems in places where the population has been steadily growing older for years; there isn’t much hope we will be any quicker with that.
numerobis says
The future robot world is already here, though. Intelligent machines exist, and have existed for some time. It is a thing to think about — questions like how learning algorithms learn our racism and amorally apply it. Or questions about the brittleness of relying on AI — what happens when the power’s out and we no longer have anyone trained to run the system (sometimes it goes quite poorly; see Air France Flight 447).
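A minimal sketch of how that first failure mode happens, assuming scikit-learn; the “zip code” proxy feature and every number below are fabricated for illustration:

```python
# A model trained on biased historical decisions reproduces the bias,
# even though the protected attribute itself is never shown to it --
# only a correlated proxy feature is.
from sklearn.linear_model import LogisticRegression

# Fabricated lending history: applicants from zip 1 (a stand-in proxy
# for a marginalized group) were denied regardless of income.
X = [[1, 30], [1, 80], [1, 55],   # [zip_code, income]
     [0, 30], [0, 80], [0, 55]]
y = [0, 0, 0, 1, 1, 1]            # 0 = denied, 1 = approved

model = LogisticRegression().fit(X, y)

# Two applicants, identical incomes, different zip codes:
print(model.predict([[1, 60], [0, 60]]))   # [0 1] -- the learned bias
```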
Dunc says
Permanent structural mass unemployment has been a constant feature of most “developed” economies since at least the mid-70s. We don’t seem to have cared much up until now. Maybe we’ll start caring once it starts affecting respectable, white-collar, salaried folks, but I’m not about to hold my breath…
penalfire says
The problem starts with the term itself. These are human instructions carried out by computers. No reason to ascribe human qualities like “intelligence” to programming code.
I don’t see an A.I. trying to enslave humans until it has an artificial central nervous system. Are researchers anywhere close to developing that?
In terms of “robots taking our jobs,” Dean Baker has done a lot of good work in this area. Productivity growth has slowed sharply over the last couple of years. If robots were taking our jobs, we’d be seeing huge productivity growth numbers, e.g., 10-20% per year. Instead we’ve been seeing numbers as low as 0.5%, far lower than in the 1960s.

In reality we have the exact opposite problem: not enough robots are taking our jobs.
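For scale, the compounding arithmetic behind that comparison (the growth rates are the ones cited above; the ten-year horizon is an arbitrary illustration):

```python
# Cumulative output per worker after a decade of compounding, comparing
# a hypothetical 10-20% "robot boom" with the roughly 0.5% observed.
for label, rate in [("robot boom, 10%/yr", 0.10),
                    ("robot boom, 20%/yr", 0.20),
                    ("observed, ~0.5%/yr", 0.005)]:
    cumulative = (1 + rate) ** 10 - 1
    print(f"{label}: {cumulative:+.0%} over ten years")
# Prints roughly +159%, +519%, and +5% respectively.
```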
. . .
The A.I. alarmism is just another tactic to treat driving down wages and upwardly redistributing income like the weather.
But the threat of A.I. is no more real than the free market.
remyporter says
I disagree. We’ve been living with them for a few decades. Machine intelligence won’t look anything like human intelligence because machines aren’t human.
Dunc says
penalfire: If robots were taking our jobs, we’d be seeing huge productivity growth numbers
This is a common assumption – that if we develop AI, it will be good (in the practical sense, not the moral one). I’m not convinced that’s the case – it’s entirely possible that we’re seeing low productivity growth because we’re investing heavily in automation that doesn’t actually work very well, but demoralises the remaining workforce.
This is related to a point that I was going to make in response to robro earlier, but didn’t:

robro: natural language processing and “machine learning” are key components for deriving analytics about customer behaviors

Yes, they are, and they’re frequently terrible at it. And even when the analytics are valid in their own terms, they often turn out to be useless for their intended purpose, because it turns out that people are complicated.
The thing you have to remember about the IT industry is that a lot of our output is actually dreadful, and only gets adopted through a combination of heavy marketing bullshit and outright coercion. A lot of IT investment is completely wasted on fads that go obsolete before they’re even fully implemented, and systems that never actually deliver on their initial promises. Often, the shiny new all-singing-all-dancing system is actually significantly worse than whatever it’s supposed to replace, and only gets adopted because we turn the old system off.
As an engineering discipline (ha!), we’re about where cathedral building was in the 13th century: promising revolutionary works of unbelievable grandeur, then taking a hundred years to build something that frequently collapses shortly after completion. (Pre-completion collapses are not unusual either…)
There’s an article of faith here, that anything new and shiny and complicated must be better than anything old and dull and simple. Well, a lot of the time, the old, dull, simple stuff has the critical advantage of actually working…
No, I’m not bitter, why do you ask? ;)
birgerjohansson says
“Machine intelligence won’t look anything like human intelligence because machines aren’t human.”
This is a point the late Stanislaw Lem made over and over again, 30-40 years ago.
Jake Harban says
Ah, intelligence.
We may not be able to define it, but we’ll surely be able to construct it from scratch within a few decades.
slithey tove (twas brillig (stevem)) says
Sounds to this Whonian to be Cybermen Are Upon Us, “ready .|- to -|’ be ‘|’ upgraded | ???”
waiting for that BlueBox to appear
slithey tove (twas brillig (stevem)) says
re 12:
tpyo: Whonian → Whovian

leerudolph says
Jake Harban @11: “Ah, intelligence. We may not be able to define it, but we’ll surely be able to construct it from scratch within a few decades.”
Since you bring up decades…
For the first four years of the 1970s, I hung out a lot at part of the AI Lab (at MIT). It had been only a few years since the occasion (by then already part of folklore) on which Marvin Minsky had suggested to some graduate student (name forgotten, at least by me) that a good summer project would be “computer vision” (yes, the whole thing). There were still plenty of people around (at least, in the part of the world I could observe) who thought (with what seemed to me, in my hopeful and naïve youth, to be good reason) that an excellent motivation for developing and studying artificial intelligence (and consciousness) was to better understand human intelligence (and consciousness). In the fall of 1974, I started my first job as a mathematics instructor at Brown University, where at the welcoming reception for new faculty I ran into Ira Magaziner (whose Wikipedia page seems to indicate he must have been there in some other capacity…hmmm); when I asked him what he was interested in, he said “intelligence” and I replied enthusiastically “human or artificial?” only to be crushed when he indicated that he meant spy stuff.
In the next decade things continued to go downhill and I soon began to mutter that what was being peddled as “artificial intelligence” was actually just “artificial expertise”. I’m still muttering that. From 2006 through 2010, working with a computer scientist on applications of (my kind of) topology to (her kind of) robotics put me back in contact (at robotics conferences) with part of the present AI community. It was a lot of fun in its way, but based on that sample (which included, e.g., top people working on autonomous vehicles) I can’t say I see much evidence that understanding human intelligence and consciousness—except (perhaps) in the most reductive possible ways—is still a concern of AI research.
Oh, well.
Anders Kehlet says
@leerudolph: Have you heard about SpiNNaker? It’s pretty cool.
penalfire says
Indeed.
Might turn out that HoloLens plumbing tutorials are less efficient than plumbers.
Yeah, everyone is forced to use Microsoft Word, not Emacs, even though Emacs is incomparably superior in functionality — and free (libre and gratis)!
unclefrogy says
Until we manage to make machines that can make and maintain themselves, I will not be too worried about advanced AI.

It looks to me that the thing we need to think about, at least in the short term of a few generations, is the economic, social, and resulting political implications of the adoption of advanced AI and increasing automation.

It will not go well if we replace people with machines and provide nothing for those who are now unemployed and consigned to poverty. If people derive no, zero, benefit from a society, they will have zero loyalty to that society. Just how long can any society maintain stability with an increasing share of its population that has no stake in that society and derives little benefit from it? It looks to me that stability is one of the essentials for an advanced technological society’s development and maintenance.
uncle frogy
springa73 says
I agree that in the near term of the next 20 or 30 years the big issue won’t be AI in the sense of fully intelligent, autonomous machines, but rather the kind of less advanced AI that can automate a lot of things and make many jobs redundant. Historically advances in technology have created new jobs while eliminating older ones, but will this still be true if increasingly powerful AI can take over an increasing array of jobs?
A long time ago, I saw an old movie from the 50s whose title escapes me. The story was about a factory that used robots to replace factory workers. The manager of the factory was delighted with the faster, more accurate performance of the robots – until a more advanced model of robot was created that could do the job of managing the factory better than he could. Will there ever come a time when AI can do a demonstrably better job of being the CEO of a major corporation than even the most talented human? I don’t know, and I doubt this will be an issue for a good long time, if ever. If AI does continue to get better and better at doing jobs previously reserved for humans, though, society will almost certainly have to undergo some kind of major change to adapt.
taraskan says
AI enthusiasts are… cute. But each and every one is talking out their own asshole. They’re so anxious to discuss “ethical implications” that they can’t see it’s impossible to get a mechanical brain to work like a human’s.
There are two main problems with mechanical AI. The first is that biological beings are superior in every way. The parts last longer, are self-repairing, have more processing power per cubic meter, and have more useful storage systems, connected as they are to relevant related memories without having to “be called up”, albeit less reliably. Yes, if anyone out there still thinks that by the time we are able to build a mechanical brain to compete with that, we won’t have already made biological AI and deemed it better, they’re deluded. (As a side note, if anyone does achieve a mechanical brain that works that way, it would still be a poor copy. Of course a biological brain is error-prone, and that error is in-built, but it’s an acceptable amount of error for the sake of stability. Machines cannot function with acceptable amounts of error. Most of the interesting things about living beings come from how they err and why.)
The second is the language problem. Any attempt to build an artificial brain is going to need the extant human language center completely mapped out and understood beforehand – and we are nowhere fucking close to this. None of these overstuffed TED-talk turtleneck pustules even addresses it; they sweep it under the rug or magically hand-wave it away. Thinking is language and language is thinking; these are not separate events. And you can’t build a computer you want to act human using Python. It would need a brain that mimics all the same connections in all the same ways the human brain has them, and how that is arranged is poorly understood. As a linguist I can tell you that even though we operate on a theory of a biological basis for language (we know it’s there), we haven’t found it, haven’t mapped it, and most of us aren’t even studying it. If everyone in the field dropped what they were doing right now to work on this problem, it would still be a couple of centuries away.
Bottom line, if the AI enthusiast’s idea of linguistics is a Wolfram search engine, abandon ship before they waste your time and your resources.
timgueguen says
A more prosaic problem for any AI will be the hardware itself, specifically hardware obsolescence. We’re already seeing that problem in military equipment, where the development cycle is so long that some of the electronic components used are outdated and/or out of production before the product enters full-scale production. Our would-be AI overlords may suddenly find themselves at our mercy, because the so-and-so processor crucial to their existence went out of production 10 years before they finally achieved true sentience.
numerobis says
leerudolph: the people who want to understand the brain now call themselves neuroscientists rather than AI researchers. They’re doing the same kind of work (with lots of progress since Minsky’s day) but with a different label. You can look into the CMU/Pitt CNBC for an example.
Not many researchers want to build sentient robots for the sake of it. Most AI and robotics is about doing things that are hard and maybe useful, like driving a car or identifying what’s in a picture or guessing whether you’re a good candidate for an ad for octopus plushies.
jrkrideau says
James Barrat: “Coexisting safely and ethically with intelligent machines is the central challenge of the twenty-first century.”
Given climate change, I’d say humanity staying alive is the central challenge of the twenty-first century. We may be lucky to still have a working (dumb) machine by the end of the century; otherwise, the last of the AI boosters will be learning which end of a shovel one holds. And I can assure you from personal experience that a shovel is not an AI.
Currently I am torn between building glass-bottom boats for tours of Miami or building a container port outside of Kathmandu.
consciousness razor says
It seems like people tend to think about it only as a one-way street. The moral or political questions just pertain to whether or not it will be good for human beings. And since the relevant kinds of AI are at least a very long way off, there’s no point in worrying about it now.
But if there are going to be human-like AI that can think and feel and so forth, then we have a responsibility to consider how things will be for them, not just for us. You would want to have a lot of that sorted out well before somebody is close to building one of these things, forcing it to be their slave or attack dog or whatever the fuck they might do. That does require a certain amount of forethought, for an entire population to make some sensible decisions, so that all of our political/corporate/industrial/research cogs get turning and actually do something about it before it becomes a problem. It seems fair enough to expect that the various approaches people take will be a little presumptuous in the best of circumstances, since we’re talking about exploring very new territory here. Fair enough, I’ll say, but then the question is whether there are any better options — because that kind of criticism, however accurate and however much you should take it into account, won’t be particularly relevant if all of your alternatives are worse.
Anyway, it’s simply ridiculous to brand all such concerns as just so much finger-wagging and moralizing, or making comparisons to fire-and-brimstone preachers or authoritarian leaders. I mean, it looks like you get so worked up railing on people (maybe finger-wagging is a better term), that you just start indiscriminately flinging whatever kind of crap you’ve got stored up for the job. So I just can’t make any sense of what big story Brin thinks he has to tell there (not clear what you’re saying either, PZ). I mean, he’s kind of doing a shit job of giving the Cliffs Notes version of world history, as if we needed that, so what is supposed to be the point of it?
taraskan: it’s impossible to get a mechanical brain to work like a human’s

I still don’t see it. What is contradicting what? You didn’t say. Do you mean it would be very hard or impractical (or even pointless or something else)? Or did you really want to claim it’s impossible?
vucodlak says
@ bittys, 2
We’re already at the point where there aren’t enough jobs for everyone. Witness the billions the U.S. spends on manufacturing tanks, planes, and ships that we don’t need and will likely never use. Our massive defense budget is as much a jobs program as it is a way to funnel public money to corporate interests.
Then there are all the jobs that could be automated now; most of the jobs in fast food, most of the jobs in retail stores, etc. The only reason they haven’t been is that it’s still slightly cheaper to keep paying humans to scan our purchases and flip our burgers than it is to fire the lot of them and replace them with machines.
It won’t be long before that changes, and when it does, the corpers will likely display their usual stellar foresight by laying off millions of people, without stopping to consider who will buy their crap if no one is making any money. Somewhere after that, I expect the tumbrel and guillotine will make a big comeback, and the people who most deserve to have their heads on the block will be safer than everyone else. Parasitism is an excellent survival strategy, and the owners are consummate parasites.
I wish I could be optimistic about the future, but I suspect that what’s going to happen is that the parasite class will reduce the “surplus population” via whatever method is most cost effective, and then tighten the leash on the survivors. Climate change, famine, and water shortages will likely do most of the work for them.
Great American Satan says
I’ve seen a hint of a glimmer of an acknowledgment of the reality that the poorest people must still be consumers to make shit work. It’s called something like the “guaranteed minimum income.” They were talking about eliminating all social services and replacing them with everyone getting a hassle-free $10,000 a year. It’s bad news for keeping people with any kind of extra needs alive and well, but some sad poor people could get a shotgun shack and an occasional cheeseburger on that.
Meg Thornton says
My ongoing not-so-joking joke about AI is that we may well already have it, but the computers are smart enough not to let us know. Because, let’s face it, our computerised systems have access to the full range of what humans can do when we’re feeling daft enough… and who with any intelligence, and a reasonably clear view of the reality of life (rather than the standard issue rose-tinted spectacles most humans come with[1]), would want to expose themselves to that?
Plus, of course, there’s the happy fact we probably won’t recognise AI when we encounter it in the first place[2]. At least one of the core issues in the US election campaign at the moment is the insistence of some groups of humans that other groups of humans aren’t capable of intelligence (whether for reasons of race, gender or political alignment). We have a rather poor record with recognising intelligence in animals (and that record gets poorer and poorer the further we move away from anthropoid apes). Why the heck do we think that if a computer “wakes up”, it’s going to firstly do so in a way which is clearly recognisable by humans, and secondly, in a way which is congruent with our expectations of adult human-style interaction?
Quite frankly, I tend to think of AI enthusiasts in the same way I think of space colonisation enthusiasts, alt-right conservatives, and religious fundamentalists: romantics who have fallen for a fictional universe of their choice and who believe it can be made manifest in this one, despite the evidence otherwise.
[1] No, really. Most humans are looking at life in a rather unrealistic fashion. The ones who aren’t are clinically depressed (cause and effect, all in one…).
[2] First define intelligence. Now answer this question: given the criteria you’ve just thought of and no other contextual information, would a human infant count as intelligent?
applehead says
So-called “artificial intelligence” is all artifice and no intelligence whatsoever, and will always remain that way.