Not everything advanced in Computer Science is AI

IEEE Spectrum has an article, Stop Calling Everything AI, Machine-Learning Pioneer Says, in which Michael I. Jordan addresses the overuse of the term artificial intelligence in Computer Science.

Artificial-intelligence systems are nowhere near advanced enough to replace humans in many tasks involving reasoning, real-world knowledge, and social interaction. They are showing human-level competence in low-level pattern recognition skills, but at the cognitive level they are merely imitating human intelligence, not engaging deeply and creatively, says Michael I. Jordan, a leading researcher in AI and machine learning. Jordan is a professor in the department of electrical engineering and computer science, and the department of statistics, at the University of California, Berkeley.

He notes that the imitation of human thinking is not the sole goal of machine learning—the engineering field that underlies recent progress in AI—or even the best goal. Instead, machine learning can serve to augment human intelligence, via painstaking analysis of large data sets in much the way that a search engine augments human knowledge by organizing the Web. Machine learning also can provide new services to humans in domains such as health care, commerce, and transportation, by bringing together information found in multiple data sets, finding patterns, and proposing new courses of action.

I think this is an important point. Machine learning is an incredibly powerful tool, but it is all too often lumped together with artificial intelligence, rather than considered a tool set in its own right, one that can be used to create artificial intelligence.

There are many uses of machine learning – I have seen it used for fraud detection, document analysis, and even as a tool for speeding up builds and deploys for developers. In the latter case, machine learning was used to figure out which tests needed to run, based on a number of factors: what code was edited, the track record of the developer, and the complexity of the code. I have yet to come across artificial intelligence being used in a day-to-day setting.
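The test-selection idea can be sketched as a small predictive model: score each test by features of the change and run only the tests most likely to fail. Everything below is illustrative, not the actual system I saw; the feature set, the weights, and the threshold are assumptions made up for the sketch.

```python
import math

def failure_probability(files_changed: int, author_failure_rate: float,
                        complexity: float) -> float:
    """Toy logistic model: predicted chance that a test fails for this change.
    The weights are hand-picked for illustration, not learned from real data."""
    z = 0.8 * files_changed + 2.0 * author_failure_rate + 0.5 * complexity - 3.0
    return 1.0 / (1.0 + math.exp(-z))

def select_tests(suite: dict, threshold: float = 0.5) -> list:
    """Return the names of tests whose predicted failure probability
    exceeds the threshold; only these would be run in CI."""
    return [name for name, features in suite.items()
            if failure_probability(*features) >= threshold]

# Hypothetical suite: test name -> (files_changed, author_failure_rate, complexity)
suite = {
    "test_auth":    (1, 0.05, 1.0),
    "test_billing": (4, 0.30, 2.5),
    "test_search":  (0, 0.10, 0.5),
}
print(select_tests(suite))  # only the risky test is selected
```

In a real system the weights would be learned from historical build results rather than hard-coded, but the shape of the decision, features in, run/skip decision out, is the same.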

As Michael I. Jordan points out, there are serious considerations to keep in mind when dealing with machine learning, without trying to make it into something grander:

“While the science-fiction discussions about AI and super intelligence are fun, they are a distraction,” he says. “There’s not been enough focus on the real problem, which is building planetary-scale machine learning–based systems that actually work, deliver value to humans, and do not amplify inequities.”

The article links to an interesting piece by Michael I. Jordan that goes deeper into the subject of artificial intelligence still being a way off: Artificial Intelligence—The Revolution Hasn’t Happened Yet

This article also gets into the difference between machine learning and artificial intelligence:

Most of what is labeled AI today, particularly in the public sphere, is actually machine learning (ML), a term in use for the past several decades. ML is an algorithmic field that blends ideas from statistics, computer science and many other disciplines (see below) to design algorithms that process data, make predictions, and help make decisions. In terms of impact on the real world, ML is the real thing, and not just recently. Indeed, that ML would grow into massive industrial relevance was already clear in the early 1990s, and by the turn of the century forward-looking companies such as Amazon were already using ML throughout their business, solving mission-critical, back-end problems in fraud detection and supply-chain prediction, and building innovative consumer-facing services such as recommendation systems. As datasets and computing resources grew rapidly over the ensuing two decades, it became clear that ML would soon power not only Amazon but essentially any company in which decisions could be tied to large-scale data. New business models would emerge. The phrase ‘data science’ emerged to refer to this phenomenon, reflecting both the need of ML algorithms experts to partner with database and distributed-systems experts to build scalable, robust ML systems, as well as reflecting the larger social and environmental scope of the resulting systems.

This confluence of ideas and technology trends has been rebranded as ‘AI’ over the past few years. This rebranding deserves some scrutiny.

Historically, the phrase “artificial intelligence” was coined in the late 1950s to refer to the heady aspiration of realizing in software and hardware an entity possessing human-level intelligence. I will use the phrase “human-imitative AI” to refer to this aspiration, emphasizing the notion that the artificially-intelligent entity should seem to be one of us, if not physically then at least mentally (whatever that might mean). This was largely an academic enterprise. While related academic fields such as operations research, statistics, pattern recognition, information theory, and control theory already existed, and often took inspiration from human or animal behavior, these fields were arguably focused on low-level signals and decisions. The ability of, say, a squirrel to perceive the three-dimensional structure of the forest it lives in, and to leap among its branches, was inspirational to these fields. AI was meant to focus on something different: the high-level or cognitive capability of humans to reason and to think. Sixty years later, however, high-level reasoning and thought remain elusive. The developments now being called AI arose mostly in the engineering fields associated with low-level pattern recognition and movement control, as well as in the field of statistics, the discipline focused on finding patterns in data and on making well-founded predictions, tests of hypotheses, and decisions.


  1. robert79 says

    I teach a course in machine learning which we call “artificial intelligence”, so I guess I’m partly to blame here.

    Thing is, it sells. A lot of my students end up going on internships, or working, for companies that say “we want an AI to do …”, and often “…” doesn’t even involve any data – what the company really wants is someone to solve an operations research problem (often some form of optimization) for them. But since some companies (or at least the managers) have never heard of OR, and think AI is a rabbit you can pull out of a hat, they hire folks who say they can do AI, but can also do other things.