The 2010s were a decade that saw huge developments in the field of artificial intelligence, with machines being built with the capacity to teach themselves how to play (and win at) strategy-based board games, take a stroll through the woods, recognize human faces, make medical diagnoses, and even drive a car. But some experts are warning that the field of AI development may be on the verge of stalling, the victim of an over-hyped industry that has promised more than it might be able to deliver in the short term.
Through the decades, the development of artificial intelligence has seen peaks and troughs—within the field, periods of rapid advancement are referred to as “AI summers”, while slowdowns are called “AI winters”—and the 2010s would appear to have borne some of the biggest advancements in machine intelligence history. But the industry appears to be on the eve of a slowdown in development, at least compared to the leaps and bounds made over the last few years.
Sometimes referred to as one of the “godfathers of AI”, computer scientist Yoshua Bengio says that AI’s capabilities were somewhat overhyped by the companies that developed them in order to promote their products, and that this hype might be starting to cool off.
“By the end of the decade there was a growing realization that current techniques can only carry us so far,” observes Gary Marcus, an AI researcher at New York University.
Leaders in the field are hesitant to label the pending slowdown as a full-on AI winter, but prefer to refer to it as an “AI autumn”; given the billions of dollars being put into research, they expect the pace of the development of machine intelligence to plateau, rather than outright stall.
But what is behind the cooling of the season for AI? The decade’s lineup of intelligent machines is commonly referred to as “AI”—artificial intelligence—but AI itself is still stereotypically machine-like: it can do what it was designed to do really well, but that’s all that program (or device) can do. The holy grail of AI is what’s known as “AGI”, or “artificial general intelligence”, and as the name implies, it would be an intelligence that can grow beyond the narrow specialization that it was designed for, in a proper mimicry, if not an outright emulation, of human-like intelligence.
For example, take the capabilities of DeepMind’s AlphaGo board game-playing AI program: in 2016, AlphaGo bested human Go champion Lee Sedol at his own game, winning 4 out of 5 rounds. After demonstrating the capability to creatively design its own strategies to beat its human opponent at the ancient Chinese game, AlphaGo’s successor, AlphaZero, went on to teach itself how to play not only Go, but also chess and shogi.
But AlphaGo and AlphaZero would be useless if tasked with other comparatively simple tasks like taking a stroll through the woods, as Boston Dynamics’ Atlas has become adept at; conversely, as good at basic parkour as Atlas is, the robot wouldn’t be even remotely capable of playing any of the games that its more strategic counterparts—programs that require a supercomputer to run—now routinely master. Each system was designed to do one thing well, but only that one thing.
AGI, on the other hand, would conceivably be able to learn how to walk down the street to a nearby chess tournament, participate in a game, read the facial expressions of its opponent, and carry on a light conversation. While each of these capabilities is within the realm of today’s AI, each task is typically performed by a single dedicated program, one that almost always requires significant computing resources to accomplish its task. And it is that steep computational requirement that is preventing AI from achieving the kind of general intelligence it needs if it is to make any meaningful progress.
“The field has come a very long way in the past decade, but we are very much aware that we still have far to go in scientific and technological advances to make machines truly intelligent,” according to Facebook AI Research group researcher Edward Grefenstette.
“One of the biggest challenges is to develop methods that are much more efficient in terms of the data and compute power required to learn to solve a problem well,” Grefenstette continues. “In the past decade, we’ve seen impressive advances made by increasing the scale of data and computation available, but that’s not appropriate or scalable for every problem.”
“If we want to scale to more complex behaviour, we need to do better with less data, and we need to generalise more.”
Perhaps, in addition to the efficiencies proposed by Grefenstette, an entirely new approach to computer architecture, like quantum computers or neuromorphic circuits, could provide the innovative boost the AI field needs to draw itself out of the impending stall threatening the pace of progress?
But what can we expect to see from AI development if this AI autumn does come to pass? “In the next decade, I hope we’ll see a more measured, realistic view of AI’s capability, rather than the hype we’ve seen so far,” posits Catherine Breslin, an ex-Amazon AI researcher. Indeed, a lot of the hype seen in the 2010s could be avoided if individual learning programs were to be taken out from under the umbrella of the term “AI”: currently used by companies as a catch-all buzzword for marketing purposes, “artificial intelligence” is a term that is somewhat misleading when used to describe current “intelligent” algorithms.
“The manifold of things which were lumped into the term ‘AI’ will be recognised and discussed separately,” forecasts Samim Winiger, a former AI researcher at Google in Berlin.
“What we called ‘AI’ or ‘machine learning’ during the past 10-20 years will be seen as just yet another form of ‘computation’.”