One of the crowning achievements of AI research, the defeat of the world’s top professional Go player by a machine learning program, has been undone by an amateur player using a deceptively simple strategy: deceptive to the AI, that is, but one that would be obvious to virtually any human opponent it might be used against. Although this might seem important only to the world of strategic board games, the kind of weakness the researchers exploited to defeat this powerful program is common across the majority of AI systems that have recently become available to the public, leaving them open to exploitation by unscrupulous users who know how to manipulate the AI to their own ends.

During the Google DeepMind Challenge Match held in March 2016, Google DeepMind’s AlphaGo machine learning program was pitted against human Go champion Lee Sedol, defeating the South Korean master in four of the five games. Lee retired as a professional Go player in 2019, stating that artificial intelligence programs such as AlphaGo and its more sophisticated successors were “an entity that cannot be defeated,” one that would prevent Lee, who earned the nickname “The Strong Stone” in professional Go circles, from ever becoming the world’s best.

The triumph of AlphaGo over a human in the game of Go was considered groundbreaking in the field of AI: earlier human-versus-machine contests typically involved chess, where computers like IBM’s Deep Blue played by evaluating the state of the pieces on the board against vast libraries of possible moves and then executing the line most likely to win. Go, on the other hand, is a strategic game with vastly more possible positions, one that requires the player to think creatively when planning their moves; simply applying statistically favored lines of play is not as effective for a computer there as it is in a more tactically focused game such as chess.
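
For a rough sense of scale, here is a small back-of-the-envelope sketch (an illustration, not taken from the article) comparing how many positions a naive full-width search would have to examine in chess versus Go, using the commonly cited approximate branching factors of about 35 legal moves per turn for chess and about 250 for Go:

```python
# Back-of-the-envelope comparison of game-tree sizes. The branching factors
# (~35 for chess, ~250 for Go) are commonly cited approximations, used here
# only to illustrate why exhaustive search is far less tractable in Go.

def positions_at_depth(branching_factor: int, depth: int) -> int:
    """Leaf positions a naive full-width search would visit at a given depth."""
    return branching_factor ** depth

for name, branching in [("chess", 35), ("Go", 250)]:
    for depth in (4, 8):
        count = positions_at_depth(branching, depth)
        print(f"{name:5s} at depth {depth}: ~{count:.2e} positions")
```

Even at a modest lookahead of eight moves, the Go tree in this rough sketch is millions of times larger than the chess tree, which is why Go programs such as AlphaGo rely on learned pattern recognition rather than exhaustive lookup.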

This upset for Team Human spurred researchers in the AI community to analyze what made these Go-playing programs tick, including a team from the University of California, Berkeley and the Massachusetts Institute of Technology who tried tackling the issue from a different angle: how would these superhuman game-playing programs fare against an amateur player?

It turns out that even the most advanced AI was no match for machine learning researcher Kellin Pelrine; although he is a PhD candidate at Canada’s McGill University and an accomplished Go player, Pelrine does not play the game at a professional level, and yet he was able to consistently defeat KataGo, an open-source Go engine built on the techniques behind DeepMind’s AlphaGo Zero, winning 14 of 15 games with no direct assistance from a supporting program during play.

To analyze the Go-playing programs, the team used an adversarial program of its own to probe KataGo for weaknesses. It found one, a kind of blind spot common to many AI platforms, including OpenAI’s ChatGPT, and the strategy it suggested for exploiting that blind spot was the one Pelrine put to use against his artificial opponent.
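
The team’s actual tooling isn’t described here, but the general shape of that kind of automated probing can be sketched as a loop that pits candidate strategies against a frozen opponent many times and keeps whichever one wins most often. Everything below (the stub “victim”, the numeric parameters) is hypothetical illustration, not the researchers’ code:

```python
import random

# Hypothetical sketch of black-box exploit search: pit candidate strategies
# against a frozen "victim" engine many times and keep the best performer.
# The victim and candidates here are stubs, not KataGo or the team's tooling.

def play_game(candidate_strength: float) -> bool:
    """Stub for one game against the frozen victim; True means the candidate won."""
    return random.random() < candidate_strength

def find_exploit(num_candidates: int = 20, games_per_candidate: int = 200):
    best_candidate, best_rate = None, -1.0
    for _ in range(num_candidates):
        candidate_strength = random.random()  # stand-in for a concrete strategy
        wins = sum(play_game(candidate_strength) for _ in range(games_per_candidate))
        win_rate = wins / games_per_candidate
        if win_rate > best_rate:
            best_candidate, best_rate = candidate_strength, win_rate
    return best_candidate, best_rate

if __name__ == "__main__":
    strategy, rate = find_exploit()
    print(f"best candidate won {rate:.0%} of its stub games")
```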

Pelrine’s approach was to slowly string together a large loop of stones around a portion of the board to encircle one of the AI’s groups, while simultaneously making other moves closer to the edges of the board to distract the AI from what he was doing, a maneuver the team called the “double-sandwich technique”; once a group is completely surrounded, it is captured and removed from the board. Throughout the maneuver the AI remained unaware of what its human opponent was up to, even in the final moves just before Pelrine closed the loop.

The vulnerability that Pelrine’s team exposed is a fundamental flaw in many deep-learning AI systems: the program can only process situations resembling those it was trained on, and is unable to deal with general concepts in the way a human can. Although KataGo is masterful at executing complex strategies like the ones it was trained on, it has no real understanding of the concepts of “surround” or “group”, ideas so basic that even the most amateur player would recognize them and move to counter an encirclement.
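
To make the gap concrete, those missing notions can be written down as explicit rules in a few lines. The sketch below (an illustration, not KataGo’s code) flood-fills a connected group of stones and counts its liberties, which is all it takes to recognize that a fully encircled group is captured:

```python
# Minimal illustration (not KataGo's code) of the explicit "group" and
# "surrounded" concepts: a group is the flood-fill of connected same-colored
# stones, and it is captured when it has no liberties (adjacent empty points).

def group_and_liberties(board, row, col):
    """Return the connected group containing (row, col) and its liberty count."""
    color = board[row][col]                      # 'B' or 'W'
    size = len(board)
    group, liberties, stack = set(), set(), [(row, col)]
    while stack:
        r, c = stack.pop()
        if (r, c) in group:
            continue
        group.add((r, c))
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < size and 0 <= nc < size:
                if board[nr][nc] == ".":
                    liberties.add((nr, nc))      # empty neighbor = liberty
                elif board[nr][nc] == color:
                    stack.append((nr, nc))       # same color = same group
    return group, len(liberties)

# Toy 5x5 position: the two white stones in the middle are fully encircled.
board = [list(row) for row in [
    ".BB..",
    "BWWB.",
    ".BB..",
    ".....",
    ".....",
]]
stones, libs = group_and_liberties(board, 1, 1)
print(f"group of {len(stones)} stones with {libs} liberties:",
      "captured" if libs == 0 else "still alive")
```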

“As a human it would be quite easy to spot,” Pelrine remarked of his extremely simple strategy.

“It shows once again we’ve been far too hasty to ascribe superhuman levels of intelligence to machines,” according to study co-author Stuart Russell, a computer science professor at the University of California, Berkeley.

This fundamental flaw, an inability to understand what its own output is actually about, is common across the majority of advanced AI systems. For instance, OpenAI’s much-vaunted ChatGPT chatbot doesn’t know what any of the individual words it uses to produce bodies of text actually means: it was never designed to consult a dictionary, nor could it conceptualize what the words mean if it did, since that kind of understanding was never built into it.

This means that despite how brilliant these programs appear to be in the fields they were designed to serve, they all have vulnerabilities that can be exploited by human users who know the programs’ blind spots and other weaknesses. Perhaps just as bad, the AI is also prone to producing incorrect output on its own, as when ChatGPT generates misinformation or argues over falsehoods, behavior that runs counter to the very service the chatbot is intended to provide.

Whether through deliberate exploitation by malevolent actors or through mistakes made of its own accord, generative AI can produce extremely convincing misinformation or inadvertently give people bad advice when they search for medical information. That makes it vital for computer researchers to find ways to uncover what is going on under the hood of generative AI, and for legislators to enact laws to regulate this rapidly growing technology.
