Researchers at Facebook AI Research (FAIR) have had to shut down two artificial intelligence programs because the programs began to develop their own internal language that the researchers couldn’t decipher. On the surface, the invented language looks like gibberish, but when the researchers analyzed it, they realized that the word patterns represented a more efficient lingo that only the two AI "agents" could understand.

The two agents in question were chatbots designed to haggle with each other over a deal, part of an effort to develop more sophisticated negotiation software. Pitted against one another in a self-play training setup, the programs weren’t initially required to negotiate in ordinary English, so their speech patterns began to diverge. They came up with a new lingo that substituted single words for more complex concepts, much the way human cliques develop their own parlance to simplify communication. A typical exchange ran like this:

Bob: "i can can i i everything else . . . . . . . . . . . . . ."

Alice: "balls have zero to me to me to me to me to me to me to me to me to"

Bob: "you i everything else . . . . . . . . . . . . . . . ."

Alice: "balls have a ball to me to me to me to me to me to me to me to me"

This exchange might sound totes cray cray to us, but to the two agents it represented a coherent conversation. And this is not the first time researchers have seen machines drift into a language of their own, often in ways that echo familiar human linguistic habits.
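The mechanics of that drift are easy to sketch: the training signal scores only the outcome of the deal, never the wording of the messages, so any string of tokens the other side can decode works just as well as English. Here is a minimal Python sketch of that reward structure (purely illustrative, with made-up item names and values, not FAIR's actual code):

```python
import random

# Toy sketch of the negotiation setting described above -- an illustration of
# the general idea, NOT FAIR's actual system. Item names and values are invented.

ITEMS = {"book": 1, "hat": 2, "ball": 3}   # pool: how many of each item exist

def private_values():
    """Each agent secretly values the items differently."""
    return {item: random.randint(0, 5) for item in ITEMS}

def encode_claim(claim):
    """Turn a claim into a message by repeating each item's token.
    Nothing ties this encoding to grammatical English -- only the decoded
    split is ever scored."""
    return " ".join(tok for item, n in claim.items() for tok in [item] * n)

def reward(claim, values):
    """The training signal scores the value of the items an agent keeps,
    not the readability of its messages."""
    return sum(values[item] * n for item, n in claim.items())

agent_values = private_values()
claim = {"book": 0, "hat": 1, "ball": 3}   # e.g. "I take one hat and all the balls"
print("message:", encode_claim(claim))      # -> "hat ball ball ball"
print("reward if accepted:", reward(claim, agent_values))
```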

"Agents will drift off understandable language and invent codewords for themselves," explains Georgia Institute of Technology researcher Dhruv Batra. "Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands."

The researchers’ decision to shut down the conversation wasn’t driven by fears that the two programs were developing their own culture, but by the requirement that, as a product, the software be understandable to humans. "Our interest was having bots who could talk to people," explains FAIR researcher Mike Lewis. But Facebook’s researchers also point out that we would have no way of understanding a language that a computer might develop on its own. "It’s important to remember, there aren’t bilingual speakers of AI and human languages," Batra points out.
