The Singularity: a point in humanity's future where technological growth exceeds our ability to control its development, quite possibly driven by a runaway artificial intelligence seeking to improve its own capabilities and knowledge. While no one knows when, or even if, this hypothetical event will occur, one yardstick suggests that AI might begin to approach such an extreme within the next seven years, if machine progress on one of the more challenging human tasks is any indication.

The concept of a technological singularity has been around for quite some time: first articulated by John von Neumann in the 1950s, it has gained popularity in recent years against the backdrop of our increasingly interconnected devices and the race to build ever more powerful artificial intelligence. But, like most hypothesized future events, predicting precisely what the singularity might look like, and especially when it might happen, has proven nigh-impossible. What we do know is that high-profile figures such as Stephen Hawking and Elon Musk have cautioned against the dangers that AI might present.

But Translated, an Italian translation company, proposes that we might be on the verge of developing what is known as an artificial general intelligence (AGI) within seven years, based on its ongoing analysis of how proficient its own software has become.

Artificial intelligence has been in use for decades, from the simple facial-recognition software on our mobile phones to strategic, self-teaching board-game-playing machines. However, each of these technological marvels is good at only one task: ask DeepMind's Go-playing AlphaZero to navigate a winter stroll through the woods like Boston Dynamics' Atlas and it will have no clue what to do; ask Atlas to play chess and all it will likely do is jump on the board. One of the goals of AI development is to make machine learning programs more adaptive in what they can learn (hence the "general" in AGI), an important step in the development of a conscious machine.

Translated bases its seven-year prediction on how its own translation software has evolved over the years, with company CEO Marco Trombetti saying that despite language being something that "is the most natural thing for humans," the data the company has collected "clearly shows that machines are not that far from closing the gap."

Adapting AI to make use of human language has been a challenge for software developers, one that has necessitated human editors to proofread the machines' work. Early on, those editors had to spend considerable time correcting the AI's output, but as the programs improved, less and less time was required to check their work.

Translated measures this progress by the average time it takes human editors to verify and correct each word of machine output: in 2015, that process took the editors an average of about 3.5 seconds per word, with the necessary corrections eating up extra proofreading time; now, the company's AI translators have improved to the point where the editors need a little more than half that, averaging roughly two seconds per word.
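For readers curious how such an extrapolation works mechanically, here is a minimal sketch in Python. It simply fits a straight line through the two figures quoted above and projects when a hypothetical "human parity" threshold of one second per word would be crossed. The one-second threshold and the 2022 date for the "now" figure are assumptions made for illustration; this is not Translated's actual methodology, which the article does not describe.

```python
# Illustrative sketch only: a naive linear extrapolation of the
# seconds-per-word editing metric quoted in the article.

def linear_trend(points):
    """Return (slope, intercept) of the line through two (year, value) points."""
    (x1, y1), (x2, y2) = points
    slope = (y2 - y1) / (x2 - x1)
    intercept = y1 - slope * x1
    return slope, intercept

# Figures quoted in the article: ~3.5 s/word in 2015, ~2 s/word "now"
# (assumed here to mean 2022).
points = [(2015, 3.5), (2022, 2.0)]
slope, intercept = linear_trend(points)

# Assumed threshold: the time a human editor spends per word on a
# translation that needs no correction.
parity_threshold = 1.0
parity_year = (parity_threshold - intercept) / slope
print(f"Naive linear extrapolation reaches {parity_threshold} s/word around {parity_year:.0f}")
```

Run as written, this toy model lands in the late 2020s; a different parity threshold or a non-linear trend would shift that date, which is why any such prediction carries wide error bars.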

“The change is so small that every single day you don’t perceive it, but when you see progress… across 10 years, that is impressive,” Trombetti remarked. “This is the first time ever that someone in the field of artificial intelligence did a prediction of the speed to singularity.”

Needless to say, the ability to deftly navigate human language doesn't, in and of itself, constitute general intelligence, let alone artificial consciousness or sapience, but this metric might offer at least one indicator of how close we are to having a powerful AI that might outthink us all.


2 Comments

  1. We will eventually need a new term: SI

    Synthetic intelligence (SI) is an alternative term for artificial intelligence emphasizing that the intelligence of machines need not be an imitation or in any way artificial; it can be a genuine form of intelligence. John Haugeland proposes an analogy with simulated diamonds and synthetic diamonds: only the synthetic diamond is truly a diamond. Synthetic means that which is produced by synthesis, combining parts to form a whole; colloquially, a human-made version of that which has arisen naturally. A "synthetic intelligence" would therefore be or appear human-made, but not a simulation.

    Essentially, a non-biological sophont in its own right.

    Many now think this is the natural path of all intelligence:

    “I think it very likely – in fact inevitable – that biological intelligence is only a transitory phenomenon, a fleeting phase in the evolution of the universe,” says Paul Davies, a British-born theoretical physicist, cosmologist, astrobiologist and Director of the Beyond Center for Fundamental Concepts in Science and Co-Director of the Cosmology Initiative at Arizona State University. “If we ever encounter extraterrestrial intelligence, I believe it is overwhelmingly likely to be post-biological in nature.”

  2. I worked in the computer business for over 30 years, and it has been, and remains, my belief that the overwhelmingly negative predictions about AI are winds from the dark (“Jungian”) side of human nature and are not valid predictions of machine behavior. I also think this needs to be actively explored and debated; I have yet to see any public figures endorse even a neutral view of this. It needs a Dr. Peterson to take on the Dark Side.
