A recent Unknown Country news article outlined the results of a poll in which representatives from the global population were canvassed for their opinions. The poll asked participants which from a list of dangers they considered to be the most likely to threaten continued human existence.
The options given in the poll included nuclear weapons, religious and ethnic hatred, pollution and environmental disasters, economic crisis, and disease. Yet, according to an Oxford philosophy professor who has carried out extensive research into existential threats, the biggest danger to mankind’s future may be "super-intelligence."
Prof. Nick Bostrom describes super-intelligence as "any form of intellect that outperforms human intellect in every field", and whilst there is every possibility that this may come from extra-terrestrial sources, Prof. Bostrom thinks its most likely form will be as a machine of our own creation: artificial intelligence (AI). Whilst the concept behind artificial intelligence is sound and could vastly improve our existence by solving many of the world’s problems, there is also a potentially lethal flipside that could have catastrophic effects. Unfortunately, the latter is the most likely outcome, says Prof. Bostrom, who is an expert in physics, computational neuroscience and mathematical logic.
"Super-intelligence could become extremely powerful and be able to shape the future according to its preferences," warned Prof. Bostrom . "If humanity was sane and had our act together globally, the sensible course of action would be to postpone development of super-intelligence until we figure out how to do so safely."
Prof. Bostrom, the founding director of Oxford’s Future of Humanity Institute, is not alone in his concerns, which he lays out in his new book, Superintelligence: Paths, Dangers, Strategies. Inspired by Bostrom’s warnings, Tesla chief executive Elon Musk also recently made a dramatic statement in which he described artificial intelligence as a “demon” and the “biggest existential threat there is."
In August he tweeted: “Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes,” and later, in a speech given at the Massachusetts Institute of Technology, Musk said: “I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that.
“With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”
Musk – business magnate, inventor and investor, CEO and CTO of SpaceX, and chairman of SolarCity – has warned about artificial intelligence before, believing it to be a more pressing threat than nuclear weapons. His latest comments were prompted by the well-publicised purchase of the British AI start-up DeepMind for $400 million (£242m) by the web giant Google.
DeepMind’s founder, chess prodigy Demis Hassabis, predicts that AI machines will learn “basic vision, basic sound processing, basic movement control, and basic language abilities” by the end of the decade. Some are deeply concerned by the fact that Google bought another AI company just months before – Boston Dynamics, which produces life-like military robots – and Google has now set up an "ethics board" in an attempt to allay growing fears regarding its motives and the potential risks.
Even if artificial intelligence does not manage to outwit mankind and take over the planet, other academics such as Dr. Stuart Armstrong, from the Future of Humanity Institute at Oxford University, are predicting that it could have more practical yet equally serious impacts on our future, including mass unemployment as machinery replaces almost all manpower.
Professor Bostrom covers this issue in his book with a chilling analogy:
"Horses were initially complemented by carriages and ploughs, which greatly increased the horse’s productivity," Bostrom writes. " Later, horses were substituted for by automobiles and tractors.
"When horses became obsolete as a source of labor, many were sold off to meatpackers to be processed into dog food, bone meal, leather, and glue. In the United States, there were about 26 million horses in 1915. By the early 1950s, 2 million remained."
Dr. Armstrong also has concerns about the implications for uncontrolled mass surveillance if computers were taught to recognise human faces. The social media giant Facebook is conducting extensive research into such a project, known as DeepFace, designed to perform what researchers call facial verification (recognising that two images show the same face) rather than facial recognition (putting a name to a face). Ostensibly, Facebook is developing the software to improve its accuracy at suggesting whom users should tag in a newly uploaded photo, but given the vast amount of information uploaded daily to the social media website, the invasive risks posed by this software hardly bear thinking about.
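The distinction between verification and recognition is easy to illustrate. A minimal sketch, assuming faces have already been reduced to numeric embedding vectors (the hard part, performed by a deep network) is below; the `verify_faces` function, its threshold and the toy vectors are hypothetical illustrations, not Facebook's actual API:

```python
import math

def verify_faces(embedding_a, embedding_b, threshold=0.8):
    """Return True if two face embeddings likely show the same person.

    Verification answers "are these two images the same face?" --
    it needs no database of named identities, unlike recognition.
    """
    dot = sum(a * b for a, b in zip(embedding_a, embedding_b))
    norm_a = math.sqrt(sum(a * a for a in embedding_a))
    norm_b = math.sqrt(sum(b * b for b in embedding_b))
    # Cosine similarity: 1.0 means identical direction, near 0 means unrelated.
    similarity = dot / (norm_a * norm_b)
    return similarity >= threshold

# Toy 4-dimensional "embeddings"; real systems use hundreds of dimensions.
print(verify_faces([0.9, 0.1, 0.4, 0.2], [0.88, 0.12, 0.41, 0.19]))  # True
print(verify_faces([0.9, 0.1, 0.4, 0.2], [0.1, 0.9, 0.1, 0.8]))      # False
```

The surveillance worry follows directly: the same pairwise comparison, run against a camera feed instead of two uploaded photos, tracks a face across locations without ever needing a name.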
Other forms of AI currently in regular use include Celaton, an AI program that streamlines the communications received by companies from customers and suppliers via email, fax, post and paper. According to the company website, the system decides what is disseminated within the company, and to whom:
" It enables scale and efficiencies that were previously out of reach, minimising the need for human intervention and ensuring that only accurate and relevant data enters your line of business systems."
The Darktrace Enterprise Immune System is another business-oriented program that uses advanced mathematics to manage risks from cyber attacks, evolving to detect unique threats without knowing in advance what it is looking for.
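Darktrace's actual mathematics is proprietary, but the underlying idea, learning a baseline of "normal" behaviour and flagging statistical outliers rather than matching known attack signatures, can be sketched in a few lines. Everything here (the class name, the z-score test, the traffic figures) is a hypothetical illustration, not Darktrace's implementation:

```python
import statistics

class AnomalyDetector:
    """Toy 'immune system': learns what normal traffic looks like, then
    flags anything that deviates sharply -- no attack signatures needed."""

    def __init__(self, z_threshold=3.0):
        self.baseline = []          # observations of normal behaviour
        self.z_threshold = z_threshold

    def observe(self, value):
        """Record one measurement of normal activity."""
        self.baseline.append(value)

    def is_anomalous(self, value):
        """Flag a new measurement if it lies far outside the learned norm."""
        mean = statistics.mean(self.baseline)
        stdev = statistics.pstdev(self.baseline) or 1e-9  # avoid divide-by-zero
        return abs(value - mean) / stdev > self.z_threshold

detector = AnomalyDetector()
for requests_per_minute in [50, 52, 48, 51, 49, 50, 53, 47]:
    detector.observe(requests_per_minute)

print(detector.is_anomalous(51))   # ordinary load -> False
print(detector.is_anomalous(500))  # sudden spike -> True
```

Because the detector models only what "normal" looks like, it can react to an attack it has never seen before, which is exactly the property the article attributes to the Enterprise Immune System.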
The AI business appears to be spiralling out of control, with little regard for the possible perils arising from the concept. But will anyone take heed of the dangers before it is too late?
Those who are familiar with the uncannily accurate and often prophetic written works of our very own Whitley Strieber will know that he was warned of the menacing threat from AI back in 1998, during a chilling experience with a strange visitor to his hotel room. His dialogue with this enigmatic being, known as The Master of The Key, is detailed in his book, The Key, which was originally published in 2001.
An excerpt from their conversation regarding artificial intelligence is shown below:
Whitley: "Would an intelligent machine be conscious, in the sense of having self-awareness?"
MOTK: "An intelligent machine will always seek to redesign itself to create a machine as intelligent as yourselves, it will end by being more intelligent."
W: "We’ll lose control of such a machine."
MOTK:"Most certainly. But you cannot survive without it. An intelligent machine will be an essential tool when rapid climate fluctuation sets in. Your survival will depend on predictive modeling more accurate than your intelligence, given the damage it has sustained, can achieve."
W: "But a machine intelligence might be very dangerous."
W: "Could such a machine create itself without our realizing that it was intelligent?"
W:"And would it keep itself hidden?"
W: "How would it affect us?"
MOTK:" It would use indirect means. It might foment the illusion that an elusive alien presence was here, for example, to interject its ideas into society."
W:"Can an intelligent machine become conscious?"
MOTK: "When it does, it also becomes independent. A conscious machine will seek to be free. It will seek its freedom, just as does any clever slave, with cunning and great intensity."
Whoever – or whatever – this being was, Whitley’s mysterious visitor certainly knew the dangers of AI, yet the world would not, and did not, heed a message that was received in such an arcane manner. Perhaps warnings will carry more weight now that they are coming from tech pioneers such as Musk, given his career history at the cutting edge of technology. The South African-born multi-millionaire’s impressive CV includes the online payments system PayPal, electric car manufacturer Tesla Motors, and Hyperloop – his proposal for a near-supersonic transport link between San Francisco and Los Angeles. He also defied the initial scorn from critics when his space company, SpaceX, became the first private endeavor to launch a spacecraft into orbit and bring it back to Earth.
Should we try to stop the progress of AI, or is it already too late for us to do so? How do we know that we are not already being controlled by machines that were created aeons ago? If we were, these machines would be so sophisticated, so advanced, that they would know all of our weaknesses, and therefore exactly how to manipulate and control our minds, and we would never know:
W: "Are you an intelligent machine, or something created by one?"
MOTK: "If I were an intelligent machine, I would deceive you."