Despite the disturbing warnings in last week’s Unknown Country Weekender about the potential dangers posed to mankind by Artificial Intelligence (AI), further news of our continued but highly questionable faith in this form of technology has emerged.

A recent article in the New York Times describes a particularly chilling form of AI: it appears that man, in his wisdom, has now devolved the responsibility for deciding whom to kill to so-called "intelligent" bombs. The article reveals that last year an Air Force B-1 bomber tested a new missile off the coast of Southern California, but this was a missile with a difference.

Pilots launched the missile, but shortly after its release it severed all contact with its human operators and navigated the remainder of its journey – and decided the outcome of its mission – without any further human intervention. It was left to the missile to determine which of three unmanned ships it should attack.

Weaponry is becoming increasingly high-tech, and robots are already commonplace in battlefield scenarios. These robotic drones are still guided remotely by human operators, however, and in theory they minimise loss of life for the controlling side, as their pilots are tucked away in safe locations, operating the drones via video screens.

But the future of warfare seems to be veering more and more towards AI solutions, and Britain, Israel and Norway are already beginning to utilise missiles and drones that can attack targets without direct human control. Some of these new "smart weapons" are taking the precision of attacks to new levels, effectively reducing the potential for collateral damage. Paul Scharre, a weapons specialist at the Center for a New American Security, believes that this type of technology is a positive move.

“Better, smarter weapons are good if they reduce civilian casualties or indiscriminate killing,” said Scharre.

The actions of these new AI "loose cannons" are more unpredictable, however. Britain’s new Brimstone missiles can distinguish between tanks, cars and buses and select their own targets, even communicating with other Brimstones to share targets like packs of predatory animals hunting in unison.

“An autonomous weapons arms race is already taking place,” said Steve Omohundro, a physicist and artificial intelligence specialist at Self-Aware Systems, a Palo Alto, California, research center. “They can respond faster, more efficiently and less predictably.”

The concept of autonomous weapons is not new; they were first developed in the United States in the 1980s, when an early version of the Tomahawk cruise missile was designed to track down Soviet ships without human control. This weapon was withdrawn after a nuclear arms treaty with Russia, but the technology lives on and is becoming increasingly complex.
The issue is raising so many concerns that a multi-national convention is meeting in Geneva on 13–14 November to discuss how the implementation of these weapons should be managed.

“Our concern is with how the targets are determined, and more importantly who determines them,” said Peter Asaro, a co-founder and vice chairman of the International Committee on Robot Arms Control, a group of scientists that advocates restrictions on the use of military robots. “Are these human-designated targets? Or are these systems automatically deciding what is a target?”

The Pentagon has issued a directive requiring the highest level of authorization for the development of autonomous weapons, but this area of technology is advancing at such an alarming rate that the directive has almost been rendered obsolete already. Christof Heyns, the United Nations special rapporteur on extrajudicial, summary or arbitrary executions, has called for the development of these weapons to be suspended altogether.

In the directive, the Pentagon makes a distinction between fully autonomous weapons, which find their own targets without any intervention, and semi-autonomous weapons, whose targets are selected by a human operator. The ultimate decision over life and death should remain under human control, the directive states, and future weaponry must be “designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

The latest anti-ship missile tested over Southern California appears to fall into a grey area, with the Pentagon arguing that, because there is some human intervention, it is only semi-autonomous, but detailed information regarding its "decision-making" processes is classified.

“It will be operating autonomously when it searches for the enemy fleet,” said Mark Gubrud, a physicist and a member of the International Committee for Robot Arms Control, and an early critic of so-called smart weapons. “This is pretty sophisticated stuff that I would call artificial intelligence outside human control.”

The Center for a New American Security led the working group that wrote the Pentagon directive, and Scharre said, “It’s valid to ask if this crosses the line.”

Critics worry that as these weapons become more intelligent, they will also become more difficult to manage or defend against. Are these new automated warheads going to ensure greater accuracy and save lives, or will they pave the way for terrifying scenarios in which wars are started not by governments, but by rogue missiles?
 
