One of the biggest barriers to implementing useful artificial intelligence is the limitation imposed by our computer hardware: modern computer chips have their circuits arranged in a two-dimensional layout, yet they run programs meant to mimic our own three-dimensional neurological processes. The 2D setup was, and indeed still is, well suited to the linear processing that the majority of our computer programs require, but it runs AI-based programs far less efficiently. If AI is to grow as a valuable tool, a new form of computer hardware will be required to accommodate it.
Currently, AI programs like IBM’s Watson and Apple’s Siri don’t do their work on the customer’s computer or mobile phone; instead, they send each query off to central servers for the actual information processing. Mobile devices have a respectable amount of processing power for consumer-grade hardware, but they have nowhere near the capacity to crunch the level of data required to answer even the simplest of questions.
But what if computer processors were themselves built to mimic human neurons, to better suit our emerging AI? A chip concept first built in the late 1980s is being revived for exactly this purpose: ‘neuromorphics’, an architecture that mimics the operation of human neurons.
Current computer chips run on a constant stream of electrical pulses, dictated by the processor’s clock speed, and thus require a constant flow of electricity. Neuromorphic processors, on the other hand, operate on demand, firing "spikes" of electrical energy only when a piece of information needs processing.
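The event-driven behaviour described above is usually illustrated with a leaky integrate-and-fire (LIF) model, the standard textbook abstraction of a spiking neuron. The sketch below is illustrative only; the parameter values are made up and do not describe any particular chip:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# leaks toward rest each step and a "spike" (an event) is emitted only
# when input drives it past threshold. No input means no events, which
# is why spiking hardware can sit idle at near-zero power.

def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron spikes."""
    potential = 0.0
    spikes = []
    for t, current in enumerate(input_current):
        potential = potential * leak + current  # leak, then integrate input
        if potential >= threshold:              # threshold crossed:
            spikes.append(t)                    # emit a spike event
            potential = 0.0                     # reset after firing
    return spikes

# A burst of input eventually produces a spike; silence produces none.
print(simulate_lif([0.5, 0.5, 0.5, 0.0, 0.0, 0.0]))  # → [2]
print(simulate_lif([0.0, 0.0, 0.0]))                  # → []
```

A clocked processor, by contrast, burns switching energy on every cycle whether or not any useful information arrives.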
As an example, IBM’s TrueNorth, a custom-built neuromorphic CMOS chip, contains the equivalent of over 268 million synapses, giving it about five times the processing power of the company’s typical consumer-grade processors, yet it uses only 70 milliwatts of power: 1/10,000th of the power that an equivalent processor built on traditional architecture would require.
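As a quick sanity check, the two figures quoted above can be combined into a back-of-envelope estimate of what the equivalent conventional processor would draw (this uses only the numbers in the text, not any independent measurement):

```python
# Back-of-envelope check of the quoted TrueNorth power comparison.
truenorth_watts = 70e-3   # 70 milliwatts, as quoted for the chip
claimed_ratio = 10_000    # "1/10,000th of the power"

conventional_watts = truenorth_watts * claimed_ratio
print(conventional_watts)  # → 700.0
```

In other words, the claim amounts to roughly 700 watts for a traditional-architecture equivalent, versus a chip that could run on a hearing-aid battery.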
Neurologically based architecture is also simply more efficient at processing non-linear data. Biotech company Koniku is using actual biological neurons to build simple computer processors: instead of millions of switches, it has built a 64-neuron processor that has been used to control a chemical-sensing drone, and it estimates that a 500-neuron device could operate a driverless car.
The development of neuromorphic chips has been slow since their inception, however. Unlike traditional chips, neuromorphics had to be built with their software hard-wired into them, because designing software to run on them was so difficult. A Canadian startup called Applied Brain Research (ABR) is addressing this with a new compiler called Nengo, which is in turn based on the widely used Python programming language. A compiler is an intermediary program that takes the code written by the programmer and converts it into the machine language that the computer actually uses.
One of ABR’s projects is called Spaun, short for Semantic Pointer Architecture Unified Network: a program that simulates a human brain, albeit one limited to 2.5 million neurons. Spaun is designed to mimic basic human cognitive functions, such as copying drawings, image recognition, gambling, list memory, counting, question answering, rapid variable creation, and fluid reasoning. The program was recently installed on a neuromorphic computer that increased its processing speed 9,000-fold, effectively blurring the line between human and machine cognition.
“While Spaun shows us a way towards one day building fluidly intelligent reasoning systems, in the nearer term neuromorphics will enable many types of context aware AIs,” explains Peter Suma, co-CEO of ABR. In the case of Apple’s Siri, the app only activates when its user has a query, and thus winds up being of limited assistance to the customer.
But an AI native to a mobile device, with the capability of always being on, could become a bot-Friday: an ever-present personal assistant that could help with tasks its current counterpart is unable to address. Suma elaborates:
“Imagine a SIRI that listens and sees all of your conversations and interactions. You’ll be able to ask it for things like ‘Who did I have that conversation about doing the launch for our new product in Tokyo?’ or ‘What was that idea for my wife’s birthday gift that Melissa suggested?’” As invasive as this sounds on the surface, such an efficient AI would not require communication with an external server, meaning the user’s day-to-day information could be kept private on the device itself.