Today’s machine learning programs, more commonly referred to as artificial intelligence, have a serious handicap: they are extremely energy hungry. The servers and supercomputers that run these modern miracles often consume tens of millions of watts of power just to emulate capabilities that our brains perform using roughly one-millionth of that energy. These inefficiencies have spurred researchers to search for novel ways of computing, including electronic versions of the very neurons that make biological brains so powerful, a field of research called neuromorphic computing; to that end, researchers in Hong Kong are using a device called a memristor to give these neuron-like circuits a flexible, combined form of computer memory and processing.

First theorized in 1971 by circuit theorist Leon Chua, memristors are electronic components that remember the amount of electrical charge that has previously passed through them. This memory is also non-volatile, meaning the information is retained without the device needing to be continuously powered, unlike the random access memory (RAM) employed by the device you’re reading this article on.
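
For the technically curious, the idealized “linear drift” model published by HP Labs in 2008 captures this charge memory in a few lines of code. The sketch below is purely illustrative, with assumed parameter values rather than figures from any real device:

```python
# Minimal sketch of a memristor's charge memory, using the idealized
# linear-drift model (HP Labs, 2008). All parameter values here are
# illustrative assumptions, not measurements from an actual device.

R_ON, R_OFF = 100.0, 16_000.0  # ohms: fully doped vs. undoped resistance
Q_MAX = 1e-4                   # coulombs of charge to sweep the full range

def memristance(q: float) -> float:
    """Resistance as a function of the total charge q that has flowed through."""
    w = min(max(q / Q_MAX, 0.0), 1.0)    # internal state, clamped to [0, 1]
    return R_ON * w + R_OFF * (1.0 - w)  # blend of the two resistance extremes

# Drive current through the device, then "power it off": the state (and
# therefore the resistance) persists, which is what makes it non-volatile.
q = 0.0
for _ in range(50):
    q += 1e-6  # each step passes 1 microcoulomb of charge
print(f"resistance after writing: {memristance(q):.0f} ohms")  # 8050, down from 16000
```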

Beyond offering denser and more adaptable memory than traditional computers, memristors can also perform computational functions within the same unit, a further efficiency that eliminates the need to transfer information back and forth between a computer’s memory and its central processing unit (CPU), the so-called von Neumann bottleneck that constrains today’s devices.
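
To see why computing in memory works, consider a toy model of a memristor crossbar, the arrangement typically used for this trick: each device’s conductance stores one number, and applying voltages to the rows yields column currents that are, by Ohm’s and Kirchhoff’s laws, exactly a matrix-vector product. The values below are invented purely for illustration:

```python
import numpy as np

# Toy model of analog in-memory computing on a memristor crossbar.
# Each memristor's conductance G[i, j] stores one matrix weight; applying
# voltages V[i] to the rows yields column currents I[j] = sum_i V[i] * G[i, j]
# (Ohm's law per device, Kirchhoff's current law per column), so the
# multiply-accumulate happens where the data lives, with no CPU round trip.

G = np.array([[1e-4, 2e-4],
              [3e-4, 5e-5]])  # conductances in siemens (illustrative values)
V = np.array([0.2, 0.5])      # row voltages in volts

I = V @ G                     # column currents: the "computation" is physics
print(I)                      # -> [1.7e-04 6.5e-05] amps
```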

These are the capabilities that researchers like Li Can, an assistant professor in the University of Hong Kong’s Department of Electrical and Electronic Engineering, plan to harness to build neuron-like neuromorphic circuits. Such a device would enable a hardware-based AI that is capable of learning over the course of its operating life, something current machine learning models cannot do, yet biological brains are extremely adept at.

“We can learn from experience, unlike computers and powerful AI,” Li explained. “[A chatbot like] ChatGPT might tell you that it does not know about the latest news because of its knowledge cut-off at a certain time. Humans are also able to reason based on vague information, while computers require clear instructions.”

A hardware-based AI built on a physical three-dimensional structure would also be massively more efficient than our current machine learning algorithms: while modern AI mimics the structure of biological neural networks, it does so as a virtual construct run on a computer architecture that was never intended for such a task. For example, an average query put to ChatGPT can consume up to 10 watt-hours of energy; this energy, spent in the mere seconds it takes to generate a response, could power a human brain, which runs on roughly 20 watts, in its entirety for roughly half an hour.
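
The arithmetic behind that comparison is easy to check, taking the 10 watt-hour figure at face value and the standard estimate of roughly 20 watts for the brain’s power draw:

```python
# Back-of-the-envelope check of the query-versus-brain comparison above.

query_energy_wh = 10  # upper-end energy per ChatGPT query, in watt-hours
brain_power_w = 20    # approximate power consumption of a human brain

hours = query_energy_wh / brain_power_w
print(f"one query could power a brain for {hours:.1f} h ({hours * 60:.0f} min)")
# -> 0.5 h (30 min)
```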

And that example covers only ChatGPT’s everyday construction of sentences for its users; the training of the program’s underlying large language model is estimated to have consumed nearly 1.3 gigawatt-hours of electricity, enough to power 120 average US households for a year. By contrast, the memristor-based chips that Li and his team have developed, using existing manufacturing methods, are between 170 and 350 times more efficient than Nvidia’s Tesla V100, a graphics processing unit (GPU) adapted for running the computationally intensive workloads demanded by AI.
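
The household comparison checks out the same way, assuming the rough US average of about 10,700 kilowatt-hours of electricity consumed per household per year:

```python
# Sanity check of the training-energy comparison above.

training_energy_kwh = 1.3e6      # ~1.3 GWh expressed in kilowatt-hours
household_kwh_per_year = 10_700  # approximate average annual US household use

print(f"{training_energy_kwh / household_kwh_per_year:.0f} household-years")
# -> 121, in line with "120 average US households for a year"
```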

With a multi-branching structure and a flexibility that mimic biological neurons, a neuromorphic computer built from memristors wouldn’t have to expend the enormous amounts of energy that traditional computers require for everyday tasks we take for granted, such as holding a conversation or painting a picture, making such a device orders of magnitude more efficient than those in use today.

“Brain-inspired devices we are working on would result in a technology different from conventional computers,” Li stated. Because of its similarity to a biological brain, a neuromorphic computer would be quite different in its operation from a traditional computer, requiring something closer to what we would consider “training” than simply being fed a pre-written program to perform.

“Each chip is like a newborn baby with DNA that determines its traits. It would be a training model for a chip,” Li elaborated. “But how a baby develops as it grows remains to be seen, and the same goes for a chip.”

While Li’s description might evoke the image of what could amount to an electronic life form, he also points out more immediate uses for such a compact, efficient computer: today’s digital assistants could run directly on one’s smartphone instead of being facilitated remotely by a supercomputer, and adaptive medical monitoring devices could be worn by a patient as a smartwatch or carried as a subdermal implant.

“To make a wearable sensor to monitor diseases, for example, the device needs to be highly energy efficient without compromising its functionality—hopefully it could run years on a single charge,” Li said.
