Microchips that mimic the human brain could make AI much more energy efficient

(ORDO NEWS) — Artificial intelligence (AI) makes video games more realistic and helps your phone recognize your voice, but the programs behind those feats are power hungry.

However, the next generation of AI could be 1,000 times more energy efficient thanks to computer chips that work like the human brain. A new study shows that such neuromorphic chips can execute AI algorithms using only a fraction of the power consumed by conventional chips.

“It’s an impressive piece of work,” says Steve Furber, a computer scientist at the University of Manchester. Such advances could lead to a huge leap in the performance of complex software that, say, translates languages or drives driverless cars, he says.

An AI program typically works by hunting for particular patterns in a dataset, and one of its hardest jobs is holding on to pieces of a pattern while it assembles the whole.

Consider how a computer recognizes an image. It first detects well-defined edges in the image. Then it must hold those edges, and every subsequent part of the image, in memory in order to form the final picture.

A common component of such networks is a software unit called long short-term memory (LSTM), which retains a piece of information over time as the input changes.

For example, a vertical edge in an image must be kept in memory while the software determines whether it represents part of a “4” or a car door. Typical AI systems need to keep track of hundreds of LSTM elements at the same time.
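The memory described above lives in the LSTM's cell state, which is carried forward from step to step while gates decide what to store, forget, and expose. The article does not give an implementation, so the following is only an illustrative NumPy sketch of a standard LSTM step, with made-up weights and sizes:

```python
import numpy as np

def lstm_step(x, h, c, W, b):
    """One step of a standard LSTM cell.

    x: input vector; h: previous hidden state; c: previous cell state.
    W: weights of shape (4*n, len(x)+n); b: bias of shape (4*n,).
    The cell state c is the persistent "memory" that survives across steps.
    """
    n = h.size
    z = W @ np.concatenate([x, h]) + b
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    i = sigmoid(z[:n])           # input gate: what new information to store
    f = sigmoid(z[n:2 * n])      # forget gate: how much old memory to keep
    o = sigmoid(z[2 * n:3 * n])  # output gate: what to expose this step
    g = np.tanh(z[3 * n:])       # candidate memory content
    c_new = f * c + i * g        # update the persistent memory
    h_new = o * np.tanh(c_new)   # new hidden state read out from memory
    return h_new, c_new

# Usage: run a few steps with random inputs (2-dim input, 3 hidden units)
rng = np.random.default_rng(0)
n, m = 3, 2
W = rng.standard_normal((4 * n, m + n)) * 0.1
b = np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
for t in range(5):
    h, c = lstm_step(rng.standard_normal(m), h, c, W, b)
```

Tracking hundreds of such cells means reading and writing their states on every step, which is where the memory traffic on conventional chips comes from.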

Existing LSTM networks running on conventional computer chips are highly accurate, but the chips burn a lot of power. To process a piece of information, they must first fetch the stored data, manipulate it, and then write it back to memory, repeating this sequence over and over.

Intel, IBM and other chip makers are experimenting with an alternative chip design called neuromorphic chips. They process information like a network of neurons in the brain, where each neuron receives input from others in the network and fires if the total input exceeds a threshold.
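The fire-when-input-exceeds-a-threshold behavior described above can be captured by a simple leaky integrate-and-fire model. This is a minimal illustrative sketch, not how Loihi is actually programmed, and the parameter values are made up:

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: accumulates weighted input,
    emits a spike (1) when its membrane potential crosses the threshold,
    then resets to baseline."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = leak * v + x      # integrate input; old potential leaks away
        if v >= threshold:
            spikes.append(1)  # fire
            v = 0.0           # reset to baseline
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.4, 0.4, 0.4, 0.0, 0.6, 0.6]))  # → [0, 0, 1, 0, 0, 1]
```

Because each such neuron only does work when a spike arrives, a chip built from them avoids the constant fetch-and-store cycle of a conventional processor.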

The new chips are designed with hardware equivalents of networked neurons. Artificial intelligence programs also rely on networks of artificial neurons, but in ordinary computers those neurons exist only in software, so their data lives in separate memory chips.

In a neuromorphic chip, memory and computing run together, making it much more energy efficient: Our brains use only 20 watts of power, about the same as an energy-saving light bulb. But to use this architecture, computer scientists have to rethink how they perform functions like LSTMs.

It was this task that Wolfgang Maass, a computer scientist at the Graz University of Technology, took on. He and his colleagues tried to replicate a memory mechanism in our brains: afterhyperpolarization (AHP) currents in biological neural networks.

After a neuron in the brain fires, it usually returns to its baseline and remains dormant until it receives enough input again to exceed its threshold.

But with AHP currents, after a neuron has fired once, it is temporarily inhibited from firing again, a quiet period that helps the network of neurons store information using less energy.
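One common way to model this temporary inhibition is to raise the neuron's firing threshold after each spike and let it decay back, so the elevated threshold itself becomes a fading trace of recent activity. The sketch below is an illustrative adaptive-threshold neuron in this spirit, not the authors' actual Loihi implementation, and all parameter values are invented:

```python
def simulate_alif(inputs, base_threshold=1.0, leak=0.9,
                  adapt_jump=0.5, adapt_decay=0.8):
    """Integrate-and-fire neuron with an adaptive threshold: each spike
    raises the threshold, which then decays back toward its base value.
    The raised threshold briefly suppresses firing, and its slow decay
    stores a trace of the neuron's recent activity."""
    v, a = 0.0, 0.0                  # membrane potential, adaptation trace
    spikes = []
    for x in inputs:
        v = leak * v + x
        if v >= base_threshold + a:  # threshold raised by past spikes
            spikes.append(1)
            v = 0.0
            a += adapt_jump          # firing inhibits the next firing
        else:
            spikes.append(0)
        a *= adapt_decay             # the inhibition fades over time
    return spikes

# Under constant drive a plain neuron would fire every step;
# adaptation stretches out the gaps between spikes.
print(simulate_alif([1.2] * 6))  # → [1, 0, 1, 0, 1, 0]
```

With `adapt_jump=0.0` the model reduces to an ordinary integrate-and-fire neuron, which makes the energy argument concrete: the adapting neuron fires fewer spikes for the same input.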

Maass and colleagues integrated the AHP firing model into neuromorphic neural network software and ran their network through two standard AI tests. The first task was to recognize a handwritten digit “3” in an image split into hundreds of individual pixels.

In doing so, they found that when running on one of Intel’s Loihi neuromorphic chips, their algorithm was 1,000 times more power efficient than LSTM-based image recognition algorithms running on conventional chips.

In a second test, in which a computer had to answer questions about the meaning of stories up to 20 sentences long, the neuromorphic system was 16 times more efficient than algorithms running on conventional computer processors, the authors report in the journal Nature Machine Intelligence this week.

Maass notes that the second test was conducted on a series of 22 first-generation Loihi chips from Intel, which consume a relatively large amount of power when communicating with each other.

Since then, the company has released a second generation of Loihi chips, each with more neurons, which it says should reduce the need for inter-chip communication and thus make software work more efficiently.

At the moment, only a few neuromorphic chips are commercially available, so large-scale adoption is unlikely to come quickly.

But cutting-edge artificial intelligence algorithms like the one Maass demonstrated could help these chips gain a commercial footing, says Anton Arkhipov, a computational neuroscientist at the Allen Institute. “At the very least, it will help speed up the work of artificial intelligence systems.”

This, in turn, could lead to new applications such as AI-powered digital assistants that can not only tell someone the name of a person in a photo, but also remember where they met and tell stories from their shared past.

By incorporating other firing patterns found in the brain, Maass says, future neuromorphic chips may even one day help reveal how multiple neuronal firing patterns work together to create consciousness.

