A group of neurons in a test tube learned to play a computer game

(ORDO NEWS) — Australian biologists connected hundreds of thousands of neurons to a computer and used an ingenious reward system to teach them to act in a coordinated manner.

Such a “proto-brain” quickly mastered the arcade game Pong, in which a ball has to be returned with a virtual paddle.

Scientists at Monash University have shown that groups of several hundred thousand nerve cells grown in vitro can interact and cooperate to learn and perform a common task.

In experiments, such systems learned to play the classic arcade game, as Brett Kagan and his colleagues report in an article published in the journal Neuron.

The authors obtained biological neural networks in vitro using rodent and human stem cells. A system of about 800,000 cells was grown on an array of microelectrodes, which enabled the exchange of signals with a computer; the scientists named it DishBrain, a “brain in a test tube.”

The computer game Pong, a simple two-dimensional version of ping-pong in which a virtual ball is returned with a virtual paddle, served as a test of DishBrain’s ability to adapt and process sensory information efficiently — in other words, to learn.


The key to this was the feedback the neurons received in the form of electrical signals generated by the specially developed SpikeStream software.

It encoded the movement of the game ball: the site of electrical stimulation on the DishBrain array indicated the ball’s position in space, while the stimulation frequency indicated its distance.

The output was encoded in a similar way: the location of neuronal activity determined the direction of the paddle’s movement, and the firing rate determined its speed.
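The article does not publish the actual code, so the Python sketch below is only a toy illustration of this kind of place-and-rate coding; the grid size, frequency range, region names and formulas are assumptions chosen for illustration, not details of SpikeStream itself.

```python
# Illustrative sketch only: a toy version of the spatial/rate coding described above.
# All constants and names here are assumptions, not the actual DishBrain/SpikeStream code.
import numpy as np

N_INPUT_SITES = 8  # assumed number of stimulation sites along the sensory region

def encode_ball(ball_y: float, distance: float) -> tuple[int, float]:
    """Map the ball's vertical position to a stimulation site ("place coding")
    and its distance from the paddle to a stimulation frequency ("rate coding")."""
    site = int(np.clip(ball_y, 0.0, 0.999) * N_INPUT_SITES)  # which electrode to drive
    freq_hz = 4.0 + 36.0 / max(distance, 0.1)                # closer ball -> higher rate
    return site, freq_hz

def decode_paddle(spikes_up: int, spikes_down: int, dt_s: float) -> float:
    """Read out paddle motion from two recording regions: the more active region
    sets the direction, the overall firing rate sets the speed."""
    direction = 1.0 if spikes_up >= spikes_down else -1.0
    speed = (spikes_up + spikes_down) / dt_s                 # spikes/s as a speed proxy
    return direction * speed

# Example: ball near the top of the screen, two paddle-widths away
site, freq = encode_ball(ball_y=0.1, distance=2.0)
paddle_velocity = decode_paddle(spikes_up=12, spikes_down=5, dt_s=0.5)
```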

DishBrain is far simpler than even the most primitive brain: it has no dopamine-based or other reward system. Instead, that role was played by the free-energy principle, according to which living systems tend to minimize entropy — the uncertainty of their environment.
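In Friston’s formulation, this amounts to minimizing “surprise”, the improbability of what the system senses; the variational free energy F is an upper bound on that surprise, so driving F down keeps the system’s world predictable. Stated loosely (this is the standard textbook form, not an equation from the article), with q a belief over hidden states s and o an observation:

```latex
\[
F \;=\; \mathbb{E}_{q(s)}\!\bigl[\ln q(s) - \ln p(o, s)\bigr]
  \;=\; -\ln p(o) \;+\; \mathrm{KL}\!\bigl[\,q(s)\,\|\,p(s \mid o)\,\bigr]
  \;\ge\; -\ln p(o).
\]
```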

“An unpredictable stimulus is applied to the cells, and the system as a whole reorganizes its activity in such a way as to better play the game and minimize randomness,” says Brett Kagan. “You could say that by hitting the ball and getting a predictable response, it creates a more predictable environment for itself.”

If DishBrain made a mistake in the game, it received chaotic electrical signals lasting several seconds in response. If the neurons hit the virtual ball, the response was a short, predictable signal.
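A minimal sketch of such a feedback rule is shown below; the waveforms, durations and amplitudes are placeholders, since the article only specifies chaotic signals lasting several seconds after a miss and a short predictable signal after a hit.

```python
# Minimal sketch of the hit/miss feedback rule described above. The specific
# durations, frequencies and amplitudes are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def feedback_stimulus(hit: bool, dt_ms: float = 1.0) -> np.ndarray:
    """Return a stimulation waveform: predictable on a hit, noisy on a miss."""
    if hit:
        # short, fixed-frequency (predictable) pulse train, about 100 ms long
        t = np.arange(0, 100, dt_ms)
        return np.sign(np.sin(2 * np.pi * 0.1 * t))   # ~100 Hz square-like train
    # several seconds of unstructured, random stimulation after a miss
    t = np.arange(0, 4000, dt_ms)
    return rng.uniform(-1.0, 1.0, size=t.shape)
```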

And the approach worked: within just five minutes the system learned to coordinate the activity of its individual cells, adapting and learning to play successfully.

Systems like DishBrain may find wide application in the future. “The potential of this work is really impressive,” said professor Karl Friston of University College London, author of the free-energy principle.

“In fact, we have a biomimetic sandbox where we can test the effects of drugs and genetic variants — a system built from exactly the same neuronal computing elements that work in your brain and mine.”
