(ORDO NEWS) — Special training prepared the dogs to lie still and quiet in an MRI scanner for long periods. This made it possible to collect unique data on their brain activity in response to visual stimuli and to teach an AI to recognize it.
In recent years we have repeatedly heard how neural networks are learning to "read the thoughts" of a person. They do this using MRI scans that reflect brain activity at a given moment: if a person is shown a picture of a house, certain neural patterns fire; if faces, others.
The differences between these patterns are often barely perceptible, but with enough such data a neural network can be taught to notice them. This makes it possible to "read thoughts", reconstructing the original image from its "reflection" in brain activity.
But people are one thing, and animals, which are hard to keep lying motionless inside an MRI machine for long periods, are quite another. Until now, imaging them has required sedatives and anesthetics, which makes it difficult to collect data on the awake brain and its activity.
To solve this problem, the team of Emory University professor Gregory Berns has spent many years developing its own dog-training method, which teaches the animals to wait calmly inside an MRI scanner.
Thanks to this, a few years ago the scientists obtained the first MRI images of the brains of dogs that were neither sedated nor restrained.
In their new work, the scientists went even further, supplementing such imaging for the first time with "mind reading" neural networks. The experiments revealed interesting differences in how the visual cortex works in humans and dogs.
The experiments were carried out with two pre-trained dogs, Daisy and Bubo. While they lay in the scanner, they were shown several 30-minute videos with footage relevant to dogs: dogs running, playing with people and chasing birds, passing cars, cats, and so on.
The videos were annotated, which made it possible to link brain activity at each point in time with the objects and actions being shown on screen.
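This linking step can be pictured as matching timestamped video annotations to the sequence of brain-scan volumes. The sketch below is purely illustrative: the repetition time (TR), the label names, and the interval format are all assumptions, not details from the study.

```python
# Hypothetical sketch: aligning video annotations with fMRI volumes.
# Assumed (not from the article): a TR of 2.0 seconds and annotations
# stored as (start_sec, end_sec, label) intervals.

TR = 2.0  # seconds per scan volume (assumed)

# Example annotation track for one stretch of video
annotations = [
    (0.0, 12.0, "dog_running"),
    (12.0, 25.0, "car_passing"),
    (25.0, 40.0, "playing_with_human"),
]

def label_for_volume(vol_index, annotations, tr=TR):
    """Return the annotation label active at the midpoint of a volume."""
    t = (vol_index + 0.5) * tr
    for start, end, label in annotations:
        if start <= t < end:
            return label
    return None  # unannotated gap

# Label the first 20 volumes (40 seconds of scanning)
labels = [label_for_volume(i, annotations) for i in range(20)]
print(labels[0], labels[7], labels[15])
```

Each labeled volume then becomes one training example for the classifier: a vector of brain activity paired with what was on screen at that moment.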
In addition, the scientists collected MRI data from Daisy and Bubo in various waking states when no video was playing. As a control, similar experiments were carried out with human volunteers.
The collected data were used to train a machine learning model. As a result, it was able to determine from the MRI data which objects and actions a person saw, with up to 99 percent accuracy.
But with the dogs the result was quite different: the neural network learned to correctly recognize actions from the MRI data in 75-88 percent of cases, but it could not reliably identify objects.
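To make the decoding idea concrete, here is a minimal sketch of classifying labels from activity vectors. Everything in it is an assumption for illustration: the data are synthetic, the "voxel" patterns are invented, and the model is a simple nearest-centroid classifier, not the one used in the study.

```python
# Hypothetical sketch: decoding action labels from brain-activity vectors.
# Synthetic data and a nearest-centroid classifier; the study's actual
# model and features are not described in the article.
import random

random.seed(0)

N_VOXELS = 30
classes = ["running", "sniffing", "eating"]

def make_sample(class_index, noise=0.3):
    """Synthetic activity vector: a class-specific block of active voxels plus noise."""
    base = [1.0 if class_index * 10 <= v < (class_index + 1) * 10 else 0.0
            for v in range(N_VOXELS)]
    return [b + random.gauss(0, noise) for b in base]

# Build a labeled training set: 30 samples per action label
data = [(make_sample(ci), name)
        for ci, name in enumerate(classes) for _ in range(30)]

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(N_VOXELS)]

# One mean activity pattern per action label
centroids = {name: centroid([x for x, lbl in data if lbl == name])
             for name in classes}

def predict(x):
    """Assign the label whose centroid is closest in squared distance."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda name: dist2(x, centroids[name]))

accuracy = sum(predict(x) == lbl for x, lbl in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

On synthetic data with distinct activation patterns such a classifier scores highly; the hard part in real experiments is that the patterns for different stimuli overlap heavily, which is why large amounts of scan data are needed.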
"We humans are very object-oriented," says Professor Berns. "There are ten times more nouns than verbs in English, because we are somewhat obsessed with objects and their names. But dogs seem much less concerned with who or what they see, and more focused on the action taking place."
A similar conclusion could be drawn from the structure of these animals' vision: overall it is much poorer than human vision, but dogs detect motion better than we do.