(ORDO NEWS) — In the moment between reading the phone number and entering it into the phone, you may find that the numbers have mysteriously gone astray – even if you remember the first digits, the last ones can still blur in an incomprehensible way. Was the 6 before or after the 8? Are you sure?
Holding on to these bits of information long enough to act on them relies on an ability called visual working memory. For years, scientists have debated whether working memory has room for only a few items at a time, or only limited room for detail: Perhaps our minds hold either a few crystal-clear memories or a larger number of hazier fragments.
Uncertainty in working memory may be related to the surprising way the brain tracks and exploits ambiguity, according to a recent paper by NYU neuroscience researchers in the journal Neuron.
Using machine learning to analyze brain scans of people performing a memory task, they found that neural signals encode an estimate of what people thought they saw, and that the statistical distribution of noise in those signals encoded the uncertainty of the memory.
The uncertainty of your perceptions may be part of what your brain represents in its memories. And this sense of uncertainty can help the brain make better decisions about how to use its memories.
The results of the study suggest that “the brain uses this noise,” said Clayton Curtis, professor of psychology and neuroscience at New York University and one of the authors of the new work.
This work adds to a growing body of evidence that, even if people are not very good at statistics in their daily lives, the brain regularly interprets its sensory impressions of the world, both current and recalled, in terms of probability. This discovery offers a new way to understand the importance we place on our perception of an uncertain world.
Predictions based on the past
Neurons in the visual system fire in response to certain sights, such as a slanted line, a certain pattern, or even cars or faces, sending a signal to the rest of the nervous system. But on their own, individual neurons are noisy sources of information, so “it’s unlikely that individual neurons are the currency the brain uses to infer what it’s seeing,” Curtis said.
It is more likely that the brain integrates information from populations of neurons. The important question is how it does so.
For example, it could average information from cells: If some neurons fire most strongly at the sight of a 45-degree angle and others at a 90-degree angle, the brain could weight and average their signals to represent a 60-degree angle in the visual field. Or perhaps the brain takes a winner-take-all approach, in which the most active neurons are treated as indicators of what is being perceived.
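The two candidate schemes can be illustrated with a toy calculation (a sketch with made-up firing rates, not a model of real neural data): in the weighted average, each neuron "votes" for its preferred angle in proportion to how strongly it fires; in winner-take-all, the most active neuron alone decides.

```python
import numpy as np

# Hypothetical tuning: each neuron fires most for its preferred angle.
preferred = np.array([45.0, 90.0])  # preferred orientations (degrees)
rates = np.array([10.0, 5.0])       # observed firing rates (spikes/s)

# Weighted average: each neuron votes for its preferred angle,
# weighted by its firing rate.
weighted_avg = np.sum(preferred * rates) / np.sum(rates)
print(weighted_avg)  # 60.0 — the article's example

# Winner-take-all: trust only the most active neuron.
winner = preferred[np.argmax(rates)]
print(winner)  # 45.0
```

With these made-up rates, the weighted scheme lands on 60 degrees, matching the example in the text, while winner-take-all discards the weaker neuron's information entirely.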
“But there’s a new way of thinking about it, influenced by Bayesian theory,” Curtis said.
Named after its developer, the 18th-century mathematician Thomas Bayes, but independently discovered and popularized later by Pierre-Simon Laplace, Bayesian theory incorporates uncertainty into its approach to probability.
Bayesian inference considers how likely an outcome is, given what is known about the circumstances. Applied to vision, this approach would mean that the brain interprets neural signals by constructing a probability function: Based on data from previous experience, what stimuli were most likely to have caused this firing pattern?
Laplace recognized that conditional probabilities are the most accurate way to talk about any observation, and in 1867 the physician and physicist Hermann von Helmholtz connected them to the calculations our brains can make during perception.
However, few neuroscientists paid much attention to these ideas until the 1990s and early 2000s, when researchers began to find in behavioral experiments that people do something resembling probabilistic inference, and Bayesian methods began to prove useful in some models of perception and motor control.
“People have started talking about the brain being Bayesian,” said Wei Ji Ma, professor of neuroscience and psychology at New York University and co-author of the new paper in Neuron.
In a 2004 review, Alexandre Pouget (now a professor of neuroscience at the University of Geneva) and David Knill of the University of Rochester argued for the “Bayesian coding hypothesis”, which states that the brain uses probability distributions to represent sensory information.
At the time, there was almost no evidence for this from research on neurons. But in 2006, Ma, Pouget and their colleagues at the University of Rochester presented compelling evidence that populations of simulated neurons can perform optimal Bayesian inference calculations.
Further work by Ma and others over the past ten years has provided additional confirmation, through electrophysiology and neuroimaging, that the theory is applicable to vision, using machine learning programs called Bayesian decoders to analyze real neural activity.
Neuroscientists have used decoders to predict what people are looking at from fMRI (functional magnetic resonance imaging) scans of their brains. The programs can be trained to find connections between a presented image and the pattern of blood flow and neural activity that arises when a person sees it.
Instead of making a single guess—for example, that the subject is looking at an 85-degree angle—Bayesian decoders create a probability distribution.
The mean of the distribution is the most likely prediction of what the subject is looking at. The standard deviation, which describes the width of the distribution, is thought to reflect the subject’s uncertainty about what they see (85 degrees, or maybe 84 or 86?).
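The decoder's output can be sketched in code (a toy simulation with invented tuning curves and Poisson spiking noise, not the study's actual fMRI pipeline): the posterior's mean is the decoded angle, and its standard deviation is the uncertainty.

```python
import numpy as np

# A toy Bayesian decoder over stimulus orientations. All numbers
# (population size, tuning width, peak rate) are hypothetical.
angles = np.arange(0.0, 180.0, 1.0)      # candidate orientations
preferred = np.linspace(0.0, 180.0, 12)  # preferred angles of 12 neurons
width, peak = 20.0, 30.0                 # tuning width (deg), peak rate

def tuning(theta):
    """Expected firing rate of each neuron for orientation theta."""
    return peak * np.exp(-0.5 * ((preferred - theta) / width) ** 2) + 1.0

rng = np.random.default_rng(0)
true_angle = 85.0
observed = rng.poisson(tuning(true_angle))  # noisy population response

# Posterior over angles: Poisson log-likelihood plus a flat prior.
log_like = np.array([np.sum(observed * np.log(tuning(a)) - tuning(a))
                     for a in angles])
posterior = np.exp(log_like - log_like.max())
posterior /= posterior.sum()

mean = np.sum(angles * posterior)                        # best guess
std = np.sqrt(np.sum((angles - mean) ** 2 * posterior))  # uncertainty
print(f"decoded {mean:.1f} deg +/- {std:.1f} deg")
```

A sharply peaked posterior yields a small standard deviation (a confident readout); a flat one yields a large standard deviation (a fuzzy memory), which is the quantity the study compared against people's reported uncertainty.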
In a recent study, Curtis, Ma and colleagues applied this idea to working memory. First, to test whether a Bayesian decoder could track people’s memories rather than their perceptions, they had subjects in an fMRI machine look at the center of a circle with a dot around its perimeter.
After the dot disappeared, the volunteers were asked to shift their gaze to where they remembered the dot to be.
The researchers fed the decoder fMRI data from 10 brain regions involved in vision and working memory, recorded during the memory task.
The team checked whether the means of the distributions of neural activity matched the memory data – where the subjects thought the dot was – or whether they instead reflected where the dot actually was. In six regions, the means did match memories more closely, clearing the way for a second experiment.
The Bayesian coding hypothesis suggests that the width of the distributions in at least some of these brain regions should reflect people’s confidence in what they remember. “If it’s very flat, and you’re just as likely to hit either the extremes or the middle, then your memory should be more fuzzy,” Curtis said.
To gauge people’s uncertainty, the researchers asked them to place bets on the remembered location of the dot. The subjects were motivated to be both accurate and precise: they earned more points if they bet on a smaller range of locations, and earned nothing if the bet missed the true location.
The bets were essentially a self-reported measure of uncertainty, so the researchers could look for correlations between the bets and the standard deviation of the decoder’s distribution. In two regions of the visual cortex, V3AB and IPS1, the standard deviation of the distribution was consistently associated with the subjects’ reported uncertainty.
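The logic of that analysis can be sketched with synthetic numbers (the bet sizes and decoder widths below are invented, not the study's data): if both the bets and the decoder's standard deviation track a common underlying uncertainty, they should correlate across trials.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200

# A latent per-trial memory noise level, plus two imperfect readouts
# of it: the subject's bet size and the decoder's standard deviation.
true_uncertainty = rng.uniform(2.0, 15.0, n_trials)
bet_size = true_uncertainty + rng.normal(0.0, 3.0, n_trials)
decoder_std = true_uncertainty + rng.normal(0.0, 3.0, n_trials)

# If both readouts track the latent uncertainty, they correlate.
r = np.corrcoef(bet_size, decoder_std)[0, 1]
print(f"Pearson r = {r:.2f}")  # positive, but well below 1
```

Because both measures are noisy proxies, the correlation comes out well below 1 even in this idealized setup, which is consistent with the modest correlations the researchers note below.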
The observed activity patterns could mean that the brain is using the same neural populations that encode the memory of the angle to encode confidence in that memory, rather than storing uncertainty information in a separate part of the brain. “It’s an effective mechanism,” Curtis said. “That’s what’s really great, because it’s co-coded into the same thing.”
However, “one has to understand that the actual correlation is very low,” said Paul Bays, a neuroscientist at the University of Cambridge who also studies visual working memory.
Relative to the neurons of the visual cortex, fMRI scans are very coarse: Each data point in a scan reflects the activity of thousands, perhaps even millions, of neurons. Given the limitations of the technology, it is remarkable that the researchers were able to make these observations at all.
“We’re using a very noisy measurement to separate a very tiny thing,” said Hsin-Hung Lee, an NYU postdoctoral fellow and first author of the new work.
Future research, he said, could sharpen the correlations by inducing a wider range of uncertainty during the task – showing some images that subjects can be fully confident about and others that leave them unsure.
As intriguing as the results are, they can only be a preliminary and partial answer to the question of how uncertainty is encoded.
“This work proves one particular case, which is that uncertainty is encoded in the level of activity [in groups of neurons],” Bays said. “But there’s only so much you can do with fMRI to demonstrate that this is exactly what’s happening.”
Other interpretations are also possible. Memory and its uncertainty may not be stored by the same neurons – the uncertainty-coding neurons may simply be nearby.
Or perhaps something other than the firing of individual neurons is more strongly correlated with uncertainty, but this cannot be determined with current methods. Ideally, different types of evidence—behavioral, computational, and neural—should line up and lead to the same conclusion.
But the idea that we walk around with probability distributions in our heads all the time has a certain beauty. And it’s probably not just vision and working memory that are structured in this way, Pouget said. “This Bayesian theory is extremely general,” he said. “There’s a general computational factor at play here,” whether the brain is making a decision, judging whether you’re hungry, or plotting a route.
However, if the calculation of probabilities is such an integral part of how we perceive and think about the world, why have people gained a reputation for being bad at probabilities?
Well-known studies, primarily in economics and the behavioral sciences, have shown that people make systematic errors of judgment, overestimating the likelihood of some dangerous events and underestimating others. “When you ask people to evaluate the probability explicitly and verbally, they fail. There is no other word,” Pouget said.
But that kind of explicit assessment, posed through verbal tasks and diagrams, depends on a cognitive system in the brain that evolved much more recently than the system used for tasks like the one in this study, Ma said.
Perception, memory, and motor behavior have been honed by a much longer process of natural selection, in which failing to spot a predator or misjudging a danger meant death. For eons, the ability to make instantaneous judgments about a remembered perception, perhaps including an estimate of its uncertainty, kept our ancestors alive.