(ORDO NEWS) — Scientists can now “decipher” people’s thoughts without even touching their heads, according to The Scientist.
In the past, mind reading methods relied on implanting electrodes deep into people’s brains. The new method, described in a report published Sept. 29 in the bioRxiv preprint database, instead relies on a non-invasive brain scanning technique called functional magnetic resonance imaging (fMRI).
fMRI monitors the flow of oxygenated blood through the brain, and because active brain cells require more energy and oxygen, this information provides an indirect estimate of brain activity.
By its very nature, this scanning method cannot capture brain activity in real time because the electrical signals emitted by brain cells move much faster than blood flows through the brain.
What is remarkable is that the study’s authors found they could still use this imperfect proxy to decode the semantic meaning of people’s thoughts, even though the method cannot produce a literal, word-for-word translation.
“If you had asked any cognitive neuroscientist in the world 20 years ago if this was doable, they would have laughed you out of the room,” senior author Alexander Huth, a neuroscientist at the University of Texas at Austin, told The Scientist.
For the new study, which has not yet been peer-reviewed, the team scanned the brains of one woman and two men in their 20s and 30s. Each participant listened to 16 hours of various podcasts and radio shows over multiple scan sessions.
The team then fed those scans to a computer algorithm they called a “decoder” that compared patterns in audio to patterns in recorded brain activity.
The algorithm could then take an fMRI recording and generate a story from it, and that story would match the original plot of the podcast or radio show “pretty well,” Huth told The Scientist.
In other words, the decoder could infer which story each participant heard based on their brain activity.
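The general recipe described above — fit an “encoding model” that predicts brain activity from features of the stimulus, then decode by checking which candidate stimulus best explains a new recording — can be sketched with simulated data. Everything below (the dimensions, the noise level, the ridge-regression solver, the correlation-based matching) is an illustrative assumption for the sketch, not the study’s actual model, which is considerably more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the study)
n_train, n_test = 200, 5       # training timepoints, candidate test stories
n_features, n_voxels = 10, 50  # stimulus features, fMRI voxels

# Hypothetical ground-truth linear mapping from stimulus features to voxels
W_true = rng.normal(size=(n_features, n_voxels))

# Training data: stimulus features and noisy simulated brain responses
X_train = rng.normal(size=(n_train, n_features))
Y_train = X_train @ W_true + 0.5 * rng.normal(size=(n_train, n_voxels))

# Fit a ridge-regression encoding model: features -> voxel responses
alpha = 1.0
W_hat = np.linalg.solve(
    X_train.T @ X_train + alpha * np.eye(n_features),
    X_train.T @ Y_train,
)

# Decoding: given a new brain response, pick the candidate story whose
# predicted response correlates best with the observed one
X_candidates = rng.normal(size=(n_test, n_features))
true_story = 2
y_observed = X_candidates[true_story] @ W_true + 0.5 * rng.normal(size=n_voxels)

predictions = X_candidates @ W_hat
scores = [np.corrcoef(p, y_observed)[0, 1] for p in predictions]
decoded = int(np.argmax(scores))
print(decoded)  # index of the candidate story the model identifies
```

In this toy setup the decoder identifies which of the candidate “stories” produced the observed response, which is the identification task described above; generating a new narrative from brain activity, as the study does, requires far richer language features and a generative search rather than a fixed candidate list.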
Nonetheless, the algorithm did make some mistakes, such as swapping characters’ pronouns and confusing first and third person. It “knows pretty well what’s going on, but it doesn’t know who’s doing what,” Huth said.
In additional tests, the algorithm was able to quite accurately describe the plot of a silent film that participants watched in the scanner. It could even retell stories that participants imagined telling in their heads.
In the long term, the research team intends to develop this technology so that it can be used in brain-computer interfaces designed for people who cannot speak or type.