(ORDO NEWS) — Scientists have figured out which signals of neuronal activity make it possible to recognize a person’s inner monologue. This data will help create communication devices for people with speech disorders.
Many diseases can make it impossible for a person to speak. However, patients often retain the capacity for internal speech: they can still say words and sentences to themselves.
Neuroscientists at the University of Geneva have identified brain signals associated with internal speech. In the future, these results could be used to create interfaces that help people with speech disorders communicate. The article was published in the journal Nature Communications.
Decoding overt speech from neuronal activity is a relatively tractable task. The sounds a person utters can be aligned with the brain signals recorded at the same moment, and a machine-learning algorithm can then be trained on the paired data.
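The supervised setup described above can be sketched in a few lines. This is an illustrative toy, not the study's actual pipeline: the data are synthetic, and the feature representation (one power value per channel per trial) and the logistic-regression classifier are assumptions for the sake of a runnable example.

```python
# Toy version of the decoding setup: neural activity windows, each labeled
# with the sound uttered during that window, train a classifier.
# All data here are synthetic; the real study used intracranial recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_trials, n_channels = 200, 16
# Simulated neural feature vectors (e.g. per-channel signal power per trial).
X = rng.normal(size=(n_trials, n_channels))
# Simulated labels: which of two sounds was uttered in each time window.
y = rng.integers(0, 2, size=n_trials)
# Make the problem learnable: shift one channel according to the label.
X[:, 0] += 3.0 * y

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print(f"decoding accuracy: {clf.score(X_test, y_test):.2f}")
```

Because the labels (the uttered sounds) are directly observable, building such a training set is straightforward; the difficulty with inner speech, as the article goes on to explain, is that these labels are hidden.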
Inner speech is far less simple. Scientists have no precise record of which words a person reproduces internally, in what order, or at what speed. Fewer brain areas are active during imagined speech, and the neuronal signals are much weaker than during overt speech.
To capture these weak signals, the researchers used electrocorticography (ECoG), in which electrodes recording the electrical activity of neurons are placed directly on the cerebral cortex. This yields less noisy signals than electroencephalography (EEG), where the electrodes sit on the scalp.
Electrodes can be implanted in the human brain only for medical reasons. The study therefore involved 13 patients with epilepsy who already had electrodes implanted to localize epileptogenic zones.
During the experiment, each participant spoke words aloud and then repeated them using only their inner voice. Meanwhile, the scientists recorded the electrical activity of neurons, estimated the frequencies of brain rhythms and identified the active zones.
High-frequency brain rhythms (80–150 Hz) carried more information for decoding overt speech but were uninformative for inner speech; low frequencies proved better suited to it. In addition, imagined speech was particularly associated with the beta rhythm (12–18 Hz).
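Comparing activity across the frequency bands mentioned above comes down to band-pass filtering the recording and measuring power in each band. A minimal sketch, with a synthetic signal standing in for a real recording and an assumed 1000 Hz sampling rate; only the band edges (80–150 Hz and 12–18 Hz) come from the article:

```python
# Estimate power in two frequency bands of a neural-like signal:
# high gamma (80-150 Hz) vs. the beta rhythm (12-18 Hz).
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000  # sampling rate in Hz (assumed; typical for intracranial recordings)

def band_power(signal, low, high, fs):
    """Mean power of the signal after band-pass filtering to [low, high] Hz."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    return np.mean(filtered ** 2)

# Synthetic 1-second trace: a strong 100 Hz component plus weak noise,
# mimicking prominent high-gamma activity during overt speech.
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 100 * t) + 0.1 * rng.normal(size=fs)

gamma = band_power(signal, 80, 150, fs)  # informative for overt speech
beta = band_power(signal, 12, 18, fs)    # associated with inner speech
print(f"high-gamma power: {gamma:.3f}, beta power: {beta:.3f}")
```

In the study's terms, a decoder for overt speech would weight the high-gamma feature heavily, while a decoder for inner speech would rely on the lower-frequency bands instead.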
During overt speech, articulation could be clearly read from brain signals: the scientists observed increased activity in the areas responsible for the mechanical production of sounds. Inner speech, by contrast, more strongly encoded the perception and phonetic characteristics of words.
Both types of speech engaged a large left-hemisphere region associated with language, auditory processing and memory, especially the temporal lobe, where Wernicke's area handles the perception of words and linguistic symbols.
The findings will make it possible to build algorithms that decode a person's inner speech from neuronal activity, and, in the future, brain-computer interfaces that help people with speech disorders communicate.