US, WASHINGTON (ORDO NEWS) — A team of researchers from the University of California, San Francisco has published an article in the journal Nature Neuroscience on the use of AI systems to decipher a person's verbal thoughts.
Deciphering a person's thoughts is a complex problem, both ethically and technically. Beyond the risk of criminals and terrorists gaining access to the mind, or of total control over citizens (a classic dystopian scenario we may now be close to realizing), reading thoughts could genuinely help people with speech and motor impairments.
The recognition systems that scientists are working on today are most often based on reading the brain's electrical activity while it processes a stimulus. Signals recorded with EEG or related methods are fed into neural networks, which in theory should then recognize what a person is thinking about.
Several key problems remain: the limited set of stimuli a neural network can systematize and later detect; the fact that such a network cannot yet recognize stimuli that are similar to, but not exactly the same as, those in the training sample; and low overall accuracy.
EEG is a non-invasive method of recording signals (via a cap placed on the subject's head), but it is less accurate than electrocorticography (ECoG), in which electrodes record data directly from the surface of the cerebral cortex. The second method is more precise, but its invasiveness limits how widely it can be applied.
However, for people whose speech has been impaired by brain surgery, injury, or stroke, its use is entirely feasible; moreover, the results the scientists obtained will help the field move forward and can serve as a basis for further steps.
In the new work, the research team reached a new milestone: their system was able to decipher whole sentences with high accuracy. A neural network more advanced than its predecessors was tested on four women with epilepsy, each of whom already had electrodes implanted in the brain to monitor her condition.
The researchers used the electrodes to measure cortical activity in different parts of the brain while the women read aloud sentences from a sample the scientists provided. Each sentence was read twice: first to train the neural network, then to test it.
After processing the signals, the first neural network (the encoder) created an abstract representation from them, encoded for its own "internal use". The second neural network (the decoder) then translated this representation into individual words.
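The two-stage pipeline described above can be sketched in miniature. This toy example is not the authors' actual architecture (their study used recurrent neural networks trained on real cortical recordings); it only illustrates the encoder/decoder idea. All signal values, sentences, and function names here are invented for illustration.

```python
def encode(signal):
    """'Encoder': compress a variable-length list of channel readings
    into a fixed-size abstract representation (here, two simple
    summary statistics stand in for the learned representation)."""
    n = len(signal)
    mean = sum(signal) / n
    energy = sum(x * x for x in signal) / n
    return (mean, energy)

def decode(representation, codebook):
    """'Decoder': map the abstract representation back to words, here
    by nearest-neighbor lookup against the representations of known
    sentences (a real decoder would generate words sequentially)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(codebook, key=lambda sent: dist(codebook[sent], representation))

# Training phase: each sentence is read once and its representation stored.
training_signals = {
    "good morning": [0.1, 0.2, 0.1, 0.3],
    "thank you":    [0.9, 0.8, 1.0, 0.7],
}
codebook = {sent: encode(sig) for sent, sig in training_signals.items()}

# Test phase: a second, slightly noisy reading of one of the sentences.
test_signal = [0.12, 0.19, 0.11, 0.28]
print(decode(encode(test_signal), codebook))  # -> good morning
```

The nearest-neighbor lookup also makes the vocabulary limitation concrete: this scheme can only ever output sentences it has already seen during training.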
By training on several participants at once and using an additional set of sentences, the authors achieved high accuracy, as well as confidence that they had not fallen into the classic trap in which a neural network simply memorizes the training sample.
The researchers found that, at best, their system's error rate was only 3%. They emphasize, however, that the neural networks worked with a very limited vocabulary, drawn from just 30 and 50 sentences, respectively, in the two samples, far smaller than the vocabulary of an ordinary person, who can recognize hundreds of thousands of words. Nevertheless, as the scientists note, for someone unable to speak this could be a real miracle and open up great opportunities.
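Error rates like the one reported are conventionally measured as word error rate: the word-level edit distance between the true sentence and the decoded one, divided by the number of words in the true sentence. A minimal sketch of that standard computation follows; the example sentences are invented, not from the study.

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: word-level Levenshtein (edit) distance between
    the true sentence (reference) and the decoded sentence (hypothesis),
    divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One wrong word in a five-word sentence gives a 20% word error rate.
print(word_error_rate("the little boy was asleep",
                      "the little dog was asleep"))  # -> 0.2
```

A 3% word error rate means roughly one wrong word in every 33, which is comparable to professional human transcription of ordinary speech.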
This article was written and prepared by our foreign editors from different countries around the world; the material was edited and published by Ordo News staff in our US newsroom.