A way to turn thoughts into spoken speech has been found. Speaking out loud is not required

Paralysis is a frightening condition in which, so to speak, the "physiological" functions of the body become uncontrollable, even though the central nervous system itself may be perfectly intact. Scientists have been fighting this condition for a long time, and quite possibly a solution has now been found for one of its associated problems: the loss of the ability to speak. Researchers have recently developed a way to convert brain signals into speech, and artificial intelligence helped them do it.

According to the journal Science, researchers from the Netherlands, Germany and the United States used computational models based on neural networks to reconstruct words and sentences by reading brain signals. To do this, they monitored specific areas of the brain while people read text aloud, gave a speech, or simply listened to recordings.

"We are trying to work out the pattern of artificial neurons that turn on and off at different points in time to reproduce sound. The way these signals translate into speech is different for each person, so computer algorithms have to be able to 'figure out' how to do it," said Nima Mesgarani of Columbia University.

In their work, the researchers relied on data from five people with epilepsy. The network analyzed the behavior of the auditory cortex, which is active both during speaking and during listening, and the computer then reconstructed speech from the signals recorded from these people. The algorithm achieved an accuracy of about 75%.
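To give a rough idea of what such a decoder does, here is a minimal sketch of the general approach: train a model to map frames of neural activity onto time-aligned frames of an audio spectrogram, which could then be turned back into sound by a vocoder. This is not the team's actual code; the array shapes, feature counts and the simple feed-forward model are all placeholder assumptions, and the data here is random noise.

```python
# Illustrative sketch only: NOT the study's actual pipeline or data.
# Shows the general idea of regressing audio spectrogram frames from
# neural-activity frames; all arrays are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Assumed shapes: 2000 time frames, 128 electrode features per frame,
# 80 mel-spectrogram bins per frame of the spoken audio.
neural_frames = rng.standard_normal((2000, 128))      # recorded brain activity
spectrogram_frames = rng.standard_normal((2000, 80))  # time-aligned audio target

X_train, X_test, y_train, y_test = train_test_split(
    neural_frames, spectrogram_frames, test_size=0.2, random_state=0
)

# A small feed-forward network standing in for the decoding model.
decoder = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=200, random_state=0)
decoder.fit(X_train, y_train)

# Predicted spectrogram frames would then go to a vocoder to synthesize
# audible speech; here we just report the reconstruction error.
reconstruction = decoder.predict(X_test)
print("mean squared error:", np.mean((reconstruction - y_test) ** 2))
```

On real recordings, the quality of such a reconstruction is what figures like the 75% accuracy above are meant to capture.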

Another team, led by neuroscientist Miguel Angrick of the University of Bremen in Germany and Christian Herff of Maastricht University in the Netherlands, relied on data from six people who had undergone surgery to remove a brain tumor. A microphone captured their voices as they read single words aloud, while electrodes simultaneously recorded activity from the speech centers of the brain. The network then matched the electrode readings to the audio. In the end, about 40% of the output was recognized correctly.

The third team, from the University of California, San Francisco, reconstructed entire sentences from the brain activity of three epilepsy patients who read sentences aloud. Some sentences were correctly identified in more than 80% of cases.
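Identifying a sentence from a known, closed set is essentially a classification problem, and a toy version of it looks like the sketch below. Again, this is not the UCSF team's method; the number of candidate sentences, trials and features are invented for illustration, and because the features here are random noise the accuracy will sit at chance level rather than the 80%+ reported on real recordings.

```python
# Illustrative sketch only: NOT the UCSF team's actual method or data.
# Shows the general idea of identifying which sentence (from a known set)
# a person was reading, based on a per-trial brain-activity feature vector.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

n_sentences = 10   # assumed closed set of candidate sentences
n_trials = 300     # assumed number of recorded readings
n_features = 256   # assumed length of each trial's neural feature vector

X = rng.standard_normal((n_trials, n_features))   # placeholder brain features
y = rng.integers(0, n_sentences, size=n_trials)   # which sentence was read

# A linear classifier standing in for the decoding model. With random
# placeholder data the score stays near chance (~1/n_sentences).
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated identification accuracy:", scores.mean())
```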

Despite these encouraging results, the systems are still at a very early stage and need further refinement. But if the work succeeds, hundreds of thousands of people around the world will get the chance to speak again.

You don't need a neural network to chat in our Telegram chat, though. Join us!
