Developers at Purdue University created a neural network that analyzes fMRI brain scans taken while a person watches video and determines, in near real time, what the person was watching.
The experiment involved three subjects who were shown roughly a thousand video clips. During the viewing sessions the scientists collected a large amount of fMRI data, which they then used to train a convolutional neural network to map brain activity to the videos the subjects were watching. From an fMRI image, the network quickly learned to determine correctly what a volunteer was looking at at the moment the scan was taken.
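The core idea — learning a mapping from fMRI voxel patterns to the content being viewed — can be sketched as follows. This is a hypothetical illustration on synthetic data: the study used a convolutional neural network on real scans, whereas here a simple least-squares linear decoder stands in for it, and all sizes and names are invented for the example.

```python
import numpy as np

# Hypothetical illustration: decode which video category a subject was
# "watching" from a simulated fMRI voxel pattern. A least-squares linear
# decoder on synthetic data stands in for the study's convolutional network.

rng = np.random.default_rng(0)
n_voxels, n_categories, n_trials = 200, 5, 400

# Assume each video category evokes a characteristic (synthetic) voxel response.
templates = rng.normal(size=(n_categories, n_voxels))
labels = rng.integers(0, n_categories, size=n_trials)
scans = templates[labels] + 0.5 * rng.normal(size=(n_trials, n_voxels))

# One-hot targets; fit decoder weights by least squares.
targets = np.eye(n_categories)[labels]
W, *_ = np.linalg.lstsq(scans, targets, rcond=None)

# Decode held-out synthetic scans with the fitted weights.
test_labels = rng.integers(0, n_categories, size=100)
test_scans = templates[test_labels] + 0.5 * rng.normal(size=(100, n_voxels))
pred = (test_scans @ W).argmax(axis=1)
accuracy = (pred == test_labels).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

On this easy synthetic problem even a linear decoder scores near 100%; real fMRI decoding is far noisier, which is part of why the study's convolutional approach is notable.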
In addition, the neural network learned to decode data from people it had not been trained on: given fMRI recordings from new volunteers, it achieved equally high results on data from subjects with normal vision and from those with vision defects.
Through this study, the scientists were able not only to decode thoughts but also to find out which parts of the brain are responsible for recognizing images and video. It turns out that the brain divides a video into separate components: for example, if a person sees a car moving against the background of a wall, one area of the brain recognizes the wall and another the car. In this way the scientists were able to trace how the brain compares separate blocks of information and reassembles them into a single picture.
A neural network taught to “read minds”
Vyacheslav Larionov