Artificial intelligence has learned to understand body language

Computer systems already handle face recognition well enough, and some can even distinguish emotions. But people convey information not only through their faces but also through a variety of gestures and poses. So researchers at Carnegie Mellon University's Robotics Institute set out to create a program that can easily “read” human body language and interpret the information it carries.
For their setup, the researchers used 500 high-precision cameras mounted inside a huge two-story dome, the Panoptic Studio. Five hundred cameras produce an enormous amount of data, so analyzing all the visual information from even a single frame requires significant computing resources. The system simultaneously evaluates facial expression and the positions of the head, trunk, legs, and all the fingers. As project leader Yaser Sheikh put it:
“A person expresses feelings and emotions through facial expressions, movements, and body posture no less than through the voice. Until recently, however, computers remained “blind” to the language of our bodies. We managed to “teach” artificial intelligence something we humans do almost from birth. Now we plan to improve the AI so that the system needs only a single camera for its analysis. We also want to enhance the system so that it can decipher the body language of an entire group of people interacting with each other.”
The data behind the new method of decoding human body language, along with its source code, are now freely available. The researchers plan to present their work in more detail at the Computer Vision and Pattern Recognition Conference (CVPR) 2017, which will be held from July 21 to 26. In the meantime, you can get acquainted with the new system in the video below.
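Since the code is open, it is possible to experiment with body-pose estimation on ordinary hardware. As a minimal sketch (not the CMU pipeline itself), a publicly released Caffe body-pose model can be run on a single image with OpenCV's DNN module; the file paths, the 368-pixel input size, and the 0.1 confidence threshold below are assumptions for illustration:

import cv2

# Minimal sketch: run a body-pose network on one image with OpenCV's DNN
# module. The model file names refer to publicly released Caffe files;
# their local availability is an assumption.
proto = "pose_deploy_linevec.prototxt"
weights = "pose_iter_440000.caffemodel"
net = cv2.dnn.readNetFromCaffe(proto, weights)

frame = cv2.imread("person.jpg")
h, w = frame.shape[:2]

# Scale pixel values to [0, 1] and resize to the network's input size.
blob = cv2.dnn.blobFromImage(frame, 1.0 / 255, (368, 368),
                             (0, 0, 0), swapRB=False, crop=False)
net.setInput(blob)
out = net.forward()  # shape: (1, num_parts, H, W) confidence maps

# Take the single most confident location for each body part.
points = []
for i in range(out.shape[1]):
    heat_map = out[0, i, :, :]
    _, conf, _, point = cv2.minMaxLoc(heat_map)
    x = int(w * point[0] / out.shape[3])
    y = int(h * point[1] / out.shape[2])
    points.append((x, y) if conf > 0.1 else None)

print(points)

The full system goes considerably further: it groups keypoints from many people into individual skeletons and, in the Panoptic Studio, fuses views from hundreds of cameras, whereas this sketch only picks one candidate location per body part in a single image.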
Vladimir Kuznetsov