DeepMind teaches its AI to think like a human

Last year, the artificial intelligence AlphaGo defeated a world champion at Go for the first time. The victory was unprecedented and unexpected, given the enormous complexity of the ancient Chinese board game. But impressive as AlphaGo's win was (and it has beaten other champions since), it is still considered a "narrow" type of AI, one that can outperform humans only on a limited range of tasks.

So while we are unlikely to beat a computer at Go or chess without the help of another computer, we cannot yet rely on AI for everyday tasks either. AI will not make you tea or park your car.

By contrast, science fiction often portrays AI as "general" artificial intelligence: an intelligence with the same breadth and versatility as a human's. Although we already have AIs of many kinds that can do everything from diagnosing illnesses to driving our cars, we have not yet figured out how to integrate them into something more general.

Last week, researchers at DeepMind presented two papers that lay claim to foundations for general artificial intelligence. No firm conclusions can be drawn yet, but the first results are encouraging: in some areas, the AI has already surpassed human abilities.

The subject of both DeepMind papers is relational reasoning, a critical cognitive ability that allows people to make comparisons between different objects or ideas: which object is bigger or smaller, which is to the left and which is to the right. People resort to relational reasoning every time they try to solve a problem, but scientists have not yet figured out how to give AI this deceptively simple ability.

The DeepMind researchers took two different routes. One team trained a neural network, a type of AI architecture loosely modeled on the human brain, on CLEVR, a dataset of simple, static 3D objects. The other trained a neural network to understand how two-dimensional objects change over time.

In the CLEVR test, the neural network was presented with sets of simple objects such as pyramids, cubes, and spheres. The scientists then asked the AI questions in natural language, such as "Is the cube made of the same material as the cylinder?" Remarkably, the network answered relational questions about CLEVR scenes correctly 95.5% of the time, surpassing even humans, whose accuracy was 92.6%.
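The core idea behind this kind of model is to score every pair of objects in the scene against the question and then combine those scores into an answer. Here is a minimal sketch of that pairwise pattern in plain Python; the attribute dictionaries and the functions `g` and `f` are toy stand-ins for the learned feature vectors and neural networks in DeepMind's actual system:

```python
from itertools import combinations

# Toy scene: each object is a dict of attributes, standing in for the
# learned feature vectors a real relation network would use.
scene = [
    {"shape": "cube", "material": "metal"},
    {"shape": "cylinder", "material": "metal"},
    {"shape": "sphere", "material": "rubber"},
]

def g(obj_a, obj_b, question):
    # Score one pair of objects against the question: 1.0 if the pair
    # matches the two shapes asked about AND shares a material.
    if {obj_a["shape"], obj_b["shape"]} == set(question["shapes"]):
        return 1.0 if obj_a["material"] == obj_b["material"] else 0.0
    return 0.0

def f(total):
    # Turn the summed pair scores into a yes/no answer.
    return "yes" if total > 0 else "no"

def relation_network(scene, question):
    # Apply g to every unordered pair, sum, then apply the readout f.
    return f(sum(g(a, b, question) for a, b in combinations(scene, 2)))

# "Is the cube made of the same material as the cylinder?"
print(relation_network(scene, {"shapes": ["cube", "cylinder"]}))  # -> yes
```

The design point is that `g` sees only one pair at a time and the same `g` is reused for every pair, which is what lets the approach scale to scenes with any number of objects.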

In the second study, the DeepMind researchers built a neural network called the Visual Interaction Network (VIN), trained to predict the future state of objects in a video from their previous motion. To do this, the scientists first fed the VIN three consecutive video frames, which the network translated into a state code: a list of vectors, one per object in the frame, encoding each object's position and velocity. The VIN was then fed sequences of such codes, which together allowed it to predict the code for the next frame.
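The shape of that prediction task can be illustrated with a toy version: represent each object's state code as a position and a velocity, and produce the next frame's code from the current one. The constant-velocity update below is a hypothetical stand-in; the real VIN learns far richer dynamics, including interactions between objects, from data:

```python
# Each object's state code: (position, velocity), both 2D vectors.
# A real VIN learns the transition from video; this toy predictor
# just advances each object at constant velocity for one frame.

def predict_next_code(code, dt=1.0):
    next_code = []
    for (px, py), (vx, vy) in code:
        next_code.append(((px + vx * dt, py + vy * dt), (vx, vy)))
    return next_code

frame_code = [((0.0, 0.0), (1.0, 0.5)),   # object 1
              ((2.0, 1.0), (-0.5, 0.0))]  # object 2
print(predict_next_code(frame_code))
# -> [((1.0, 0.5), (1.0, 0.5)), ((1.5, 1.0), (-0.5, 0.0))]
```

Because the predictor outputs a code of the same form it consumes, it can be applied repeatedly to roll the scene forward several frames, which is how such models are evaluated on longer-horizon prediction.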

To train the VIN, the researchers used five different kinds of physical systems in which 2D objects move against "natural image" backgrounds and are subjected to different forces. In one system, for example, the simulated objects interact according to Newton's law of gravitation. In another, the network was shown billiards and asked to predict the future positions of the balls. According to the scientists, the VIN successfully predicted the behavior of the objects in the videos.
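A gravitational system like the first one can be simulated in a few lines. The sketch below does one Euler integration step of Newtonian gravity for a list of 2D bodies; the units, constants, and step size are made up for illustration and are not taken from the paper:

```python
import math

G = 1.0  # gravitational constant in arbitrary toy units

def gravity_step(bodies, dt=0.01):
    # bodies: list of [mass, (px, py), (vx, vy)]
    new = []
    for i, (m_i, (px, py), (vx, vy)) in enumerate(bodies):
        ax = ay = 0.0
        for j, (m_j, (qx, qy), _) in enumerate(bodies):
            if i == j:
                continue
            dx, dy = qx - px, qy - py
            r = math.hypot(dx, dy)
            # Newton: acceleration magnitude G * m_j / r^2,
            # directed toward body j.
            a = G * m_j / (r * r)
            ax += a * dx / r
            ay += a * dy / r
        vx2, vy2 = vx + ax * dt, vy + ay * dt
        new.append([m_i, (px + vx2 * dt, py + vy2 * dt), (vx2, vy2)])
    return new

bodies = [[1.0, (0.0, 0.0), (0.0, 0.0)],
          [1.0, (1.0, 0.0), (0.0, 1.0)]]
bodies = gravity_step(bodies)  # both bodies now accelerate toward each other
```

Running a simulator like this frame by frame produces exactly the kind of object trajectories the VIN is asked to predict from video.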

This work represents an important step toward general AI, but a great deal remains to be done before artificial intelligence can take over the world. And besides, superhuman performance does not imply superhuman intelligence.

Not yet, anyway.

Ilya Hel

