DeepMind's artificial intelligence has learned to generate images

The British company DeepMind, which became part of Google in 2014, is constantly working to improve artificial intelligence. In June 2018, its staff presented a neural network capable of creating three-dimensional images from two-dimensional ones. In October, the developers went further: they created a neural network called BigGAN that generates images of nature, animals, and objects that are difficult to distinguish from real photographs.

As in other projects that create artificial images, this technology is based on generative adversarial networks (GANs). Recall that a GAN consists of two parts: a generator and a discriminator. The first creates an image, and the second evaluates its similarity to samples of the ideal result.
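The generator-versus-discriminator setup above can be sketched in a few lines. This is a hypothetical toy illustration, not DeepMind's actual BigGAN code: the "generator" is a single affine transform of noise, the "discriminator" is a logistic score, and the data are one-dimensional numbers rather than images.

```python
import math
import random

random.seed(0)

def generator(z, w=1.0, b=0.0):
    """Toy generator: maps a noise value to a fake sample."""
    return w * z + b

def discriminator(x, v=1.0, c=-2.0):
    """Toy discriminator: logistic score, close to 1 means 'looks real'."""
    return 1.0 / (1.0 + math.exp(-(v * x + c)))

# "Samples of the ideal result": real data clustered near 4.0.
real = [random.gauss(4.0, 0.5) for _ in range(64)]
# The generator turns standard Gaussian noise into fake samples.
fake = [generator(random.gauss(0.0, 1.0)) for _ in range(64)]

# Discriminator objective: score real samples high and fakes low.
d_loss = (-sum(math.log(discriminator(x)) for x in real) / len(real)
          - sum(math.log(1.0 - discriminator(x)) for x in fake) / len(fake))

# Generator objective: produce fakes the discriminator scores as real.
g_loss = -sum(math.log(discriminator(x)) for x in fake) / len(fake)

print(f"d_loss={d_loss:.3f}  g_loss={g_loss:.3f}")
```

In real training the two losses are minimized in alternation, each network's parameters being updated against the other's, which is what drives the generator's output toward the real data.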

"In this work, we wanted to blur the line between images created by AI and photos from the real world. We found that existing generation methods are sufficient for this."

To teach BigGAN to create pictures of butterflies, dogs, and food, the researchers used different sets of images. The ImageNet database was used for training first, followed by a broader set, JFT-300M, containing 300 million images divided into 18,000 categories.

Training BigGAN took two days and required 128 of Google's tensor processing units, which are designed specifically for machine learning.

A professor from Heriot-Watt University in Scotland also took part in developing the neural network. The technology is described in detail in the article "Large Scale GAN Training for High Fidelity Natural Image Synthesis."

In September, researchers from Carnegie Mellon University used generative adversarial networks to create a system that transfers the facial expressions of one person onto the face of another.

What uses might humanity find for such neural networks? Share your ideas in the comments or in our Telegram chat.

Ramis Ganiev