Artificial intelligence is not as smart as you (and Elon Musk) believe

In March 2016, AlphaGo, a program developed by DeepMind, defeated Lee Sedol, at that time the world’s best player of the complex logic game Go. The event became one of the defining moments in the history of the technology industry, alongside the victory of IBM’s Deep Blue computer over world chess champion Garry Kasparov in 1997 and the win of IBM’s Watson supercomputer on the quiz show Jeopardy! in 2011.

Yet despite these victories, impressive as they may be, we are largely talking about trained algorithms and brute computing power rather than actual artificial intelligence. Rodney Brooks, a former professor of robotics at the Massachusetts Institute of Technology and a co-founder of iRobot and, later, Rethink Robotics, says that training an algorithm to play a complex strategic game is not intelligence. At least not the kind we attribute to humans.

However well AlphaGo performs at its task, Brooks explained, it is in fact incapable of anything else. Moreover, it is configured so that it can play Go only on the standard 19 x 19 board. In an interview with TechCrunch, Brooks recounted a recent conversation with the DeepMind team that turned up one interesting detail. Asked what would happen if the tournament board were enlarged to 29 x 29 cells, the AlphaGo team admitted to him that even such a slight change to the playing field “would be the end of us.”
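AlphaGo’s real networks are vastly more sophisticated, of course, but the rigidity itself is easy to illustrate. The minimal sketch below is purely hypothetical (every name in it is invented): a toy evaluation function whose weights are sized for a 19 x 19 board cannot even accept a 29 x 29 position as input, let alone judge it sensibly.

```python
import numpy as np

BOARD = 19  # the standard Go board size the model was built around

rng = np.random.default_rng(0)
# Toy stand-in for a trained position-evaluation network: one dense
# layer whose weight matrix is hard-sized to 19 * 19 = 361 inputs.
W = rng.standard_normal((BOARD * BOARD, 1))

def evaluate(position: np.ndarray) -> float:
    """Score a board position; only works for the shape it was built for."""
    return (position.reshape(1, -1) @ W).item()

print(evaluate(np.zeros((19, 19))))  # fine: matches the trained input size

try:
    evaluate(np.zeros((29, 29)))     # 841 inputs into a 361-input model
except ValueError as err:
    print("29x29 board rejected:", err)
```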

“I think people see how well an algorithm copes with one problem and probably assume right away that it can perform others just as effectively. But the fact is, it can’t,” Brooks commented.

Brute-force intelligence

In May of this year, in an interview with Devin Coldewey at TechCrunch Disrupt, Kasparov noted that developing a computer capable of playing chess at the world-class level is one thing, but calling such a computer artificial intelligence is quite another, because it is nothing of the sort. It is just a machine that throws all of its computing power at the one problem it handles best.

“In chess, machines win because of their capacity for deep calculation. They can become completely invincible given a huge database, very fast hardware and smarter algorithms. But they lack understanding. They don’t recognize strategic patterns. Machines have no purpose,” Kasparov said.

Gill Pratt, CEO of the Toyota Research Institute, a Toyota division working on artificial intelligence and its applications in home robots and driverless vehicles, also gave an interview to TechCrunch at its Robotics Session. In his view, the fears we hear from a wide range of people, including Elon Musk, who recently called artificial intelligence an “existential threat to humanity,” may stem from nothing more than the dystopian descriptions of the world that science fiction offers us.

“Our current deep learning systems are only as good at their tasks as we have made them. In reality they are quite specialized and tiny in scale. So I think it’s important, whenever this topic comes up, to mention both how good they are and how ineffective they actually are, and how far we are from the moment these systems could begin to pose the threat Elon Musk and others are talking about,” Pratt commented.

Brooks, in turn, noted at the TechCrunch Robotics Session that people in general tend to assume that if an algorithm can cope with task “x”, it must be as smart as a human.

“I think the reason people, including Elon Musk, make this mistake is the following. When we see a person cope very well with a task, we understand that they have high competence in the matter. It seems to me that people try to apply the same model to machine learning. And that is the main mistake,” says Brooks.

Facebook CEO Mark Zuckerberg, in a live stream on Sunday, also criticized Elon Musk’s comments, calling them “pretty irresponsible.” According to Zuckerberg, AI will be able to significantly improve our lives. Musk, in turn, chose not to stay silent and replied that Zuckerberg’s understanding of AI is “limited.” The topic is not yet closed, and Musk has promised to respond later, and in more detail, to the attacks from his peers in the IT industry.

Musk, by the way, is not the only one who thinks AI could be a potential threat. Physicist Stephen Hawking and philosopher Nick Bostrom have also voiced concern about artificial intelligence infiltrating the everyday life of humankind. But they are most likely talking about a more general artificial intelligence, the kind being pursued in labs such as Facebook AI Research, DeepMind and Maluuba, rather than the more specialized AI whose first rudiments we can see today.

Brooks also notes that many AI critics do not even work in the field, and suggests that these people simply do not understand how difficult it is to find a solution to every single problem in this area.

“In fact, there are not that many people who consider AI an existential threat: Stephen Hawking, the British astrophysicist Martin Rees… and a few others. The irony is that most of them share one trait: they don’t even work in artificial intelligence,” said Brooks.

“For those of us who work with AI, it is obvious how hard it is to get anything to the level of a finished product.”

The AI misconception

Part of the problem also comes from the very fact that we call it “artificial intelligence.” The truth is that this “intelligence” is nothing like human intelligence, which dictionaries usually define as “the capacity for learning, understanding and adapting to new situations.”

Pascal Kaufmann, CEO of Starmind, a startup that helps other companies use collective human intelligence to solve business problems, has been studying neurobiology for the past 15 years. The human brain and the computer, Kaufmann says, work quite differently, and comparing them would be an obvious mistake.

“The analogy that the brain works like a computer is very dangerous and stands in the way of progress in AI,” says Kaufmann.

The expert also believes that we will not get very far in understanding human intelligence if we keep thinking of it in terms of technology.

“It is a misconception that algorithms work like the human brain. People simply like algorithms, and so they assume the brain can be described in terms of them. I believe that is fundamentally wrong,” adds Kaufmann.

If something goes wrong

There are many examples of AI algorithms being nowhere near as smart as we tend to think. One of the most infamous is Tay, a chatbot created by Microsoft’s AI development team that spun out of control last year. It took less than a day to turn the bot into a committed racist. Experts say this can happen to any AI system when it is given bad examples to imitate. In Tay’s case, the bot fell under the influence of racist and otherwise offensive language, and since it had been programmed to “learn” and “mirror behavior,” it soon escaped the researchers’ control.
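Tay’s internals were never published, so the sketch below is purely hypothetical (the `MirrorBot` class and its methods are invented for illustration). It only shows the general failure mode of a bot that “mirrors behavior”: with no filter between input and training material, whoever floods the bot controls what it says.

```python
import random

class MirrorBot:
    """Hypothetical toy chatbot that 'learns' by replaying user phrases."""

    def __init__(self):
        self.phrases: list[str] = []  # every message ever received

    def learn(self, message: str) -> None:
        # No filtering step: every input becomes future output material.
        self.phrases.append(message)

    def reply(self) -> str:
        # The bot's 'personality' is just a sample of what it was fed.
        return random.choice(self.phrases) if self.phrases else "Hello!"

bot = MirrorBot()
bot.learn("I love puppies")
print(bot.reply())  # benign, because the inputs so far are benign

# A coordinated group flooding the bot with abuse now dominates its output:
for _ in range(1000):
    bot.learn("<offensive message>")
print(bot.reply())  # almost certainly "<offensive message>"
```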

In a widely cited study, researchers at Cornell University and the University of Wyoming found that it is very easy to fool algorithms trained to identify digital images. They discovered that an image which looked like “scrambled nonsense” to people was classified by an algorithm as a picture of some everyday object, such as a school bus.

According to an MIT Technology Review article describing the project, it is not clear why the algorithms can be fooled in the way the researchers managed. What is clear is that humans have learned to judge whether what is in front of them is a coherent picture or unintelligible noise, whereas algorithms, which analyze pixels, are easier to manipulate and deceive.
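The original study evolved images against real deep networks; the hedged sketch below substitutes a made-up linear “classifier” so the script is self-contained, and uses simple hill-climbing instead of the paper’s evolutionary algorithm (all names are invented). The idea it demonstrates is the same: because the model only scores pixels, random noise can be nudged, pixel by pixel, until the model is highly confident it is looking at an everyday object.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained image classifier. In the study this was a deep
# network; here it is an arbitrary fixed linear scorer squashed to (0, 1),
# just so the search below has something to climb.
WEIGHTS = rng.standard_normal((28, 28))

def bus_confidence(image: np.ndarray) -> float:
    """Hypothetical classifier confidence that `image` shows a school bus."""
    return 1.0 / (1.0 + np.exp(-np.sum(WEIGHTS * image) / 28.0))

def make_fooling_image(steps: int = 20_000):
    """Hill-climb random pixel mutations to raise classifier confidence.

    A crude stand-in for the evolutionary search used in the study: the
    result still looks like noise to a human but scores ever higher.
    """
    image = rng.random((28, 28))          # 'scrambled nonsense'
    best = bus_confidence(image)
    for _ in range(steps):
        candidate = image.copy()
        ys = rng.integers(0, 28, size=5)  # pick a few pixels...
        xs = rng.integers(0, 28, size=5)
        candidate[ys, xs] = rng.random(5)  # ...and mutate them
        score = bus_confidence(candidate)
        if score > best:                  # keep only improving mutations
            image, best = candidate, score
    return image, best

noise, score = make_fooling_image()
print(f"classifier confidence on noise: {score:.3f}")  # far above chance
```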

With self-driving cars, things are harder still. There are situations a human understands intuitively and prepares for, and training a car to handle them will be very difficult. In a long blog post published in January of this year, Rodney Brooks gives several examples of such situations, including one in which a driverless vehicle approaches a stop sign next to a pedestrian crosswalk in the city, where an adult stands talking with a child.

The algorithm will most likely be configured to wait for the pedestrians to cross the road. But what if those pedestrians never intend to cross, because they are waiting for, say, a school bus? A human driver in this situation could signal the pedestrians, who in turn would wave a hand to indicate that the car may pass. A driverless car might simply be stuck, endlessly waiting for the people to cross the road, because the algorithm has no understanding of such uniquely human signals, Brooks writes.
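A minimal, purely hypothetical sketch of why the deadlock happens (the `Pedestrian` fields and the `may_proceed` rule are invented for illustration): the only safe rule a programmer can easily write, “wait while anyone is standing at the crosswalk,” has no channel for the wave that means “go ahead, we are not crossing.”

```python
from dataclasses import dataclass

@dataclass
class Pedestrian:
    near_crosswalk: bool  # standing at the curb
    crossing: bool        # actually in the roadway

def may_proceed(pedestrians: list[Pedestrian]) -> bool:
    """Hypothetical yield rule: wait while anyone is at the crosswalk."""
    return not any(p.near_crosswalk or p.crossing for p in pedestrians)

# An adult and a child chatting at the corner, waiting for a school bus:
scene = [Pedestrian(near_crosswalk=True, crossing=False),
         Pedestrian(near_crosswalk=True, crossing=False)]

# The control loop re-evaluates the same unchanging scene on every tick.
# Nothing in the rule can register a wave meaning "you may pass", so the
# condition stays False forever and the car never moves.
print(may_proceed(scene))  # False -> indefinite wait
```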

Each of these examples shows how far artificial intelligence algorithms still have to advance. How far the developers of general AI will get is another question. There are things people cope with easily that are torture to train an algorithm to do. Why? Because, unlike algorithms, we humans are not limited in our learning to a set of narrowly defined tasks.

Nikolai Khizhnyak

