Artificial general intelligence could make the modern world a far more attractive place to live, researchers say. It could cure cancer, improve health care around the world, and free us from the routine tasks on which we spend most of our lives. These prospects were the main topic of conversation among the engineers, investors, researchers, and politicians gathered at a recent joint conference on the development of human-level artificial intelligence.
But the event also drew those who see in artificial intelligence not only benefits but a potential threat. Some voiced concerns about rising unemployment: with the arrival of full-fledged AI, people could lose their jobs to more flexible, tireless, super-intelligent robots. Others hinted at the possibility of a machine uprising if we let things slide. But where exactly should we draw the line between false alarmism and genuine concern about our future?
The portal Futurism put this question to five experts in the field of artificial intelligence, trying to find out what the creators of AI fear most in their own creation.
Kenneth Stanley, professor at the University of Central Florida, senior engineering manager and researcher at Uber's artificial intelligence lab:
"I think the most obvious concern is that AI will be used against people. And in fact there are many areas where that could happen. We must do everything we can to make sure this bad side never comes out. It is very hard to find the right answer to the question of how to keep AI accountable for its actions. The issue is multifaceted and cannot be considered from a scientific standpoint alone. In other words, finding a solution will require the participation of society as a whole, not just the academic community."
On how to develop safe AI:
"Any technology can be used both for good and for harm, and artificial intelligence is just another example. People have always struggled to keep new technologies from falling into the wrong hands and being used for nefarious purposes. I believe that with AI we will be able to cope with this task. The key is placing the emphasis correctly and finding a balance in how the technology is used; that can protect us from many potential problems. A more specific solution I probably cannot offer. The only thing I would say is that we must understand and accept responsibility for the impact AI can have on society as a whole."
Irakli Beridze, head of the Centre for Artificial Intelligence and Robotics at the United Nations Interregional Crime and Justice Research Institute (UNICRI):
"I think the most dangerous thing about AI has to do with the pace of its development: how quickly it will be created and how quickly we will be able to adapt to it. If that balance is disturbed, we may face problems."
On terrorism, crime, and other sources of risk:
"From my point of view, the main danger lies in the possibility that AI will be put to use by criminal networks and large terrorist organizations seeking to destabilize the world order. Cyberterrorism and drones armed with missiles and bombs are already a reality. In the future, robots equipped with AI systems may be added to that list. This could become a serious problem.
Another big risk of the mass adoption of AI is the likely loss of people's jobs. If those losses are massive and we have no adequate response, it will be a very dangerous problem."
"But these are only the negative sides of the technology. I am convinced that at its core AI is not a weapon. It is a tool, a very powerful tool, and a powerful tool can be used for both good and bad purposes. Our task is to understand and minimize the risks associated with its use, and to ensure it is used only for good. We need to focus on deriving the maximum positive benefit from this technology."
John Langford, principal researcher at Microsoft:
"I think the main danger will come from drones. Automated drones could become a real problem. The computational power carried by autonomous weapons today is not yet enough for extraordinary tasks, but I can easily imagine that in five to ten years autonomous weapons will be running supercomputer-level computations on board. Drones are used in combat today, but they are still operated by humans. After some time, a human operator will no longer be necessary: the machine will be effective enough to carry out its assigned tasks on its own. That is what worries me."
Hava Siegelmann, program manager in DARPA's Microsystems Technology Office:
"Any technology can be misused. I think it all depends on whose hands the technology ends up in. I don't believe there are bad technologies; I believe there are bad people. It all comes down to who has access to these technologies and how they use them."
Tomas Mikolov, researcher at Facebook's AI lab:
"When something attracts interest and investment, there are always people willing to abuse it. What upsets me is that some people try to sell 'AI' and paint vivid pictures of the problems it will be able to solve, even though no real AI has been created yet.
All these fly-by-night startups promise mountains of gold and point to supposedly working AI, when in reality all they show us are improved or optimized versions of existing technology. In most cases, hardly anyone had bothered to improve or optimize those technologies before, because they are useless. Take the chatbots that are passed off as artificial intelligence. After tens of thousands of hours spent optimizing the execution of a single task, these startups come to us and claim they have achieved something no one else could. It is ridiculous.
Frankly, most of the recent so-called technological breakthroughs from such organizations, whose names I would rather not mention, interested no one before, not because no one else could have done them, but simply because those technologies do not generate any revenue. They are completely useless. It is closer to quackery, especially when AI is presented as a tool for optimizing the solution of a single, narrowly focused task. It cannot be scaled to anything beyond the simplest tasks.
Anyone who starts to examine such a system even slightly critically immediately runs into problems that contradict the sweet claims of these companies."
Do you agree with the experts? Share your thoughts in our official Hi-News.ru Telegram chat.
Five technology experts shared their fears and concerns about AI
Nikolai Khizhnyak