Some machine learning researchers believe that neural networks will become increasingly uncontrollable, may eventually spin out of our control entirely, and could threaten humanity with disaster or even extinction. Their colleagues, on the contrary, believe that the dangers of modern intelligent systems do not include human extinction, and that a number of scientists are simply alarmists. So who is right? And how can we tell? One possible answer is offered by the authors of a new paper from the Forecasting Research Institute, who asked experts on AI and other existential risks, as well as specialists with a track record of successfully forecasting world events, to assess the danger posed by artificial intelligence. Here is what they found out.
AI (artificial intelligence) refers to computer systems that can perform tasks typically requiring human intelligence, such as speech recognition and decision making.
Contents
- 1 AI will destroy humanity
- 2 Neural networks are a useful tool
- 3 Who is right?
- 4 Global risks
- 5 Disagreements and answers
AI will destroy humanity
We have already talked more than once about how AI systems could pose a serious threat to the future of humanity. Science fiction writers simply adore this topic, and films such as «I, Robot» and «Terminator» have long since achieved cult status. Many eminent scientists, meanwhile, are seriously concerned about the potential risks posed by artificial intelligence.
One of those who openly and consistently expressed concern about the threat of AI to humanity was Professor Stephen Hawking, who repeatedly stated that the development of AI could be “either the worst or the best event in human history.” He warned that if an AI endowed with superhuman intelligence were created, it could well get out of control and destroy our civilization.
Professor Nick Bostrom, director of the Future of Humanity Institute at Oxford University, also warns of the possible dangers of AI. In his book «Superintelligence: Paths, Dangers, Strategies» he describes scenarios in which the creation of super-intelligent AI could lead to catastrophic consequences for humanity. Bostrom emphasizes the need to develop security measures and ethical principles to guide the development of AI.
Another prominent researcher advocating a cautious approach to AI development is Stuart Russell, a professor of computer science at the University of California, Berkeley. Russell is concerned that AI may become so intelligent that we will not be able to predict its actions. He also believes it is necessary to develop safe and ethically sound methods for creating and controlling AI.
As you can see, concerns about the potential threats of AI have been voiced by many scientists and technology experts; we have discussed the views of Eliezer Yudkowsky, one of the most famous critics of AI, here.
Neural networks are a useful tool
Neural networks are AI systems inspired by the structure and function of the human brain; they can learn and adapt to perform complex tasks.
Among the scientists and experts who believe that artificial intelligence does not pose a serious threat to humanity, several prominent figures can be identified. These experts highlight the potential of AI to improve lives and solve many of the problems humanity faces.
One of them is Rodney Brooks, former director of MIT's Computer Science and Artificial Intelligence Laboratory and a co-founder of iRobot. Brooks is known for his skepticism of apocalyptic predictions about the future of AI and argues that fears that AI will destroy humanity rest on exaggeration and a poor understanding of the current state of the technology. In his opinion, we are still far from creating an AI that could pose a real threat.
Another well-known expert with a positive view of AI development is Andrew Ng, a professor at Stanford University and a co-founder of Coursera. Ng sees AI as a powerful tool for improving people's lives.
Current advances in AI, such as image recognition and natural language processing, are bringing enormous benefits to medicine, education, and other fields, he says; fears of AI are often overblown, and the focus should be on more pressing issues such as ensuring the ethical use of technology and protecting data.
Yann LeCun, chief AI scientist at Meta (formerly Facebook) and one of the pioneers of deep learning, also expresses optimism about AI. LeCun believes that AI can bring significant benefits to society if it is properly developed and used. He emphasizes the importance of an interdisciplinary approach and collaboration between scientists, engineers, and society to create safe and useful AI systems. LeCun also points to the need for realism about the capabilities and limitations of current AI technologies.
Another example of a positive approach to AI is Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence. Etzioni argues that AI does not pose an existential threat to humanity. He believes that current AI development is aimed at solving specific problems and improving people's lives. In his opinion, the focus should be on regulation and the ethical aspects of using AI, not on fears about the possible destruction of humanity.
Who is right?
So, how can we tell which scientists are right, and what threat modern intelligent systems actually pose to the world? Researchers from the Forecasting Research Institute tried to answer this question by asking machine learning experts and professional forecasters to assess the danger posed by AI.
The result turned out to be no less contentious than the opinions of the scientists presented above: the two groups disagreed on many points. Moreover, the risk experts in the study were generally much more nervous than the forecasters and assigned far higher probabilities to disaster.
The authors of the study, published on the institute's official website, wanted to know why the experts disagreed so strongly, so they organized what they called an “adversarial collaboration”: both groups of specialists spent many hours (an average of 31 hours for the experts and 80 hours for the forecasters) reading new material about AI and discussing these questions, with a moderator, with people who hold opposing views.
The idea was to see whether giving each group more information and the other side's best arguments would make them change their minds. The researchers were also curious to find “key points”: questions that help explain a group's beliefs, and information that might change their minds. The authors of the work focused on disagreements over the potential of AI to either destroy humanity or cause economic collapse.
The results, as mentioned above, were mixed: supporters of the idea that AI poses a threat to humanity lowered their probability of catastrophe before 2100 from 25% to 20%, while the optimists raised theirs from 0.1% to 0.12%. In other words, both groups stayed essentially where they started, and the gulf between them barely narrowed.
The report is impressive, however, because it represents a rare attempt to bring together intelligent, well-informed people who disagree. While the study's findings don't resolve the disagreements among the experts, they do shed light on where those disagreements come from in the first place.
Global risks
Of course, there are a number of other AI risks to worry about, many of which we already face today. For example, existing AI systems sometimes exhibit worrying biases and are very good at deceiving users, which means they can indeed be used to cause harm. But that harm, while certainly serious, pales in comparison to a scenario in which we lose control of AI and it kills everyone.
So why do machine learning experts disagree so widely? The authors of the scientific paper believe that this is not due to differences in access to information or lack of awareness of different points of view.
If that were the case, the adversarial collaboration, which consisted of extensive exposure to new information and opposing opinions, would have changed people's beliefs far more dramatically, the paper says.
Interestingly, much of the disagreement was also not explained by differing expectations about what will happen with AI in the next few years. When the researchers paired optimists with pessimists and compared their probability estimates of disaster, the average difference was 22.7%. What mattered most, however, were differences in views about the long-term future.
Optimists generally believed that creating a super-AI would take longer than pessimists did. Many cited the need for robotics, not just software artificial intelligence, to reach a human level, and argued that this would be much more difficult.
It's one thing to write code and text on a laptop, but quite another for a machine to learn to flip a pancake, clean a tiled floor, or perform any of the many other physical tasks at which humans still outperform robots. Ultimately, language models are just models of language, not digital hyper-humanoid Machiavellians working towards their goals.
Disagreements and answers
The most interesting source of disagreement the researchers identified was what they call “fundamental ideological differences.” This is a fancy way of saying that they disagree about who has the burden of proof in a discussion.
Both groups agree that “extraordinary claims require extraordinary evidence,” but they disagree about which claims are extraordinary. Is it extraordinary to believe that artificial intelligence will wipe out humanity, which has existed for hundreds of thousands of years? Or is it extraordinary to believe that humanity will go on surviving alongside an artificial intelligence smarter than ourselves? the researchers conclude.
This is hardly the most encouraging conclusion to draw from the study. Disagreements rooted in specific expectations about the next few years are easier to resolve, because they hinge on how those years actually play out, rather than on deep, hard-to-change differences in people's beliefs about how the world works and about who bears the burden of proof.
One way or another, we do not know the future, and artificial intelligence, in any case, will continue to develop. In short, there are still no simple answers to complex questions.