How We Can Prepare Now for Catastrophically Dangerous AI—and Why We Can’t Wait

Illustration: Elena Scotti (Gizmodo)
By George Dvorsky
Filed to: Artificial superintelligence

Artificial intelligence in its current form is mostly harmless, but that’s not going to last. Machines are getting smarter and more capable by the minute, leading to concerns that AI will eventually match, and then exceed, human levels of intelligence—a prospect known as artificial superintelligence (ASI).

As a technological prospect, ASI will be unlike anything we’ve ever encountered before. We have no prior experience to guide us, which means we’re going to have to put our collective heads together and start preparing. It’s not hyperbole to say humanity’s existence is at stake—as hard as that is to hear and admit—and given there’s no consensus on when we can expect to meet our new digital overlords, it’s incumbent upon us to get ready for this possibility now rather than waiting until some arbitrary date in the future.

On the horizon

Indeed, the return of an AI winter, a period in which innovations in AI slow down appreciably, no longer seems likely. Fueled primarily by the power of machine learning, we’ve entered a golden era of AI research, with no apparent end in sight. In recent years we’ve witnessed startling progress in areas such as independent learning, foresight, autonomous navigation, computer vision, and video gameplay. Computers can now execute stock trades in milliseconds, automated cars are increasingly appearing on our streets, and artificially intelligent assistants have encroached into our homes. The coming years will bring further advancements, including AI that can learn through its own experiences, adapt to novel situations, and comprehend abstractions and analogies.

“By definition, ASI will understand the world and humans better than we understand it, and it’s not obvious at all how we could control something like that.”

We don’t know when ASI will arise or what form it’ll take, but the signs of its impending arrival are starting to appear.

Last year, for example, in a bot-on-bot Go tournament, DeepMind’s AlphaGo Zero (AGZ) defeated the regular AlphaGo by a score of 100 games to zero. Incredibly, AGZ required just three days to train itself from scratch, during which time it acquired the equivalent of thousands of years of human Go-playing experience. As the DeepMind researchers noted, it’s now “possible to train [machines] to superhuman level, without human examples or guidance, given no knowledge of the domain beyond basic rules.” It was a stark reminder that developments in this field are susceptible to rapid, unpredictable, and dramatic improvements, and that we’ve entered a new era—the age of superintelligent machines.
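
AlphaGo Zero’s real system pairs deep neural networks with Monte Carlo tree search, but the core idea of learning from pure self-play, given nothing beyond the rules, can be illustrated with a toy. The sketch below is a simplified, hypothetical example (tabular value learning for tic-tac-toe), not DeepMind’s method; the point is only that no human game records appear anywhere in the loop.

```python
# Toy sketch of learning from self-play with no human examples:
# tabular value learning for tic-tac-toe. Illustrative only; this is
# not AlphaGo Zero's algorithm, just the same "rules only" idea.
import random
from collections import defaultdict

Q = defaultdict(float)        # (state, move) -> estimated value for the player to move
EPSILON, ALPHA = 0.1, 0.5     # exploration rate and learning rate

def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if "." not in board else None

def choose(board):
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if random.random() < EPSILON:
        return random.choice(moves)                           # explore
    return max(moves, key=lambda m: Q[("".join(board), m)])   # exploit current estimates

def self_play_game():
    board, player, history = ["."] * 9, "X", []
    while True:
        move = choose(board)
        history.append(("".join(board), move, player))
        board[move] = player
        result = winner(board)
        if result is not None:
            # Credit +1 to the winner's moves, -1 to the loser's, 0 for a draw.
            for state, m, p in history:
                reward = 0.0 if result == "draw" else (1.0 if p == result else -1.0)
                Q[(state, m)] += ALPHA * (reward - Q[(state, m)])
            return
        player = "O" if player == "X" else "X"

for _ in range(50_000):   # both "players" share and improve the same value table
    self_play_game()
```

After enough games against itself, the shared value table encodes competent play that no human ever demonstrated to it, which, at vastly greater scale, is the essence of what made AGZ’s result so striking.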

“According to several estimates, supercomputers can now—or in the near future—do more elementary operations per second than human brains, so we might already have the necessary hardware to compete with brains,” said Jaan Tallinn, a computer programmer, founding member of Skype, and co-founder of the Centre for the Study of Existential Risk, a research center at the University of Cambridge concerned with human extinction scenarios. “Also, a lot of research effort is now going into ‘meta learning’—that is, developing AIs that would be able to design AIs. Therefore, progress in AI might at some point simply decouple from human speeds and timelines.”

These and other developments will likely lead to the introduction of AGI, otherwise known as artificial general intelligence. Unlike artificial narrow intelligence (ANI), which is super good at solving specialized tasks, like playing Go or recognizing human faces, AGI will exhibit proficiency across multiple domains. This powerful form of AI will be more humanlike in its abilities, adapting to new situations, learning a wide variety of skills, and performing an extensive variety of tasks. Once we achieve AGI, the step to superintelligence will be a short one—especially if AGIs are told to create increasingly better versions of themselves.

“It is difficult to predict technological advancement, but a few factors indicate that AGI and ASI might be possible within the next several decades,” said Yolanda Lannquist, AI policy researcher at The Future Society, a non-profit think tank concerned with the impacts of emerging technologies. She points to companies currently working toward AGI, including Google DeepMind, GoodAI, Araya Inc., Vicarious, and SingularityNET, along with smaller teams and individual researchers at universities. At the same time, Lannquist said research on ANI will likely lead to breakthroughs in AGI, with major players such as Facebook, Google, IBM, Microsoft, Amazon, Apple, OpenAI, Tencent, Baidu, and Xiaomi investing heavily in AI research.

With AI increasingly entering into our lives, we’re starting to see unique problems emerge, particularly in areas of privacy, security, and fairness. AI ethics boards are starting to become commonplace, for example, along with standards projects to ensure safe and ethical machine intelligence. Looking ahead, we’re going to have to deal with such developments as massive technological unemployment, the rise of autonomous killing machines (including weaponized drones), AI-enabled hacking, and other threats.

Beyond human levels of comprehension and control

But these are problems of the present and the near future, and most of us can agree that measures should be taken to mitigate these negative aspects of AI. More controversial, however, is the suggestion that we begin preparing for the advent of artificial superintelligence—that heralded moment when machines surpass human levels of intelligence by several orders of magnitude. What makes ASI particularly dangerous is that it will operate beyond human levels of control and comprehension. Owing to its tremendous reach, speed, and computational proficiency, ASI will be capable of accomplishing virtually anything it’s programmed or decides to do.

Conjuring ASI-inspired doomsday scenarios is actually quite beside the point; we already have it within our means to destroy ourselves.

There are many extreme scenarios that come to mind. Armed with superhuman powers, an ASI could destroy our civilization by accident, through misaligned goals, or by deliberate design. For instance, it could turn our planet into goo after a simple misunderstanding of its goals (the allegorical paperclip-maximizer scenario is a good example), remove humanity as a troublesome nuisance, or wipe out our civilization and infrastructure as it strives to improve itself ever further through what AI theorists call recursive self-improvement. Should humanity embark upon an AI arms race with rival nations, a weaponized ASI could get out of control, whether during peacetime or during war. An ASI could intentionally end humanity by destroying our planet’s atmosphere or biosphere with self-replicating nanotechnology. Or it could launch all our nuclear weapons, spark a Terminator-style robopocalypse, or unleash powers of physics we don’t even know about. Using genetics, cybernetics, nanotechnology, or other means at its disposal, an ASI could reengineer us into blathering, mindless automatons, thinking it was doing us some sort of favor in an attempt to pacify our violent natures. Rival ASIs could wage war against one another in a battle for resources, scorching the planet in the process.

Clearly, we have no shortage of ideas, but conjuring ASI-inspired doomsday scenarios is actually quite beside the point; we already have it within our means to destroy ourselves, and it won’t be difficult for an ASI to come up with its own ways to end us.

The prospect admittedly sounds like science fiction, but a growing number of prominent thinkers and concerned citizens are starting to take this possibility very seriously.

Tallinn said once ASI emerges, it’ll confront us with entirely new types of problems.

“By definition, ASI will understand the world and humans better than we understand it, and it’s not obvious at all how we could control something like that,” said Tallinn. “If you think about it, what happens to chimpanzees is no longer up to them, because we humans control their environment by being more intelligent. We should work on AI alignment [AI that’s friendly to humans and human interests] to avoid a similar fate.”

Katja Grace, editor of the AI Impacts blog and a researcher at the Machine Intelligence Research Institute (MIRI), said ASI will be the first technology to potentially surpass humans on every dimension. “So far, humans have had a monopoly on decision-making, and therefore had control over everything,” she told Gizmodo. “With artificial intelligence, this may end.”

Stuart Russell, a professor of computer science and an expert in artificial intelligence at the University of California, Berkeley, said we should be concerned about the potential for ASI for one simple reason: intelligence is power.

“Evolution and history do not provide good examples of a less powerful class of entities retaining power indefinitely over a more powerful class of entities,” Russell told Gizmodo. “We do not yet know how to control ASI, because, traditionally, control over other entities seems to require the ability to out-think and out-anticipate—and, by definition, we cannot out-think and out-anticipate ASI. We have to take an approach to designing AI systems that somehow sidesteps this basic problem.”

Be prepared

Okay, so we have our work cut out for us. But we shouldn’t despair, or worse, do nothing—there are things we can do in the here and now.

Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies and a philosopher at Oxford’s Future of Humanity Institute, said no specific protocols or sweeping changes to society are required today, but he agreed we can do consequential research in this area.

“A few years ago the argument that we should do nothing may have made sense, as we had no real idea on how we should research this and make meaningful progress,” Bostrom told Gizmodo. “But concepts and ideas are now in place, and we can break it down into chunks worthy of research.” He said this could take the form of research papers, think tanks, seminars, and so on. “People are now doing important research work in this area,” said Bostrom. “It’s something we can hammer away at and make incremental progress.”

Russell agreed, saying we should think about ways of developing and designing AI systems that are provably safe and beneficial, regardless of how intelligent the components become. He believes this is possible, provided the components are defined, trained, and connected in the right way.

Indeed, this is something we should be doing already. Prior to the advent of AGI and ASI, we’ll have to contend with the threats posed by more basic, narrow AI—the kind that’s already starting to appear in our infrastructure. By solving the problems posed by current AI, we can learn valuable lessons and set important precedents that pave the way to building safe, and even more powerful, AI in the future.

Take the prospect of autonomous vehicles, for example. Once deployed en masse, self-driving cars will be monitored, and to a certain extent controlled, by a central intelligence. This overseeing “brain” will send software updates to its fleet, advise cars on traffic conditions, and serve as the overarching network’s communication hub. But imagine if someone were to hack into this system and send malicious instructions to the fleet. It would be a disaster on an epic scale. Such is the threat of AI.

“Cybersecurity of AI systems is a major hole,” Lannquist told Gizmodo. “Autonomous systems, such as autonomous vehicles, personal assistant robots, AI toys, drones, and even weaponized systems are subject to cyber attacks and hacking, to spy, steal, delete or alter information or data, halt or disrupt service, and even hijacking,” she said. “Meanwhile, there is a talent shortage in cybersecurity among governments and companies.”

Another major risk, according to Lannquist, is bias in the data sets used to train machine learning algorithms. The resulting models aren’t fit for everyone, she said, leading to problems of inclusion, equality, and fairness, and even the potential for physical harm. An autonomous vehicle or surgical robot, for example, may not be trained on enough images to reliably recognize humans of different skin colors or body sizes. Scale this up to the level of ASI, and the problem becomes exponentially worse.

Commercial face recognition software, for example, has repeatedly been shown to be less accurate on people with darker skin. Meanwhile, a predictive policing algorithm called PredPol was shown to unfairly target certain neighborhoods. And in a truly disturbing case, the COMPAS algorithm, which predicts the likelihood of recidivism to guide sentencing, was found to be racially biased. This is happening today—imagine the havoc and harm an ASI could inflict with greater power, scope, and social reach.
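
One concrete practice that follows from these findings is to audit a trained model’s accuracy per demographic subgroup rather than trusting a single aggregate number. The sketch below is illustrative only: the model object, the data layout, and the gap threshold are hypothetical placeholders, not a reference to any particular vendor’s system.

```python
# Illustrative sketch: auditing a classifier's accuracy per subgroup.
# `model` (anything with a .predict() method), the data layout, and the
# 5 percent gap threshold are hypothetical placeholders.
from collections import defaultdict

def accuracy_by_group(model, examples):
    """examples: iterable of (features, true_label, group) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for features, label, group in examples:
        total[group] += 1
        if model.predict(features) == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def flag_disparities(per_group_accuracy, max_gap=0.05):
    """Return subgroups that trail the best-performing subgroup by more than max_gap."""
    best = max(per_group_accuracy.values())
    return {g: acc for g, acc in per_group_accuracy.items() if best - acc > max_gap}
```

A model that posts a 95 percent headline accuracy can still fail badly for a specific group; breaking the numbers out this way is how such gaps get noticed before deployment rather than after.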

It will also be important for humans to stay within the comprehension loop, meaning we need to maintain an understanding of an AI’s decision making rationale. This is already proving to be difficult as AI keeps encroaching into superhuman realms. This is what’s known as the “black box” problem, and it happens when developers are at a loss to explain the behavior of their creations. Making something safe when we don’t have full understanding of how it works is a precarious proposition at best. Accordingly, efforts will be required to create AIs that are capable of explaining themselves in ways we puny humans can comprehend.

Thankfully, that’s already happening. Last year, for example, DARPA gave $6.5 million to computer scientists at Oregon State University to address this issue. The four-year grant will support the development of new methodologies designed to help researchers make better sense of the digital gobbledygook inside black boxes, most notably by getting AIs to explain to humans why they reached certain decisions, or what their conclusions actually mean.
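
The DARPA-funded methods themselves aren’t something we can reproduce here, but one common family of explanation techniques is simple enough to sketch: perturb each input feature toward a baseline value and measure how much the model’s output shifts, yielding a rough, human-readable ranking of what the decision leaned on. The predict_proba interface, the feature names, and the baseline values below are assumptions made for illustration.

```python
# Rough sketch of a perturbation-based explanation: how much does the model's
# confidence move when each feature is swapped for a baseline value?
# `model` (anything with predict_proba), the feature names, and the baseline
# values are assumed stand-ins, not any specific lab's method.

def explain_prediction(model, instance, baseline, feature_names):
    """Rank features by how much replacing them with a baseline shifts the model's output."""
    base_score = model.predict_proba([instance])[0][1]    # P(positive class) for the original input
    influence = {}
    for i, name in enumerate(feature_names):
        perturbed = list(instance)
        perturbed[i] = baseline[i]                        # knock out one feature at a time
        new_score = model.predict_proba([perturbed])[0][1]
        influence[name] = base_score - new_score          # positive => feature pushed the score up
    return sorted(influence.items(), key=lambda kv: abs(kv[1]), reverse=True)
```

Techniques like this don’t open the black box so much as probe it from the outside, but even that level of insight gives humans something to interrogate when a decision looks wrong.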


We also need to change the corporate culture around the development of AI, particularly in Silicon Valley where the prevailing attitude is to “fail hard and fast or die slow.” This sort of mentality won’t work for strong AI, which will require extreme caution, consideration, and foresight. Cutting corners and releasing poorly thought out systems could end in disaster.

“Through more collaboration, such as AGI researchers pooling their resources to form a consortia, industry-led guidelines and standards, or technical standards and norms, we can hopefully re-engineer this ‘race to the bottom’ in safety standards, and instead have a ‘race to the top,’” said Lannquist. “In a ‘race to the top,’ companies take time to uphold ethical and safety standards. Meanwhile, the competition is beneficial because it can speed up progress towards beneficial innovation, like AI for UN Sustainable Development Goals.” 

At the same time, corporations should consider information sharing, particularly if a research lab has stumbled upon a particularly nasty vulnerability, like an algorithm that can sneak past encryption schemes, spread to domains outside an intended realm, or be easily weaponized.

Changing corporate culture won’t be easy, but it needs to start at the top. To facilitate this, companies should create a new executive position, a Chief Safety Officer (CSO), or something similar, to oversee the development of what could be catastrophically dangerous AI, among other dangerous emerging technologies.

Governments and other public institutions have a role to play as well. Russell said we need well-informed groups, committees, agencies, and other institutions within governments that have access to top-level AI researchers. We also need to develop standards for safe AI system design, he added.

“Governments can incentivize research for AGI safety, through grants, awards, and grand challenges,” added Lannquist. “The private sector or academia can contribute or collaborate on research with AI safety organizations. AI researchers can organize to uphold ethical and safe AI development procedures, and research organizations can set up processes for whistle-blowing.”

Action is also required at the international level. The existential dangers posed by AI are potentially more severe than climate change, yet we still have no equivalent to the Intergovernmental Panel on Climate Change (IPCC). How about an International Panel for Artificial Intelligence? In addition to establishing and enforcing standards and regulations, this panel could serve as a “safe space” for AI developers who believe they’re working on something particularly dangerous. A good rule of thumb would be to stop development and seek counsel from this panel. On a similar note, and as some previous developments in biotechnology have shown, some research findings are too dangerous to share with the general public (e.g. “gain-of-function” studies in which viruses are deliberately mutated to make them more transmissible or virulent). An international AI panel could decide which technological breakthroughs should stay secret for reasons of international security. Conversely, as the rationale behind publishing gain-of-function studies suggests, the open sharing of knowledge could spur the development of proactive safety measures. Given the existential nature of ASI, however, it’s tough to imagine the ingredients of our doom being passed around for all to see. This will be a tricky area to navigate.

On a more general level, we need to get more people working on the problem, including mathematicians, logicians, ethicists, economists, social scientists, and philosophers.

A number of groups have already started to address the ASI problem, including Oxford’s Future of Humanity Institute, MIRI, the UC Berkeley Center for Human-Compatible AI, OpenAI, and Google Brain. Other initiatives include the Asilomar Conference, which has already established guidelines for the safe development of AI, and the Open Letter on AI signed by many prominent thinkers, including the late physicist Stephen Hawking, Tesla and SpaceX founder Elon Musk, and others.

Russell said the general public can contribute as well—but they need to educate themselves about the issues.

“Learn about some of the new ideas and read some of the technical papers, not just media articles about Musk-Zuckerberg smackdowns,” he said. “Think about how those ideas apply to your work. For example, if you work on visual classification, what objective is the algorithm optimizing? What is the loss matrix? Are you sure that misclassifying a cat as a dog has the same cost as misclassifying a human as a gorilla? If not, think about how to do classification learning with an uncertain loss matrix.”
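
To make Russell’s loss-matrix point concrete, here is a minimal sketch of one way to act on it at prediction time: instead of returning the most probable class, return the class that minimizes expected cost, and when the costs themselves are uncertain, return the class whose worst-case expected cost across a set of plausible loss matrices is lowest. The class labels, probabilities, and cost values below are invented for illustration, and this is one possible treatment rather than Russell’s own prescription.

```python
# Illustrative decision rule for classification with an uncertain loss matrix.
# Each matrix is indexed as loss[true_class][predicted_class]; the class labels,
# probabilities, and cost values below are invented for illustration.
import numpy as np

def expected_costs(probs, loss):
    """Expected cost of each candidate prediction, given class probabilities."""
    return probs @ loss                     # probs: (n_classes,), loss: (n_classes, n_classes)

def robust_predict(probs, loss_matrices):
    """Pick the prediction whose worst-case expected cost over the plausible matrices is lowest."""
    worst_case = np.max([expected_costs(probs, L) for L in loss_matrices], axis=0)
    return int(np.argmin(worst_case))

# Toy example: the model leans toward class 0, but one plausible cost matrix says
# that predicting 0 when the truth is 1 would be catastrophic.
probs = np.array([0.7, 0.3])
loss_symmetric = np.array([[0.0, 1.0], [1.0, 0.0]])
loss_asymmetric = np.array([[0.0, 1.0], [100.0, 0.0]])
print(robust_predict(probs, [loss_symmetric, loss_asymmetric]))   # prints 1, the cautious choice
```

With symmetric costs the model’s confident guess wins, but as soon as one plausible matrix treats a particular error as catastrophic, the cautious prediction takes over, which is exactly the asymmetry Russell is asking practitioners to notice.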

Ultimately, Russell said it’s important to avoid a tribalist mindset.

“Don’t imagine that a discussion of risk is ‘anti-AI.’ It’s not. It’s a complement to AI. It’s saying, ‘AI has the potential to impact the world,’” he said. “Just as biology has grown up and physics has grown up and accepted some responsibility for its impact on the world, it’s time for AI to grow up—unless, that is, you really believe that AI will never have any impact and will never work.”

The ASI problem is poised to be the most daunting challenge our species has ever faced, and we very well may fail. But we have to try.
