Why Banning Killer AI Is Easier Said Than Done

An unmanned US Predator drone flies over Kandahar Air Field, southern Afghanistan, on a moonlit night. (Image: AP)

As we head deeper into the 21st century, the prospect of getting robots to do the dirty business of killing gets closer with each passing day. In Max Tegmark’s new book, Life 3.0: Being Human in the Age of Artificial Intelligence, the MIT physicist and co-founder of the Future of Life Institute contemplates this seemingly sci-fi possibility, weighing the potential benefits of autonomous machines in warfare against the tremendous risks. The ultimate challenge, he says, will be convincing world powers to pass on this game-changing technology.

AI has the potential to transform virtually every aspect of our existence, but it’s not immediately clear whether we’ll be able to fully control this awesome power. Radical advances in AI could conceivably result in a utopian paradise, or a techno-hell worthy of a James Cameron movie script. Among Tegmark’s many concerns is the prospect of autonomous killing machines, where humans are kept “out of the loop” when the time comes for a robot to kill an enemy combatant. As with so many things, the devil is in the details, and such a technology could introduce a host of unanticipated complications and risks—some of them of an existential nature.

Gizmodo is excited to share an exclusive excerpt from Life 3.0, in which Tegmark discusses the pros and cons of outsourcing life-and-death decision-making to a machine, a recent initiative to institute an international ban on autonomous killing machines, and why it’ll be so difficult for the United States to relinquish this prospective technology.

From Chapter 3: The Near Future: Breakthroughs, Bugs, Laws, Weapons and Jobs

Since time immemorial, humanity has suffered from famine, disease and war. In the future, AI may help reduce famine and disease, but how about war?

Some argue that nuclear weapons deter war between the countries that own them because they’re so horrifying, so how about letting all nations build even more horrifying AI-based weapons in the hope of ending all war forever? If you’re unpersuaded by that argument and believe that future wars are inevitable, how about using AI to make these wars more humane? If wars consist merely of machines fighting machines, then no human soldiers or civilians need get killed. Moreover, future AI-powered drones and other autonomous weapon systems (AWS; also known by their opponents as “killer robots”) can hopefully be made more fair and rational than human soldiers: equipped with superhuman sensors and unafraid of getting killed, they might remain cool, calculating and level-headed even in the heat of battle, and be less likely to accidentally kill civilians.

