Robots at War: Scholars Debate the Ethical Issues
The first decade of the 21st century has been called the decade of the drone. Unmanned aerial vehicles, remotely operated by pilots in the United States, rain Hellfire missiles on suspected insurgents in South Asia and the Middle East.
Now a small group of scholars is grappling with what some believe could be the next generation of weaponry: lethal autonomous robots. At the center of the debate is Ronald C. Arkin, a Georgia Tech professor who has hypothesized that lethal weapons systems could be made ethically superior to human soldiers on the battlefield. A professor of robotics and ethics, he has devised algorithms for an “ethical governor” that he says could one day guide an aerial drone or ground robot to shoot or hold its fire in accordance with internationally agreed-upon rules of war.
But some scholars have dismissed Mr. Arkin’s ethical governor as “vaporware,” arguing that current technology is nowhere near the level of sophistication that a military robotic system would need to make life-and-death ethical judgments. Clouding the debate is the fact that any mention of lethal robots floods the minds of ordinary observers with Terminator-like imagery, creating expectations that are unreasonable and counterproductive.
If there is any point of agreement between Mr. Arkin and his critics, it is this: Lethal autonomous systems are already inching their way into the battle space, and the time to discuss them is now. The difference is that while Mr. Arkin wants such conversations to result in a plan for research and governance of these weapons, his most ardent opponents want them banned outright, before they contribute to what one calls “the juggernaut of developing more and more advanced weaponry.”