Technologies for military robots have made significant progress over the last two decades, raising issues about using autonomous robotic soldiers for active engagement in battles. What are the ethical concerns?
Alan Wagner. There are pros and cons. On the one hand, since robotic soldiers do not get emotional, vengeful, or angry, they would – in theory – follow the rules of war very closely. This could prevent some of the worst atrocities that have occurred in wartime; in that sense, robots could potentially be more ethical than human soldiers. On the other hand, robotic systems are currently not capable of reliably distinguishing between civilians and soldiers, so there is a risk that robots would accidentally target civilians. That being said, these two arguments are not mutually exclusive.
The possibility of an accident raises questions around responsibility and liability; this is at the core of the current ethical debate. One of our values when it comes to military decision-making is that a human is responsible for a decision.
But responsibility is an extremely difficult notion when it comes to military robots. If a commander authorises an autonomous system, is the commander still responsible for its course of action? If the system makes mistakes, how long does the authority persist? Over a fixed period? Or only regarding certain actions? These questions need to be considered more seriously, and also codified, in order to decide what the limitations of these systems are and where their ethical boundaries lie.
Defining responsibility and authority is a legal point, one that could be dealt with through a set of rules. But there is also a philosophical problem: is the prospect of flesh-and-blood soldiers facing bloodless machines acceptable?
It comes back to our values and belief systems. The question is not only whether it would be unfair for a soldier to face some Terminator-like, unstoppable killing machine. If your military and your society hold the value that only a human can decide to take another human’s life in a military context, then that would preclude the use of autonomous systems for most battles and other military operations.
But debating in such absolute terms oversimplifies the ethical question. You might have a value system that favours maximising the safety of your soldiers; in that case, you may want autonomous robots in your military. Values are often in conflict with one another, and there can be a trade-off. The principal value for most countries is to not lose a war, because the consequences are high, not just on the battlefield but for society as a whole. This leads to a difficult challenge: if another country develops autonomous systems that have no ethical constraints but give it a strategic advantage, are you required to do the same so as not to concede that advantage?
Conversely, there is also the question of legitimacy. If you win a battle thanks to robots, will your adversary accept your victory? Will you be able to truly make peace and put an end to the war? This is a key question, though it often goes unnoticed in ethical debates over military robots. And unfortunately, we’re walking right into it. Consider the United States’ use of drone warfare in Iraq. Evidence shows that when soldiers were not at risk, the number of American drone attacks went up, suggesting that when people are not at risk it becomes easier to start wars and fight more battles. On the other hand, in the recent Armenian-Azerbaijani war, the use of drones may have helped the war end more quickly.
In the past, the mechanisation of warfare made it costlier and bloodier before that trend reversed. Could the same happen with robots?
It’s not clear whether robots will make warfare bloodier. They could make it less bloody if the autonomous systems are well developed. Many years from now, autonomous systems could become perfect at targeting, completely avoiding civilian casualties. Thousands of lives could be saved. One therefore has to be careful about the very notion of “killer robot.”
We might not like precision-guided missiles, but they are a replacement for carpet bombing. The same happened in civilian industries such as agriculture, where, after a century of mass use of fertilisers, we are switching to a precision model. “Surgical strikes,” an expression used in the 1990s, was challenged as just another public relations motto. But the underlying trend, which is quite consistent with our value systems, is that we kept developing technologies that would minimise civilian casualties. The 1990s were the beginning of precision warfare, mostly with precision-guided missiles. Things have advanced since: we now have precision reconnaissance and capacities for precision assassination, with long-distance guns able to kill a single person in a car.
It is a difficult trade-off: should we have these technologies, or accept the wars that might result without them? The slippery-slope argument says it might become a battle over who controls these technologies and the engineers able to develop them. But another argument is that if heads of state and other key decision-makers can be targeted, precision warfare can follow the same logic of deterrence as nuclear weapons, inviting all sides to show restraint.
Does the prospect of artificial intelligence alter these considerations?
The way artificial intelligence and robotics relate is that the robot is the machine, including the sensors, the actuators, and the physical system. The artificial intelligence (AI) is the brain that makes the machine do things. They are highly connected: the smarter the system, the more capable it is. But AI is a vast field, encompassing everything from computer vision and perception to intelligent decision-making to intelligent movement. All these things could go into robotic systems and be used to make them more capable and less prone to flaws and errors. Here AI is enabling precision, which reinforces the arguments above.
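To make that relationship concrete, here is a minimal, hypothetical sketch in Python of a sense-decide-act loop: the robot side is the sensors and actuators, while the AI side is the perception and decision modules. All class and function names are invented for illustration and are not drawn from the interview or from any particular robotics framework.

```python
# Hypothetical sketch: the robot is the machine (sensors, actuators),
# the AI is the "brain" that turns sensor data into commands.

from dataclasses import dataclass


@dataclass
class Observation:
    """What the sensors report at one instant (e.g. a summarised camera frame)."""
    objects_detected: list[str]


@dataclass
class Command:
    """What the actuators are asked to do."""
    action: str


class PerceptionModule:
    """Stands in for computer vision / perception: raw data -> Observation."""
    def sense(self, raw_sensor_data: dict) -> Observation:
        return Observation(objects_detected=raw_sensor_data.get("labels", []))


class DecisionModule:
    """Stands in for intelligent decision-making: Observation -> Command."""
    def decide(self, obs: Observation) -> Command:
        if "obstacle" in obs.objects_detected:
            return Command(action="halt")
        return Command(action="proceed")


def control_loop(raw_sensor_data: dict) -> Command:
    """One pass of the sense-decide-act cycle described above."""
    obs = PerceptionModule().sense(raw_sensor_data)
    return DecisionModule().decide(obs)


if __name__ == "__main__":
    print(control_loop({"labels": ["obstacle"]}).action)  # -> "halt"
```

The point of the sketch is simply that the "smarter" the decision module, the more capable the overall machine, which is the connection the answer above is drawing.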
Can AI replace people in this context? Again, the answer is not simple. AI may replace the person for some decisions – non-lethal ones – or for all decisions, or only some of the time rather than all of the time. We are back to the legal boundaries and liability issues.
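As a purely illustrative sketch of what such a boundary might look like in software, the following Python snippet encodes one possible rule: non-lethal decisions are delegated to the system, while any use-of-force decision is deferred to a human operator. The decision categories and the rule itself are assumptions made for illustration, not something specified in the interview.

```python
# Hypothetical authority boundary: the system acts on its own for non-lethal
# decisions, but defers any use of force to a human. Categories are invented
# purely for illustration.

from enum import Enum, auto


class DecisionKind(Enum):
    NAVIGATION = auto()
    SURVEILLANCE = auto()
    USE_OF_FORCE = auto()


def requires_human_approval(kind: DecisionKind) -> bool:
    """One possible codified boundary: only non-lethal decisions are delegated."""
    return kind is DecisionKind.USE_OF_FORCE


def execute(kind: DecisionKind, human_approved: bool = False) -> str:
    if requires_human_approval(kind) and not human_approved:
        return "deferred to human operator"
    return "executed autonomously"


if __name__ == "__main__":
    print(execute(DecisionKind.NAVIGATION))    # executed autonomously
    print(execute(DecisionKind.USE_OF_FORCE))  # deferred to human operator
```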
What it really changes are the strategic parameters of the decision. We are talking about kinetic warfare here. When you use drones to lead a charge, for example, you risk much less than when you charge with soldiers. Drones are just manufactured items, easy to replace. Once you gain momentum you may never lose it, which is a strategic game changer. You could imagine a battle where you drop in a group of robots to control a bridge and they do it for years. They don’t tire or sleep. They just sit there, and nobody crosses the bridge unauthorised.