The rise of lethal autonomous weapons in the early 2000s has provoked political reactions. Since 2013, states and non-governmental organisations have been discussing the issue within the framework of the Convention on Certain Conventional Weapons at the United Nations in Geneva. The Campaign to Stop Killer Robots was also launched by a coalition of NGOs, including Human Rights Watch. But now, in 2021, it has become clear that these efforts have not been particularly successful. The arms race that has already begun makes an outright ban very unlikely; the question has therefore shifted to international regulation.
The arguments for a ban
The first argument put forward by the Campaign to Stop Killer Robots is dehumanisation: machines do not see us as people, but as lines of code. The second is algorithmic bias: the facial recognition used in some automated weapons systems works best on light-skinned faces, reproducing institutional discrimination against women and people of colour. The third is the difference between a human decision and a computer decision: machines do not grasp the complexity of a context, and the consequences of their actions can undermine the legal and social order.
Humans must therefore remain in control. Other considerations support this conclusion, such as the question of legal liability and the lowering of the threshold for triggering a conflict: a drone war is a bit like a video game, potentially allowing the parties to a war to evade responsibility. The final argument is the arms race. That race has already begun, and it is precisely what explains the failure of the campaign to ban autonomous weapons systems.
The failure of state-to-state talks
Alongside the campaign led by NGO activists, several states have also pushed for strict limits: around 30 have come out in favour of a complete ban. The UN Secretary-General has spoken out repeatedly on the subject, in strong terms: “machines that have the power and discretion to kill without human intervention are politically unacceptable, morally repugnant and should be banned under international law”. But the rapid spread of these weapons systems, produced by a growing number of countries, some of which trade in them, has sidelined these discussions.
One reason for this failure is that the countries advocating an outright ban carry little weight in the international arena, while the main producing and using countries are heavyweights: the US, China and Russia are permanent members of the Security Council. The prospect of a treaty was formally rejected in 2019, with the US and Russia the staunchest objectors and China, while less vocal, on the same side. The UK and France, the other two permanent members of the Council, have long leaned towards a ban but have nonetheless taken the industrial route of manufacturing these weapons systems.
As the nuclear field has shown, the political culture favoured by these powers is to limit access to such advances to an exclusive ‘club’ of countries rather than allow their own progress to be hindered. Yet the technologies involved in lethal autonomous weapons are largely developed in the civilian world and will become increasingly widespread. Under these conditions, the major states are counting on their technological lead to avoid being caught off guard.
The attempt to ban has therefore turned into an effort to regulate, and the questions have become more technical: how to define autonomy precisely, and how to assign legal responsibility. The position of Human Rights Watch, for example, has shifted: while continuing to argue for a ban, the NGO now demands “the maintenance of meaningful human control over weapons systems and the use of force”. Regulation would rest on the principles of international law: the obligation to distinguish between civilians and combatants, the proportionality of means to ends, and the military necessity of the use of force. But converting these sometimes abstract principles into technical solutions is not easy; hence the idea that some form of human control is now central to the discussions. As the technological trend is towards increasing autonomy with a greater role for AI, the future of autonomous weapons lies between these two poles: human control and AI decision-making.