From guided missiles to autonomous lethal weapons
Guided missiles such as the Exocet, in service since the 1980s, share some characteristics with autonomous robots: in automatic mode, they use information about their environment to adjust their trajectory. The term ‘robot’, however, is reserved for devices with greater autonomy. It covers machines capable of processing a wider variety of information, operating for several hours rather than a few minutes, moving freely, and making a wider range of decisions (e.g. whether or not to fire).
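To make the distinction concrete, here is a minimal sketch in Python. It is purely illustrative: every name, value and threshold is hypothetical and not drawn from any real system. An automatic guidance loop does one narrow thing, correcting a trajectory toward a pre-designated target, whereas an autonomous decision loop fuses several kinds of information and chooses among qualitatively different actions, including whether to engage at all.

```python
# Illustrative contrast only; all names and thresholds here are invented.

def automatic_guidance_step(position, target, gain=0.1):
    """Automatic mode, as on a guided missile: use sensor feedback to
    reduce the error between the current position and a fixed target."""
    return [p + gain * (t - p) for p, t in zip(position, target)]

def autonomous_decision_step(observations):
    """Greater autonomy: weigh heterogeneous information and choose among
    different courses of action, not just trajectory corrections."""
    if observations["target_confidence"] < 0.95:
        return "hold and keep observing"
    if observations["civilians_nearby"]:
        return "abort"
    return "engage"

# Example: the same kind of sensor picture can lead to very different actions.
print(autonomous_decision_step(
    {"target_confidence": 0.97, "civilians_nearby": True}))  # -> "abort"
```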
The greater autonomy of ‘lethal autonomous weapons’ (LAWs) has been made possible by advances in on-board computing – namely the miniaturisation of processors and the increased precision of sensors – and in mobility. There are two types of systems: drones and ground robots.
#1 Drones
Most marine and aerial drones are equipped with an automatic mode. Initially used for observation and reconnaissance missions, and later for laser target designation, they were armed in the early 2000s in Afghanistan and Iraq. The Predator (gradually replaced by the Reaper) long remained an isolated precursor, but since the 2010s the use of combat drones has become more and more frequent.
The Turks have had them since 2012 and equipped the Azeris during the 2020 war against Armenia: these models, though less sophisticated than American drones, had a decisive impact. The Russians, Indians, Israelis, South Africans and Pakistanis are all making their own drones. The Chinese are making them too (Wing Loong 1 and 2, CaiHong 1 to 6) and, unlike the United States, which supplies them only to its close allies, are selling them to third-party countries.
Various European projects have been developed: some have reached the prototype stage (EADS’ Barracuda, BAE’s Taranis), others a more advanced stage (Dassault’s nEUROn), and some have been adopted by the armed forces (Safran’s Patroller). The long-postponed European combat drone project was recently launched and is expected to be operational around 2028.
Drones have proven their effectiveness in specific theatres (combat against terrorists, regional conflicts), but in a high-intensity conflict they are not yet competitive enough. The main development challenges are stealth, endurance, sensor quality and the increased use of AI. While MALE (Medium Altitude Long Endurance) drones are several metres long, ultralight models are emerging: in 2021, a team of Chinese researchers unveiled a prototype amphibious drone weighing only 1.5 kg.
The military are now worried about a new threat: small civilian drones, equipped with rudimentary weapons (explosives), operating in swarms.
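Part of what makes swarms hard to counter is that they need no central controller: each cheap drone can follow simple local rules, so there is no single node to jam or destroy. The following boids-style sketch, with entirely hypothetical parameters, illustrates the idea: each drone steers toward its nearby neighbours, with a jitter term standing in for the separation and alignment rules of a full flocking model.

```python
# Minimal, purely illustrative sketch of decentralised swarm behaviour.
import random

def swarm_step(drones, neighbour_radius=10.0, cohesion_gain=0.05):
    """One update of a swarm: every drone reacts only to nearby
    neighbours; there is no leader and no central controller."""
    new_positions = []
    for i, (x, y) in enumerate(drones):
        neighbours = [(nx, ny) for j, (nx, ny) in enumerate(drones)
                      if j != i and abs(nx - x) + abs(ny - y) < neighbour_radius]
        if neighbours:
            # Cohesion: steer toward the average position of neighbours.
            cx = sum(nx for nx, _ in neighbours) / len(neighbours)
            cy = sum(ny for _, ny in neighbours) / len(neighbours)
            x += cohesion_gain * (cx - x)
            y += cohesion_gain * (cy - y)
        # Random jitter stands in for separation/alignment, omitted for brevity.
        new_positions.append((x + random.uniform(-0.5, 0.5),
                              y + random.uniform(-0.5, 0.5)))
    return new_positions

# Example: ten drones scattered at random, stepped forward once.
swarm = [(random.uniform(0, 20), random.uniform(0, 20)) for _ in range(10)]
swarm = swarm_step(swarm)
```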
#2 Ground robots
Used mainly for defensive missions (surveillance, site protection) or for transport, land robots are less widespread. Mobility over rough terrain poses technical problems that demand real engineering prowess, whether for ‘legged’ robots such as those of Boston Dynamics or for the robotic ‘mule’ used by the American army.
Less spectacular but more and more widely used, heavy unmanned ground vehicles (UGVs) mounted on tracks are used for transport tasks but can also support drone systems. Similar to models used in the civilian sector and less expensive than drones, they are developed by various manufacturers, such as the Estonian company Milrem Robotics, whose THeMIS was deployed in 2019 in the Barkhane mission in Mali. The Russian army is one of the few to have armed these vehicles, its Uran‑9 reportedly having been tested in Syria.
Debates
There is much debate surrounding the emergence of combat drones. The term “Killer Robot” has been promoted by activists opposed to their use, playing in particular on the public fear that these technologies will be used by nefarious forces to dominate a battlefield or even entire populations.
Another fear relates to the role of AI. In July 2015, an open letter on autonomous weapons signed by robotics and AI researchers, as well as by physicist Stephen Hawking and entrepreneurs Elon Musk and Steve Wozniak, expressed concern that “we may one day lose control of AI systems through the rise of super intelligence that does not act in accordance with humanity’s desires”.
More specifically, there is a risk of loss of control. According to a UN report, a drone in Libya killed its target in 2020 without a “direct order”¹. This raises technical questions, such as how to prevent these systems from escaping control or being hacked, as well as substantive ones: should autonomous military robots be banned? If so, how can the word “autonomous” be defined precisely? If not, how should responsibility for misuse or malfunction be allocated?
It can be argued, however, that the technology could save lives by avoiding civilian casualties or by ending wars more quickly. Conversely, Christof Heyns, who served as UN Special Rapporteur until 2016, argued strongly for a moratorium on the development of these systems. His fear was that states would enter into an arms race and that, with a much lower ‘entry cost’ than for nuclear weapons, rogue states or criminal organisations could equip themselves.
But the race has already begun. Future debates between states may no longer be about whether these systems should exist, but rather about the rules of engagement.