Rapporteurs: Erin Hunt and Gerardo Lebron Laboy, Mines Action Canada
I. THE PROBLEM
Lethal Autonomous Weapons (LAWs) refers to a class of future weapons that would select and engage (kill) targets based on their programming. They would be “autonomous” in the sense that they would not require human intervention to actuate1) (act or operate according to their programming). Being solely algorithmically driven, LAWs would be able to kill without any human interference or oversight.
The following arguments have been offered in support of the development of LAWs:
LAWs technology could offer better military performance and thus enhance mission effectiveness
- LAWs, being products of robotics, could be faster, stronger, and have greater endurance than human soldiers, not being subject to fatigue.
- Better environmental awareness: robotic sensors could provide better battlefield observation.
- Higher precision and longer range: given advanced sensor technology, LAWs could have better target precision and a longer range.
- Better responsiveness: LAWs would not be subject to the uncertain situational awareness that human participants in military operations experience because of communication failures or obstructed vision (the fog of war). Through an interconnected system of multiple sensors and intelligence sources, LAWs could process more information, and process it faster, than humans, enabling better awareness of their surroundings.
- Emotionless advantage: LAWs would not have emotions that cloud their judgement.
- Self-sacrificing nature: LAWs would have no self-preservation instinct and thus could be used in self-sacrificing ways where needed and appropriate.
- Because LAWs could be programmed to follow the Laws of Armed Conflict, and given their robotic nature, they would not be subject to human failings, permitting them to comply more rigorously with International Humanitarian Law (IHL) and to follow with high precision the principles of distinction, proportionality, and military necessity.
- LAWs would substitute for human soldiers and, as a consequence, reduce own-soldier casualties.
- LAWs’ better target precision could reduce collateral damage, such as civilian casualties or damage to civilian property.
The following arguments have been offered against the development of LAWs:
- Delegating the decision to kill to machines crosses a fundamental moral line.
Martens Clause violation
- LAWs could not fulfill the principles of humanity and would be contrary to the dictates of public conscience, thus violating the Martens Clause as stated in Additional Protocol I of 1977 to the Geneva Conventions.
Laws of Armed Conflict violation
- The complexity of the interrelation among the principles of distinction, proportionality, and military necessity, and the value judgements they require, makes the Laws of Armed Conflict unprogrammable. Thus, LAWs would not be able to comply with IHL.
- Because LAWs could be designed with machine learning algorithms, their actuation would be unpredictable and commanders would lose control of outcomes.
- LAWs’ programs could incorporate human bias introduced during the algorithmic design process, opening the possibility of unethical discrimination and inhumane treatment.
- It is uncertain how accountability could be addressed with LAWs because of the number of humans associated with the use or production of these weapons (operators, commanders, programmers, manufacturers, etc.). Neither criminal law nor civil law guarantees adequate accountability for individuals directly or indirectly involved in the use of autonomous weapons systems.
- LAWs would lack the human capacity to act against orders that seem unethical or immoral and thus could more easily serve totalitarian purposes in the hands of commanders.
Lack of constraints
- LAWs would not be subject to the human constraints imposed by emotions, empathy, and compassion, which act as an important check on the killing of civilians.
- Because LAWs would distance humans from the risks and tragedies of war by enabling remotely driven tactics, they would make the political decision to go to war easier and thus function as a force multiplier, promoting more conflict rather than less. This would lead to a war paradigm shift in which remoteness plays the central role.
- The development of LAWs would initiate a global arms race that will lead to increased international instability.
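The unpredictability and bias concerns above can be made concrete with a deliberately simplified sketch (all data, labels, and numbers below are hypothetical, invented purely for illustration): a learned classifier gives no indication when an input lies far outside anything it was trained on, yet it still returns a definite answer, and that answer is an artifact of whatever training data happened to be available.

```python
# Illustrative sketch only: a toy 1-nearest-neighbour classifier standing in
# for a learned targeting system. The sensor readings and labels are
# hypothetical; the point is the behaviour, not the realism.

import math

# Hypothetical training data: (sensor_reading, label)
training_data = [
    ((1.0, 1.2), "non-combatant"),
    ((0.8, 1.0), "non-combatant"),
    ((5.0, 5.1), "combatant"),
    ((5.2, 4.9), "combatant"),
]

def classify(reading):
    """Return the label of the nearest training example."""
    nearest = min(training_data,
                  key=lambda example: math.dist(reading, example[0]))
    return nearest[1]

# In-distribution input: behaves as its designers expect.
print(classify((0.9, 1.1)))    # → non-combatant

# Far outside the training data: the classifier still returns a
# confident-looking label, silently extrapolating with no warning
# that it has never seen anything like this input.
print(classify((40.0, -3.0)))  # → combatant
```

The same structure shows how bias enters: if the training set over- or under-represents some group of readings, the decision boundary, and therefore who gets labelled a target, follows that skew with no further human judgement in the loop.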
The solution is an international preemptive ban on the development of lethal autonomous weapons, adopted by the High Contracting Parties to the UN Convention on Certain Conventional Weapons. This ban would build on other humanitarian disarmament treaties and on the preemptive ban of blinding laser weapons in the Protocol on Blinding Laser Weapons, Protocol IV of the 1980 Convention on Certain Conventional Weapons.
The adoption of this solution depends entirely on the willingness of the parties to agree on and adopt the ban. As of today, the call for a ban on lethal autonomous weapons is supported by the following 25 states: Algeria, Argentina, Austria, Bolivia, Brazil, Chile, Costa Rica, Colombia, Cuba, Djibouti, Ecuador, Egypt, Ghana, Guatemala, Holy See, Iraq, Mexico, Nicaragua, Pakistan, Panama, Peru, State of Palestine, Uganda, Venezuela, and Zimbabwe. China has expressed support for a ban on the use of LAWs, but not on their development.
There have been numerous expressions of support for the ban from within the technology industry. Over 1,000 experts in robotics and artificial intelligence have signed two letters from the Future of Life Institute supporting the ban (Autonomous Weapons: An Open Letter from AI & Robotics Researchers; the Lethal Autonomous Weapons Pledge). Signatories of these letters include Stephen Hawking, Elon Musk, Steve Wozniak, Noam Chomsky, Skype co-founder Jaan Tallinn, Google DeepMind co-founder Demis Hassabis, and others.
Seventy religious leaders, representatives, and faith-based organisations have signed an interreligious declaration, an initiative of PAX in cooperation with Pax Christi International, calling on states to work towards a global ban on fully autonomous weapons.
More than 20 Nobel Peace Prize Laureates have endorsed a joint statement calling for a ban on weapons that would be able to select and attack targets without meaningful human control.
The United States and Russia have expressed the view that an international ban on lethal autonomous weapons would be premature. Instead, they encourage further analysis of the possible benefits this new technology could offer. The United Kingdom’s Foreign Office and Ministry of Defence have expressed their opposition to an international ban, stating that international humanitarian law already addresses the issue.
LAWs have not been fully developed yet. In fact, much of the proposed technology still does not exist. This positions the international community at an advantageous point where we can actually prevent, as we did with blinding laser weapons, a humanitarian catastrophe and its consequences altogether.
1) In the robot context, “actuate” refers to the acts or operations of a robot caused by its programming. The terminology is rooted in the sense-think-act paradigm Ryan Calo has articulated for robots:
“The utility here of the so-called sense-think-act paradigm lies in distinguishing robots from other technologies. […] The idea of a robot or robotic system is that the technology combines all three. […] My working assumption is that a system acts upon its environment to the extent it changes that environment directly. A technology does not act, and hence is not a robot by merely providing information in an intelligible format. It must be in some way. A robot in the strongest, fullest sense of the term exists in the world as a corporeal object with the capacity to exert itself physically. […] [R]obots are best thought of as artificial objects or systems that sense, process, and act upon the world to at least some degree.”
Anderson, Kenneth and Matthew C. Waxman. “Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can” Stanford University, The Hoover Institution (Jean Perkins Task Force on National Security and Law Essay Series) (2013). Online at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2250126.
Arkin, Ronald C. “The Case for Ethical Autonomy in Unmanned Systems” Journal of Military Ethics (2010) 9.4, 332-341, http://www.cc.gatech.edu/ai/robot-lab/onlinepublications/Arkin_ethical_autonomous_systems_final.pdf.
Arkin, Ronald C.; Patrick Ulam & Alan R. Wagner, Moral Decision- making in Autonomous Systems: Enforcement, Moral Emotions, Dignity, Trust, and Deception, 100 Proceedings of the IEEE Special Issue on Interaction Dynamics at the Interface of Humans and Smart Machines 571 (2012).
Arkin, Ronald C. “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture,” Technical Report GIT-GVU-07-11 https://www.cc.gatech.edu/ai/robot-lab/online-publications/formalizationv35.pdf
Asaro, Peter. “On banning autonomous weapon systems: human rights, automation, and the dehumanization of lethal decision-making” International Review of the Red Cross 94.886 (June 2012): 694-95, http://www.icrc.org/eng/resources/documents/article/review-2012/irrc-886-asaro.htm.
Bowcott, Owen. UK opposes international ban on developing ‘killer robots’, Activists urge bar on weapons that launch attacks without human intervention as UN discusses future of autonomous weapons. The Guardian, 13 Apr 2015 https://www.theguardian.com/politics/2015/apr/13/uk-opposes-international-ban-on-developing-killer-robots
Carnahan, Burrus M. and Marjorie Robertson (Jul 1996). “The Protocol on “Blinding Laser Weapons”: A New Direction for International Humanitarian Law”. The American Journal of International Law. 90 (3): 484–490
Domingos, Pedro. “A Few Useful Things to Know About Machine Learning”, (2012) 10 Communications of the ACM 78
Human Rights Watch, Losing Humanity: The Case against Killer Robots, 19 Nov. 2012 https://www.hrw.org/news/2012/11/19/ban-killer-robots-it-s-too-late
Human Rights Watch, Mind the Gap: The Lack of Accountability for Killer Robots, 9 Apr. 2015 https://www.hrw.org/report/2015/04/09/mind-gap/lack-accountability-killer-robots
Human Rights Watch, Making the Case: The Danger of killer Robots and the Need for a Preemptive Ban, 9 Dec. 2016 https://www.hrw.org/report/2016/12/09/making-case/dangers-killer-robots-and-need-preemptive-ban
Human Rights Watch, Heed the Call: A Moral and Legal Imperative to Ban Killer Robots, 21 Aug. 2018 https://www.hrw.org/report/2018/08/21/heed-call/moral-and-legal-imperative-ban-killer-robots
Kerr, Ian and Katie Szilagyi. “Evitable Conflicts, Inevitable Technologies? The Science and Fiction of Robotic Warfare and IHL” (2013) Law Culture and the Humanities.
Kerr, Ian and Katie Szilagyi. “Asleep at the switch? How killer robots become a force multiplier of military necessity”, in ROBOT LAW 333, ed. Ryan Calo, A. Michael Froomkin, and Ian Kerr, (Edward Elgar Publishing Limited, 2016)
Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), 8 June 1977. https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Treaty.xsp?action=openDocument&documentId=D9E6B6264D7723C3C12563CD002D6CE4 ; see also https://ihl-databases.icrc.org/ihl/WebART/470-750045?OpenDocument
Sharkey, Noel. “The evitability of autonomous robot warfare” International Review of the Red Cross 94.886 (June 2012): 787-799. Online at http://www.icrc.org/eng/resources/documents/article/review-2012/irrc-886-sharkey.htm.
Turing, Alan “Computing Machinery and Intelligence”, (1950) http://www.abelard.org/turpap/turpap.htm
Urban, Tim. “The AI Revolution: The Road to Superintelligence”.