Learn More: What are the risks of lethal AWS?
Unpredictable Performance
Lethal autonomous weapons have been called “unpredictable by design”. Given the complex and uncertain interactions between machine learning-based algorithms and a dynamic operational context, it is extremely difficult to predict the behavior of these weapons systems in real-world settings. This is especially the case when multiple autonomous weapons systems from multiple designers or actors interact with one another at scale. Furthermore, to have strategic utility in environments where adversarial autonomous systems are present, a weapon would need to be designed with a degree of unpredictability.
Escalation Risk & Lowered Barriers to Conflict
Given the speed and scale at which autonomous weapons are capable of operating, these weapons systems carry risks of unintentional escalation. A good parallel for how adversarial AI systems can rapidly escalate out of control is the U.S. stock market “flash crash” of 2010. Recent research by RAND has noted that “the speed of autonomous systems did lead to inadvertent escalation in the wargame” and, consequently, that “widespread AI and autonomous systems could lead to inadvertent escalation and crisis instability.” This concern has been echoed by others. Even the quasi-governmental National Security Commission on AI in the United States, a country exploring the development of lethal autonomous weapons, has noted that “Unintended escalations may occur for numerous reasons, including when systems fail to perform as intended, because of challenging and untested complexities of interaction between AI-enabled and autonomous systems on the battlefield, and, more generally, as the result of machines or humans misperceiving signals or actions. AI-enabled systems will likely increase the pace and automation of warfare across the board, reducing the time and space available for de-escalatory measures.” Further, beyond the risk of escalation, the low cost and ubiquity of lethal autonomous weapons would arguably lower the barriers to conflict, weaken diplomacy, and incentivize more wars.
Scalability of Weapons Systems & Proliferation Risk
Artificial intelligence enables tasks to be accomplished at scale and at lower cost. The resulting ability to mass-produce autonomous weapons cheaply creates a dynamic that is highly destabilizing to society. When a human is required to make the decision to kill, there is an inherent limit to how many weapons they can adequately supervise, on the order of one to a few individual weapons. Removing human judgment also removes that limit on the number of weapons systems that can be activated, meaning a single individual could activate hundreds, thousands, or millions of weapons systems. This has prompted some to classify certain types of autonomous weapons systems as weapons of mass destruction.
Beyond the risk of mass casualties, small, scalable embodiments carry a high risk of proliferation to rogue states and non-state actors, prompting recommendations that this class of weapons be urgently banned to ensure international security. Mitigating the risk of proliferation has been identified as a key priority in addressing the strategic risks of military AI.
AI Arms Race
Avoidance of an AI arms race is a foundational guiding principle of ethical artificial intelligence. However, in the absence of a unified global effort to generate political pressure and highlight the risks of lethal autonomous weapons, an arms race has begun. Furthermore, the few countries developing these systems are accelerating their efforts given the perception that the others are doing the same. Arms race dynamics, which favor speed over safety, further compound the inherent risks of unpredictable and escalatory behavior. The United Nations Secretary-General has noted that we are now in “unacceptable” territory.
Selective Targeting of Groups
Selecting individuals to kill based on sensor data alone, especially through facial recognition or other biometric information, introduces substantial risks of the selective targeting of groups based on perceived age, gender, race, ethnicity, or dress. Combined with the risk of proliferation, the development of lethal autonomous weapons systems could greatly increase the risk of targeted violence against classes of individuals, including ethnic cleansing and genocide. This is especially noteworthy given the increased use of facial recognition in policing and in ethnic discrimination, with companies citing interest in developing lethal systems as a reason not to take a pledge against the weaponization of facial recognition software. Furthermore, facial recognition software has been shown to amplify bias and to have higher error rates in correctly identifying women and people of color. The potential disproportionate effects of lethal autonomous weapons through the lens of race and gender are key focus areas of civil society advocacy.
Legal & Ethical Issues
Why are lethal autonomous weapons legally & morally wrong?
Morally Abhorrent
Algorithms are incapable of understanding or conceptualizing the value of a human life. Lethal autonomous weapons cross a clear moral red line by allowing algorithms to decide who dies and to enact that lethal harm. The UN Roadmap for Digital Cooperation has noted explicitly: “Life and death decisions should not be delegated to machines.”
Violate International Humanitarian Law
It has been argued that lethal autonomous weapons would violate the dictates of public conscience and “undermine the principles of humanity because they would be unable to apply compassion or human judgment to decisions to use force.”
Lack Accountability
Ceding the decision to apply lethal force to algorithms raises substantial questions about who is ultimately responsible and accountable for the use of force. This “accountability gap” is arguably illegal, as “International humanitarian law requires that individuals be held legally responsible for war crimes and grave breaches of the Geneva Conventions. Military commanders or operators could be found guilty if they deployed a fully autonomous weapon with the intent to commit a crime. It would, however, be legally challenging and arguably unfair to hold an operator responsible for the unforeseeable actions of an autonomous robot.”
Lack Explainability
In removing humans from the loop, the ability to explain why a particular decision was made is also removed. The ICRC has noted that a lack of “explainability” is a fundamental problem for machine learning algorithms, which are not transparent in the way they function and provide no explanation for why they produce a given output; this is a significant problem for autonomous weapons systems. If a weapon behaves unpredictably, unreliably, or erroneously, it is currently impossible to determine why.