In 2021, a small North Carolina-based drug company called Collaborations Pharmaceuticals was invited to present its thoughts about the potential for harm caused by its research at a biennial arms control conference. Collaborations Pharmaceuticals’ core intellectual property is its drug design software MegaSyn. MegaSyn incorporates machine learning (ML) techniques to identify and repurpose existing drugs to treat rare diseases, afflictions that carry little financial incentive for large drug manufacturers to cure. MegaSyn can also design new, never-before-synthesized compounds – de novo molecules – likely to be good candidates to treat diseases. As part of its ML programming, MegaSyn is trained to minimize potential harms that might befall users of the drugs it designs. What if, Collaborations researchers wondered while preparing for the conference, they inverted MegaSyn’s logic to instead maximize the potential negative side effects for patients? With this nefarious logic on board, MegaSyn generated tens of thousands of deadly toxins in the span of several hours. Its outputs ranged from the exceptionally lethal nerve agent VX to never-before-synthesized chemical weapons deadlier than any previously known. “We can easily erase the thousands of molecules we created,” Collaborations researchers wrote in Nature, “but we cannot delete the knowledge of how to recreate them.” That their benign, benevolent Artificial Intelligence (AI) research could be put to malevolent use was a possibility Collaborations “had not considered before.” Today, we are on the precipice of committing a similar failure to imagine the worst-case outcomes of a different AI technology on the threshold of being realized – Lethal Autonomous Weapons Systems (LAWS).
The Congressional Research Service defines LAWS as a “special class of weapon systems that use sensor suites and computer algorithms to independently identify a target and employ an onboard weapon system to engage and destroy the target without manual human control of the system” (emphasis added). While defensive autonomous systems like the US Navy’s Phalanx and the Russian Navy’s Kortik close-in weapon systems have been deployed for decades, these weapons are primarily designed to counter anti-ship missiles and are generally not considered LAWS because they typically target incoming munitions rather than people. These systems can be enabled to react autonomously to incoming threats faster than a human operator could, and their use is uncontroversial. More contentiously, the US and other nations are developing and operating remotely piloted aircraft for lethal operations in Afghanistan, Iraq, Pakistan, and Somalia, among other places. Importantly, these drones remain under the active control of a human operator as targets are selected and killed, though that human is often far removed from the battlefield and frequently does not know why their targets were chosen. Though ethically murky in their own right, drones are also not LAWS because a human is involved in the decision to kill. True LAWS – machines designed to independently select and destroy targets without human intervention – have not yet been deployed. But the CEO of AeroVironment, the maker of the semi-autonomous Switchblade loitering munition widely operated in Ukraine, says that the technology to enable his product to operate fully autonomously already exists. Indeed, some writers predict that LAWS will inevitably appear in the Ukraine War before it concludes.
The perceived inevitability of LAWS is propelled by a centuries-long pursuit to maximize the efficacy of the tools designed for killing, coupled with a concomitant drive to protect those doing the killing.
From prehistory until roughly World War I, humans were involved in direct combat where the maximum distance of engagement never exceeded the distance a cannonball might travel. With the advent of aircraft, combatants could kill at a range that permitted them to ignore their enemy’s humanity – a distance that allowed for killing at scales unfathomable only years earlier, culminating in the hundreds of thousands of deaths caused by just two atomic bombs dropped on Hiroshima and Nagasaki. When images of the Vietnam War became widely available to the general public, the stream of body bags filled with American soldiers contributed significantly to the war’s growing unpopularity and ultimate abandonment in 1973. In response, militaries around the world have invested heavily in increased protection for their combat troops, both to minimize losses to their military capital and to return most of their troops to society when their tours of service are complete. The US wars in Iraq and Afghanistan were the pinnacle of this principle of risk reduction über alles – soldiers ensconced in impenetrable armored combat buses, emerging only periodically, layered in ballistic protection (including the combat diaper), to engage with the local population. Concurrently, drones piloted from thousands of miles away did much of the actual killing, further reducing the likelihood that any friendly troops would be injured or killed.
Those who subscribe to the twin goals of increasing weapons’ lethality while reducing the risk to their own troops view LAWS as a positive development. Ronald Arkin claims that, in addition to reducing friendly casualties and increasing targeting precision, LAWS may help address violations of International Humanitarian Law – war crimes – by removing the human component from the act of killing altogether. Arkin and his supporters cite troubling survey statistics of American Global War on Terrorism-era soldiers and marines: a tenth reported abusing captured enemy combatants, and a third supported torturing noncombatants to save a fellow service member or obtain information about the enemy. Among the explanations for the persistence of war crimes are the desire for revenge, inconsistent leadership, poor training, and inexperience. To these LAWS enthusiasts, automating the troops obviates these concerns. LAWS have no emotions and so cannot seek revenge; LAWS need no leadership and cannot be poorly led; LAWS need not be trained and thus cannot be inexperienced.
But LAWS supporters, giddy with the promise of an intelligent weapon system capable of being programmed to autonomously fight a crimeless war, should heed the cautionary tale of Collaborations Pharmaceuticals. An agent able to avoid civilian suffering can just as easily be calibrated to maximize it. Like the evil MegaSyn designed to inflict maximum damage on the human body, a LAWS programmed to wreak destruction could cause inconceivable amounts of harm. While the United States has repeatedly reaffirmed its commitment to International Humanitarian Law (albeit with a checkered record of punishing violations of it), we should not assume that other nations will incorporate it into the design of their LAWS. Russia, in its ongoing invasion of Ukraine, has frequently and deliberately targeted civilians: bombing a Mariupol theater clearly marked Дети (“children” in Russian) so that pilots could see it, carrying out mass executions of bound noncombatants in Bucha, and bombing civilian housing in Sloviansk, Dnipro, Zaporizhzhia, and Kyiv. This is in line with Russia’s actions in Chechnya, Georgia, and Syria, and is part of a strategy of total war aimed at achieving its objectives by any means necessary. It stands to reason that, if and when they field LAWS, Russia and others will use them in ways that violate International Humanitarian Law rather than uphold it.
Thus, while we might postulate how to construct a hypothetical ethical governor to create moral autonomous killing machines of our own, the more pressing concern is how to prevent others from creating immoral ones. Even if LAWS, carefully calibrated, might reduce unnecessary suffering and minimize war crimes, their potential to cause unprecedented levels of harm outweighs any such benefit. The same traits that make LAWS attractive to their supporters – the lack of an emotional response to killing, the ability to operate independently for long durations without food or rest – make them brutally effective tools for causing the very harm that advocates of ethical LAWS mean to prevent. It is imperative that we recognize the potential for LAWS to be abused and work to stop them from being fielded before it is too late.