Lethal autonomous weapons (LAWs)

Lethal autonomous weapons (LAWs), sometimes called killer robots, are weapons that can identify and destroy a target without human intervention. In some cases, a person activates the weapon, but it chooses a target and pursues and/or strikes that target without further human input. For centuries, such weapons existed only in the realm of imagination and, later, science fiction. However, certain air defense systems in use today may qualify as lethal autonomous weapons, and other autonomous systems are being developed around the world. These weapons raise questions because it is unclear whether they meet the standards of international humanitarian law. There are also questions about how the weapons are triggered and whether they can identify targets accurately enough to keep innocent people out of danger. On the other hand, some argue that these weapons make conflict more ethical because they remove human beings, and therefore human bias, from war.


Background

The first lethal autonomous weapon may have been conceived by Leonardo da Vinci, who in 1495 made drawings of a mechanical knight controlled by a system of pulleys and cranks, though it remains unclear how the device would have been powered.

Nikola Tesla produced the first remote-controlled boat in 1898 and pitched it to the United States military, but the military did not consider it a serious option. During World War I, Germany used the FL-7, a wire-guided motorboat loaded with bombs, to help patrol its coast; the boats' range increased when they became radio controlled in 1916. In 1918, the United States produced the Kettering Bug, a winged bomb guided by an internal gyroscope.

In 1943, during World War II, Germany used what was possibly the first radio-guided bomb, the FX-1400 (also known as the Fritz X), to sink the Italian battleship Roma.

The 1950s saw the introduction of radio-controlled bombs and missiles, and development was further fueled by concerns about the Soviet Union in the 1960s. The United States did not want to fall behind in an arms race, so the government gave the Massachusetts Institute of Technology (MIT) $2 million to study computer science and artificial intelligence and how they might influence military technology.

The United States used laser-guided weapons for the first time in 1972 in Vietnam and approved the Global Positioning System (GPS) program in 1973; GPS is now used to guide autonomous weapons. Since then, air defense systems have been placed on military vessels and in compounds, and the Predator drone was launched in 1994. These systems became consistent players in the war on terrorism.

As of January 2023, variations on lethal autonomous weapons are in use around the world. South Korea has Samsung sentry robots lining the Demilitarized Zone, unmanned aerial combat vehicles are a reality, and the city of San Francisco nearly enacted a policy permitting its police to use remote-controlled robots to kill suspects when other options had been exhausted, until public outcry caused officials to rescind it.

The use of these weapons becomes more controversial the further they are removed from human control and the further humans are removed from deciding whom to attack and when.

Topic Today

While there is concern about the use of lethal autonomous weapons, most of what existed as of January 2023 does not qualify as such. Most drones, for instance, are flown by humans, and their targets are chosen and attacked by humans. The robots that drew so much ire in San Francisco were piloted by humans as well. The sentry robots in South Korea can identify targets, but human intervention is required for an actual attack to take place. Even automatic missile defense systems do not qualify, because they automatically target incoming missiles, not human beings.

There are a few exceptions to this rule. A Turkish company has produced the Kargu drone, which, according to a 2021 United Nations report, was used in Libya, where it reportedly targeted retreating combatants, an action that violates international law. Israel is also believed to have used a swarm of AI-guided drones in combat in Gaza in May 2021. Other reports of LAW use exist, but such weapons had not been relied upon extensively as of January 2023.

Some organizations, like Human Rights Watch, are calling for a total ban on these weapons. They doubt that LAWs can reliably find the correct target, choose a proportional response in battle, and determine when they are or are not a military necessity. Those in favor of a total ban include the European Parliament, the Non-Aligned Movement, and the United Nations, as well as a variety of smaller organizations. Those opposed to a total ban, preferring regulation instead, include Russia and the United States, supported by Australia, Israel, and Britain.

As recently as November 14, 2022, the United States updated its policy on lethal autonomous weapons. These regulations require some level of human involvement in how, when, and where weapons are used. They do not say that humans must be in control of the weapon itself, but rather that decisions about deploying it remain a human responsibility. The policy also requires any updates to a weapon, such as an increase in its reliance on artificial intelligence, to undergo testing before the weapon is used in combat.

While these documents seem to articulate an updated policy, unanswered questions remain. Because there is not even an internationally agreed-upon definition of an autonomous weapon, any policy written under such conditions will have gaps and leave questions open.

There is also an argument that it is possible to program an ethics-based code into these LAWs, or that this capability is in development and will be possible in the near future. Some ethicists argue that, once this can be done reliably, these weapons will wage war more ethically than humans can. Because the weapons would always have to choose according to their programmed principles, the argument goes, they would always make the best possible choice.

As of January 2023, international talks regarding the use and regulation of lethal autonomous weapons are ongoing. They are complicated by the fact that the technology is developing quickly and more of these weapons could become available at any time.

Bibliography

Allen, Gregory C. “DOD Is Updating Its Decade-Old Autonomous Weapons Policy, but Confusion Remains Widespread.” Center for Strategic and International Studies, 6 June 2022, www.csis.org/analysis/dod-updating-its-decade-old-autonomous-weapons-policy-confusion-remains-widespread. Accessed 18 Jan. 2023.

“Defense Primer: U.S. Policy on Lethal Autonomous Weapon Systems.” USNI News, 17 Nov. 2022, news.usni.org/2022/11/17/defense-primer-u-s-policy-on-lethal-autonomous-weapon-systems. Accessed 18 Jan. 2023.

Hambling, David. “Israel Used World's First AI-Guided Combat Drone Swarm in Gaza Attacks.” New Scientist, 30 June 2021, www.newscientist.com/article/2282656-israel-used-worlds-first-ai-guided-combat-drone-swarm-in-gaza-attacks/. Accessed 18 Jan. 2023.

“Killer Robots.” Human Rights Watch, www.hrw.org/topic/arms/killer-robots. Accessed 18 Jan. 2023.

McCormick, Ty. “Lethal Autonomy: A Short History.” Foreign Policy, 24 Jan. 2014, foreignpolicy.com/2014/01/24/lethal-autonomy-a-short-history/. Accessed 18 Jan. 2023.

Morris, J.D. “S.F. Halts ‘Killer Robots’ Policy After Huge Backlash – For Now.” San Francisco Chronicle, 6 Dec. 2022, www.sfchronicle.com/bayarea/article/S-F-halts-killer-robots-police-policy-17636020.php. Accessed 18 Jan. 2023.

Russell, Stuart. “Banning Lethal Autonomous Weapons: An Education.” Issues in Science and Technology, Spring 2022, issues.org/banning-lethal-autonomous-weapons-stuart-russell/. Accessed 18 Jan. 2023.

Umbrello, Steven, et al. “The Future of War: Could Lethal Autonomous Weapons Make Conflict More Ethical?” AI & Society, 6 Feb. 2019, doi.org/10.1007/s00146-019-00879-x. Accessed 18 Jan. 2023.

“What You Need to Know About Autonomous Weapons.” International Committee of the Red Cross, 27 July 2022, www.icrc.org/en/document/what-you-need-know-about-autonomous-weapons. Accessed 18 Jan. 2023.