6. UN Convention on CCW and all states shall prohibit developing or deploying lethal autonomous weapons.

Rapporteurs: Erin Hunt and Gerardo Lebron Laboy, Mines Action Canada

I. THE PROBLEM

Lethal Autonomous Weapons (LAWs) refer to future weapons that would select and engage (kill) their targets based on their programming. They would be “autonomous” in the sense that they would not require human intervention to actuate (act or operate according to their programming).(1) Being driven solely by algorithms, LAWs would be able to kill without any human interference or oversight.

The following arguments have been offered in support of the development of LAWs:

LAWs technology could offer better military performance and thus enhance mission effectiveness:
  • Superior physical performance: LAWs, being a product of robotics, could be faster, stronger, and have better endurance than human soldiers, not being subject to fatigue.
  • Better environmental awareness: robotic sensors could provide better battlefield observation.
  • Higher precision at longer range: given advanced sensor technology, LAWs could have better target precision and a longer range.
  • Better responsiveness: LAWs would not be subject to the uncertainty in situational awareness that participants in military operations may experience because of communication problems or obstructions to sight and vision (the fog of war). Through an interconnected system of multiple sensors and intelligence sources, LAWs could take in and update more information, faster, than humans, enabling better awareness of their surroundings.
  • Emotionless advantage: LAWs would not have emotions that cloud their judgement.
  • Self-sacrificing nature: LAWs would not have a self-preservation tendency and thus could be used in self-sacrificing ways if needed and appropriate.

Check out this interesting article from the World Federalist Movement (Canada Chapter) on the United States reversing their position on the prohibition of landmines.

Title: TakeAction: United States removes restrictions on landmine use: What should Canada do?
Author: World Federalist Movement (Canada Chapter)
Publication(s): World Federalist Movement (Canada Chapter)
Date: 12 March 2020
Link: https://www.wfmcanada.org/2020/03/3345/

Article Excerpt(s):

At the end of January, US President Donald Trump reversed the Obama-era ban on the use of landmines (other than in the defence of South Korea).

The brief statement from the White House says, “The Department of Defense has determined that restrictions imposed on American forces by the Obama Administration’s policy could place them at a severe disadvantage during a conflict against our adversaries. The President is unwilling to accept this risk to our troops.”

A new US policy on landmines “will authorize Combatant Commanders, in exceptional circumstances, to employ advanced, non-persistent landmines specifically designed to reduce unintended harm to civilians and partner forces.”

The United States has not signed the 1997 Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines (otherwise known as the Ottawa Treaty or Mine Ban Treaty), although 164 other countries have done so.

In a statement from Human Rights Watch, Steve Goose, director of HRW’s arms division said, “Most of the world’s countries have embraced the ban on antipersonnel landmines for more than two decades, while the Trump administration has done a complete about-face in deciding to cling to these weapons in perpetuity. Using landmines, which have claimed so many lives and limbs, is not justified by any country or group under any circumstances.”

HRW points out that “in recent years, landmines have only been used by regimes known for their human rights abuses in Burma and Syria, and by non-state armed groups like ISIS” and that the US has not actually used landmines since 1991, exported them since 1992, or produced them since 1997, and has, in fact, destroyed millions that were stockpiled.

In an op-ed in the Globe and Mail, former Canadian Foreign Affairs Minister Lloyd Axworthy and John English, a former special ambassador on landmines, address the “reducing risk” rationale, which was, as they put it, “debunked effectively during the debate on the landmine treaty negotiations in the 1990s, when the International Committee of the Red Cross, supported by senior U.S. army commanders … pointed out that the weapons were a huge risk to civilians and soldiers alike.”

They also say that the US is not acknowledging the impact and effectiveness of the Ottawa Treaty, including the drop in annual rates of those injured or killed by landmines which have accompanied the major de-mining projects still in process.

The lifting of US restrictions “gives licence to rogue combatants around the world, to say nothing of major powers such as Russia and China, which will now feel free to amend their own no-use policies.”

In an op-ed published by Postmedia in their various newspapers, Erin Hunt (Mines Action Canada) and Liz Bernstein (Nobel Women’s Initiative), in addition to raising similar points to Axworthy and English about the reduction in casualties and demining projects, also discuss the failure of “technological fixes,” such as the self-destruct mechanism in non-persistent landmines, to reduce casualties.

Hunt and Bernstein also call on the Canadian government to publicly support the ban and fund demining projects, as well as assist victims. They suggest that Canada take on the role of president of the Ottawa Treaty, an annual commitment Canada has never assumed.

What you can do

Write to Foreign Affairs Minister François-Philippe Champagne and suggest that Canada publicly re-affirm its commitment to upholding the Ottawa Treaty Banning Anti-Personnel Landmines, and confirm its financial support for de-mining activities around the world.

Here is an interesting article on lethal autonomous weapons. It popped up on social media and seemed relevant to this project.

Link: https://www.reuters.com/article/us-global-rights-killer-robots/nations-dawdle-on-agreeing-rules-to-control-killer-robots-in-future-wars-idUSKBN1ZG151

“Nations dawdle on agreeing rules to control ‘killer robots’ in future wars” by Nita Bhalla in Reuters [17 January 2020]

Some excerpts:

“Countries are rapidly developing “killer robots” – machines with artificial intelligence (AI) that independently kill – but are moving at a snail’s pace on agreeing global rules over their use in future wars, warn technology and human rights experts.

From drones and missiles to tanks and submarines, semi-autonomous weapons systems have been used for decades to eliminate targets in modern day warfare – but they all have human supervision.

Nations such as the United States, Russia and Israel are now investing in developing lethal autonomous weapons systems (LAWS) which can identify, target, and kill a person all on their own – but to date there are no international laws governing their use.”

[…]

“The ICRC oversaw the adoption of the 1949 Geneva Conventions that define the laws of war and the rights of civilians to protection and assistance during conflicts and it engages with governments to adapt these rules to modern warfare.

AI researchers, defence analysts and roboticists say LAWS such as military robots are no longer confined to the realm of science fiction or video games, but are fast progressing from graphic design boards to defence engineering laboratories.”

[…]

“You simply can’t trust an algorithm – no matter how smart – to seek out, identify and kill the correct target, especially in the complexity of war,” said [Noel] Sharkey, who is also an AI and robotics expert at Britain’s University of Sheffield.

[…]

“We need to have a new international treaty as we have for landmines and cluster munitions. We have to prevent the avoidable tragedy that is coming if we do not regulate our killer robots.”

https://www.computing.co.uk/ctg/news/3081583/autonomous-weapons-ai-war-google

This is a related article discussing the issue of cyber weapons and how they might participate on the battlefield.

That’s an understatement, Richard. It’s about a woman who quit Google last year because of their military project. She says that AI can accidentally start a war.


Original author: Zachary Fryer-Biggs, “Coming Soon to a Battlefield: Robots that Can Kill,” The Atlantic, Sept 3, 2019.

The U.S. Navy’s ship Sea Hunter patrols the oceans without a crew, looking for submarines that, one day, it may attack directly. And the U.S. Army has a missile system that, without humans, can pick out vehicles to attack. So what do we think of such things? And what can we do about it? Here’s what Zachary Fryer-Biggs wrote in The Atlantic:

https://www.theatlantic.com/technology/archive/2019/09/killer-robots-and-new-era-machine-driven-warfare/597130/

And Artificial Intelligence itself is supposed to be a real threat to humanity, according to some theorists. But maybe not quite as soon as killer robots.

Autonomous weapons that kill must be banned, insists UN chief | 25 March 2019 | Culture and Education

UN Secretary-General António Guterres urged artificial intelligence (AI) experts meeting in Geneva on Monday to push ahead with their work to restrict the development of lethal autonomous weapons systems, or LAWS, as they are also known.

In a message to the Group of Governmental Experts, the UN chief said that “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law”.

No country or armed force is in favour of such “fully autonomous” weapon systems that can take human life, Mr Guterres insisted, before welcoming the panel’s statement last year that “human responsibility for decisions on the use of weapons systems must be retained, since accountability cannot be transferred to machines”. . . .
https://news.un.org/en/story/2019/03/1035381

Lethal Autonomous Weapons (LAWs) are aptly called “killer robots,” though they don’t actually look like Arnold Schwarzenegger. They decide whom to kill without consulting a person. You’d never want to get into a fight with one.