6. The UN Convention on CCW and all states shall prohibit the development and deployment of lethal autonomous weapons.



Rapporteurs: Erin Hunt and Gerardo Lebron Laboy, Mines Action Canada

I. THE PROBLEM

Lethal Autonomous Weapons (LAWs) are future weapons that would select their targets and engage (kill) them based on their programming. They would be “autonomous” in the sense that they would not require human intervention to actuate (act or operate according to their programming).(1) Being driven solely by algorithms, LAWs would be able to kill without any human interference or oversight.

The following arguments have been offered in support of the development of LAWs:

LAWs technology could offer better military performance and thus enhance mission effectiveness:
  • Physical superiority: as products of robotics, LAWs could be faster and stronger than human soldiers and, not being subject to fatigue, have greater endurance.
  • Better environmental awareness: robotic sensors could provide better battlefield observation.
  • Higher precision at longer range: given advanced sensor technology, LAWs could have better target precision and a longer range.
  • Better responsiveness: LAWs would not be subject to the uncertainty in situational awareness that participants in military operations may experience because of communication problems or obstructed sight and vision (the fog of war). Through an interconnected system of multiple sensors and intelligence sources, LAWs could take in more information than humans, and take it in faster, enabling better awareness of their surroundings.
  • Emotionless advantage: LAWs would have no emotions to cloud their judgement.
  • Self-sacrificing nature: LAWs would have no self-preservation instinct and thus could be used in self-sacrificing ways if needed and appropriate.


Are these REALLY immoral?

I found it interesting that morality was brought up. Why is delegating killing to machines immoral? After all, I believe humans would need to program the machines to kill only in certain circumstances; the humans therefore make the decision when to kill and under what circumstances. So the humans really decide…

Alarming US acceptance of Landmine Use

Here’s an excerpt from the World Federalist newsletter.

Article Excerpt(s):

At the end of January, US President Donald Trump reversed the Obama-era ban on the use of landmines (other than in the defence of South Korea).

The brief statement from the White House says, “The Department of Defense has determined that restrictions imposed on American forces by the Obama Administration’s policy could place them at a severe disadvantage during a conflict against our adversaries. The President is unwilling to accept this risk to our troops.”


So you’re telling me that Trump and the USA are willing to use any type of weapon as long as it gives them the advantage?!

Why don’t nations just ban killer robots?

“Nations dawdle on agreeing rules to control ‘killer robots’ in future wars”
by Nita Bhalla, Reuters [17 January 2020]

“Countries are rapidly developing ‘killer robots’ – machines with artificial intelligence (AI) that independently kill – but are moving at a snail’s pace on agreeing global rules over their use in future wars, warn technology and human rights experts.”


At the end of the day, it’s all just a security dilemma. All countries have to decide to ban these weapons together; otherwise none will, because certain countries would then have the advantage. After all, these weapons give such an advantage… but at what cost?

This is a related article discussing cyber weapons and how they might be used on the battlefield.


That’s an understatement, Richard. It’s about a woman who quit Google last year over its military project. She says that AI could accidentally start a war.


Coming Soon to a Battlefield: Robots that Can Kill

By Zachary Fryer-Biggs
The Atlantic, Sept 3, 2019.

The U.S. Navy’s ship Sea Hunter patrols the oceans without a crew, looking for submarines that, one day, it may attack directly. And the U.S. Army has a missile system that, without humans, can pick out vehicles to attack. So what do we think of such things? And what can we do about it? Zachary Fryer-Biggs takes up these questions in The Atlantic.


And Artificial Intelligence itself is supposed to be a real threat to humanity, according to some theorists, though perhaps not quite as soon as killer robots.

Autonomous weapons that kill must be banned, insists UN chief

UN Secretary-General António Guterres urged artificial intelligence (AI) experts meeting in Geneva on Monday to push ahead with their work to restrict the development of lethal autonomous weapons systems, or LAWS, as they are also known.

In a message to the Group of Governmental Experts, the UN chief said that “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law”.

No country or armed force is in favour of such “fully autonomous” weapon systems that can take human life, Mr Guterres insisted, before welcoming the panel’s statement last year that “human responsibility for decisions on the use of weapons systems must be retained, since accountability cannot be transferred to machines”. . . .


Lethal Autonomous Weapons (LAWs) are aptly called “killer robots,” though they don’t actually look like Arnold Schwarzenegger. They decide whom to kill without consulting a person. You’d never want to get into a fight with one.