Drone attacked soldier without prior human approval
According to the United Nations, there has been at least one incident of an autonomous, AI-operated drone attacking people. It happened in March 2020 in Libya and may have killed humans. No human approved the attack beforehand.
The incident took place during the conflict between the Libyan government and a breakaway military faction led by the Libyan National Army's general Khalifa Haftar. The attack was carried out by a quadcopter named Kargu-2.
Developed in Turkey, these four-rotor drones were used to track down retreating enemies and deny them the chance to deploy drones of their own. The Kargu-2 units were programmed to attack even after losing connection with the human operator.
Autonomous weapons are not a brand-new notion. Take landmines, for instance, which go off when someone steps on them: a simple kind of autonomous weapon. What is new here is that these drones are equipped with artificial intelligence, according to Zachary Kallenborn, a research affiliate with the National Consortium for the Study of Terrorism and Responses to Terrorism at the University of Maryland.
Kallenborn says this may be the first drone attack carried out without human consultation. He also worries about the future: is the object recognition system reliable? Do drones never misidentify objects? How well tested are they? How widely available are they? What role do people play? These questions remain open.
Kallenborn says that artificial intelligence depends on the data it is trained on. He illustrates this with cats and dogs: a machine knows that this is a cat and that is a dog only if it has been taught from such data. But sometimes the data is incomplete, or the objects are not so simple. If even one pixel is changed, the system may treat the image as something completely different. So what can happen on a battlefield, where the picture is constantly changing and moving?
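The fragility Kallenborn describes can be sketched with a toy example. This is not a real vision model: it is a made-up nearest-template "classifier" over 2x2 grayscale grids, with invented "cat" and "dog" templates, showing how changing a single pixel can flip the decision.

```python
# Toy illustration only: a nearest-template "classifier" whose
# decision flips when a single pixel of the input changes.
# Templates and pixel values are hypothetical, not real training data.

CAT = [1, 0, 0, 1]   # made-up "cat" template (2x2 grid, flattened)
DOG = [0, 1, 1, 0]   # made-up "dog" template

def classify(img):
    """Label the image by whichever template is closer (squared distance)."""
    d_cat = sum((a - b) ** 2 for a, b in zip(img, CAT))
    d_dog = sum((a - b) ** 2 for a, b in zip(img, DOG))
    return "cat" if d_cat < d_dog else "dog"

img = [1, 0.4, 0, 0]
print(classify(img))   # closer to CAT (distance 1.16 vs 2.36) -> "cat"

img[2] = 1             # change one "pixel"
print(classify(img))   # now closer to DOG (2.16 vs 1.36) -> "dog"
```

A real neural network is vastly more complex, but research on adversarial examples has shown the same basic effect: tiny, even single-pixel, perturbations can flip a model's output.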
Many influential people, including Elon Musk and Stephen Hawking, have called for a ban on autonomous weapons. According to Jack Watling, a researcher on land warfare at the Royal United Services Institute, there is an urgent need to discuss their use. Human Rights Watch has launched a campaign to prevent their production and deployment. Broadly, worries about autonomous weapons are part of a much bigger human fear: the fear of artificial intelligence.
The Kargu-2 was created for counter-terrorist operations. The lethal drone in question hunted down one of Khalifa Haftar's soldiers as he tried to retreat.
How does the Kargu-2 function? The operator sets the target's coordinates in its software and launches it. The drone then flies toward those coordinates at a maximum speed of 72 km/h, identifies the target, attacks, and explodes. The military calls this procedure "fire and forget": after launching, operators can turn to other tasks, such as preparing for the next action, moving to another position, or even relaxing and having a snack. Quite convenient, isn't it?
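The "fire and forget" timing can be made concrete with a back-of-the-envelope calculation. This is a toy flat-plane model, not the drone's actual navigation software; the launch point, target coordinates, and distance are hypothetical, while the 72 km/h top speed comes from the text above.

```python
import math

MAX_SPEED_KMH = 72  # Kargu-2's stated top speed

def flight_time_minutes(launch, target, speed_kmh=MAX_SPEED_KMH):
    """Rough time to cover the straight-line distance at top speed.
    Coordinates are (x, y) in kilometres on a flat plane -- a toy model."""
    dist_km = math.dist(launch, target)
    return dist_km / speed_kmh * 60

# Hypothetical example: a target 6 km from the launch point
print(round(flight_time_minutes((0, 0), (6, 0)), 1))  # -> 5.0 minutes
```

In other words, even a target several kilometres away is reached within minutes, which is why the operator is free to do something else the moment the drone is launched.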
It is unknown whether Turkey controlled the deadly drone or sold it to the Government of National Accord. Either way, this violates a UN arms embargo under which no member state (including Turkey) may sell weapons to Libya. The ban came into force after Libya's brutal suppression of protesters in 2011, which sparked the country's civil war.
Should we be afraid? Before this incident in Libya, humans were in full control of machines. And now? Machines may decide to kill on their own.