THE INHERENT CHARACTERISTICS OF AUTONOMOUS WEAPONS CREATE SERIOUS DOUBTS ABOUT THEIR COMPATIBILITY WITH INTERNATIONAL HUMANITARIAN LAW.


LAW OF WAR: AUTONOMOUS WEAPONRY AND INTERNATIONAL HUMANITARIAN LAW

Autonomous weapons can be described as weapons which, once activated, select and engage targets with violent force without further intervention by a human operator[1]. Autonomy is a matter of degree: it ranges from weapons that operate with some degree of human oversight to those that operate completely independently. These autonomous weapons pose considerable challenges for international humanitarian law (IHL), in particular to the principles of distinction, proportionality and precaution. The question with regard to these weapons is whether they can satisfy the minimum requirements of IHL in the same manner that humans are capable of doing, even though humans sometimes wantonly disregard them. The minimum requirements are as follows: the ability to distinguish between military and non-military persons and objects; the ability to determine the legitimacy of targets and to make proportionality decisions; and the ability to adapt to changing circumstances and handle the unanticipated actions of an adaptive enemy. Therefore, when this paper speaks of compatibility with IHL, it refers to the above criteria.
OUTLINE
The enquiry into whether autonomous weapons create serious impediments to the realization of IHL and its aims proceeds on the understanding that reference is being made to fully autonomous weapons. On that basis, the nature of fully autonomous weapons as they exist today will be examined against the minimum requirements of IHL set out above. The focus falls on fully autonomous weapons because partially autonomous weapons are already in use and have been found capable of being tailored to comply with the dictates of IHL.
Ability to distinguish between military and non-military objects and persons.
This requirement finds its origin in the principle of distinction, which is itself composed of several sub-rules.
The parties to a conflict must at all times distinguish between civilians and combatants. Attacks may only be directed against combatants; they must not be directed against civilians, unless and for such time as those civilians take a direct part in hostilities[2]. This rule is to be read in conjunction with the prohibition on attacking persons recognised as hors de combat.[3]
Acts or threats of violence the primary purpose of which is to spread terror among the civilian population are also prohibited[4]. In the ICTY cases of Dukic and of Karadzic and Mladic, such acts were held to include unlawful firing on civilian gatherings, a protracted campaign of shelling and sniping of civilian areas, and indiscriminate firing on civilian targets.
The principle of distinction also requires, at its periphery, the ability to define combatants and civilians for the purpose of giving effect to the above rules. Article 43(2) of Additional Protocol I states that all members of the armed forces of a party to the conflict are combatants, except medical and religious personnel. Civilians, on the other hand, are defined only negatively: civilians are persons who are not members of the armed forces, and the civilian population comprises all persons who are civilians.[5] The ICTY in the Blaskic case[6] defined civilians as persons who are not, or are no longer, members of an armed force.
The exception is the levée en masse, whereby the inhabitants of a country which has not been occupied spontaneously take up arms to resist an oncoming enemy without having had the time to form themselves into regular units.
Our notions of autonomous robots are certainly biased by representations in science fiction and film. As a result, we tend to attribute human capabilities, such as intelligence or cognitive reasoning, to robots. However, it is important to realize that, to date, no system has such capabilities.
Several characteristics of autonomy in robotic systems are particularly relevant when considering potential military applications: how much variation, or how many unknowns, in the environment can be tolerated while still ensuring good performance? How versatile is the robot? How many tasks can it perform? Can it learn new tasks for which it was not programmed? For each of these questions there is a continuum of possibilities.
The set of questions above clearly exposes the fallibility that will meet autonomous systems on the battlefield. A leading robotics researcher has observed that even though significant strides have been made in creating autonomous systems, these systems are good only for the purposes for which they were made and are not necessarily able to adapt or be flexible enough to accommodate the complexities and uncertainties of war[7]. In short, autonomous weapons as they now exist lack adaptability, a feature inherent in humans that makes the rules of IHL capable of being followed.
Because they lack adaptability, autonomous systems are likely to fail to decide when civilians lose their right to protection under IHL, that is, when they can be said to have taken a direct part in hostilities; they may also fail to distinguish between combatants and civilians where the two are crowded together in one place. Humans can do all of these things. There is also a high likelihood that autonomous weaponry will fail to deal with the levée en masse situation described above. The possibilities for failure continue to grow as one digs deeper into the intricacies involved in fielding the inchoate autonomous technologies developed so far. We have only gone as far as developing autonomous vacuum-cleaning robots on wheels, which can aptly be described as autonomous machines. Once activated, such a robot will sweep the apartment and even find its recharging dock when its battery level gets low. However, the robot can accomplish only one task. It behaves in a manner pre-programmed by its engineers, has a limited scope of action, and cannot understand its surroundings or make complex decisions[8].
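To make concrete what "pre-programmed" means here, the following is a deliberately simplified, hypothetical sketch in Python of the kind of fixed control loop that could drive such a vacuum-cleaning robot. Every state, threshold and action in it is invented for illustration; the point is that each contingency must be enumerated in advance by the engineers, and anything outside that list simply does not exist for the machine.

from enum import Enum, auto

class State(Enum):
    # The robot's entire repertoire of situations, fixed at design time.
    CLEANING = auto()
    RETURNING = auto()
    DOCKED = auto()

class VacuumRobot:
    LOW_BATTERY = 0.2  # threshold chosen by the engineers, not by the robot

    def __init__(self):
        self.state = State.CLEANING
        self.battery = 1.0

    def step(self, obstacle_ahead: bool) -> str:
        """One control cycle: the whole of the robot's 'decision-making'."""
        self.battery -= 0.01
        if self.state is State.CLEANING:
            if self.battery < self.LOW_BATTERY:
                self.state = State.RETURNING
                return "seek recharging dock"
            return "turn away" if obstacle_ahead else "sweep forward"
        if self.state is State.RETURNING:
            self.state = State.DOCKED
            return "recharge"
        return "idle"  # docked: nothing else it knows how to do

Every behaviour the robot will ever exhibit is already written out in these few lines; nothing in the code allows it to interpret a novel situation. That is precisely the adaptability that the battlefield application of IHL demands and that pre-programmed systems lack.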
At this level of technological advancement, it is not advisable to venture into autonomous warfare, given the possible consequences for humanity and for the implementation of IHL.
Building on the above, the fact that these autonomous weapons are pre-programmed and unable to adapt makes it difficult to enforce IHL rules in the courts. Beyond invoking the principle of state responsibility, the use of autonomous weapons will make it difficult for courts to apply the universal legal principles of reasonableness and blameworthiness whenever a wrong is committed against humanity. Machines, especially those operating “without any humans in the loop”, are devoid of the ability to make the critical decisions surrounding the taking of life. It therefore becomes apparent that the use of autonomous weapons as they now exist would be the ultimate defeat of IHL and all it stands for.

Ability to make proportionality decisions.
This minimum requirement, if autonomous systems are to be compatible with IHL, finds its origin in one of the primal principles of IHL: proportionality. IHL prohibits the launching of an attack which may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated[9]. In its Nuclear Weapons advisory opinion, the International Court of Justice held that respect for the environment is one of the elements that go into the proportionality assessment. Commenting on the provision that embodies the principle, the Commentary on the Additional Protocols stated that “concrete and direct” military advantage was used in order to indicate that the advantage must be “substantial and relatively close, and that advantages which are hardly perceptible and those which would only appear in the long term should be disregarded”.
As is apparent from the above exposition of the nature and extent of the principle of proportionality, the autonomous technology we have at present is too underdeveloped to give effect to this principle. Decisions regarding the use of proportionate force require the deliberative reasoning of an experienced human commander, who must balance civilian lives and property against the direct military advantage anticipated. A human can even reason about the reasons for a choice before making a decision (meta-cognition). These are not strengths of computing. Machines will not be able to make the fine distinctions, or to reason in the way, required of them by IHL rules and principles.
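To illustrate why this balancing resists mechanisation, consider the following deliberately crude, hypothetical caricature in Python of what a machine-encoded "proportionality test" would have to look like. The function name, parameters and threshold are all invented; the sketch is offered for what it leaves out, not for what it does.

def attack_permitted(expected_civilian_harm: float,
                     anticipated_military_advantage: float,
                     excessiveness_threshold: float = 1.0) -> bool:
    # A caricature of Article 51(5)(b) of Additional Protocol I: the
    # evaluative terms "expected", "concrete and direct" and "excessive"
    # are forced into numbers, discarding the case-by-case deliberation
    # the rule presupposes.
    if anticipated_military_advantage <= 0:
        return False  # no concrete and direct advantage: attack prohibited
    ratio = expected_civilian_harm / anticipated_military_advantage
    return ratio <= excessiveness_threshold

The sketch is legally empty precisely because its inputs and its threshold are fictions: the law supplies no scale on which civilian lives and military advantage become commensurable numbers, and no sensor can supply one either.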
A brief outline of the current level of autonomous technology has been given in the preceding paragraphs, and it is clear that the technology is not developed to such an extent as to match human capabilities in warfare and in the observance of IHL rules. Moreover, the taking of life in law requires a moral justification, which humans can supply. The morality inherent in humans is a check against the unjustifiable loss of life. Machines inherently lack such characteristics; the consequence of fielding autonomous weapons on a battlefield where crucial decisions about human life must be made may therefore be catastrophic.

Ability to take precautions.
In the conduct of military operations, constant care must be taken to spare the civilian population, civilians and civilian objects. All feasible precautions must be taken to avoid, and in any event to minimize, incidental loss of civilian life, injury to civilians and damage to civilian objects[10]. Each party to the conflict must do everything feasible to verify that targets are military objectives.[11]
Some of the demands of the duty to take precautions require an overall assessment of the circumstances and an ability to decide on that basis. Dr Ludovic Righetti, a researcher in autonomous systems, remarked that “there are scientific challenges that we do not yet know how to solve. These include, for example: creating algorithms that can understand the world at a human level or reason about complicated tasks during manipulation”[12]. In other words, there are as yet no autonomous systems that can understand and perceive the world at a human level. Yet the principle stated above requires that a machine, if it is to be fielded, be able to understand the world at a human level, assess all the circumstances and take feasible precautions before launching an attack. Without that ability, for which no solution has yet been found, autonomous weapons cannot lawfully be fielded in battle.
In conclusion, the current level of technological advancement of autonomous systems makes it impossible to admit them into the realm of warfare, for the precise reason that they will fail to make the fine distinctions required by IHL as a body of law and to appreciate the subtleties involved in decision-making in a war setting. Furthermore, IHL was designed for the capable human mind, developed over millions of years of evolution; the intelligence its provisions presuppose sets a very high mark that robotics and autonomous systems will not reach in the foreseeable future. To realize the aims of IHL, it is advisable to shelve the idea of employing fully autonomous systems in warfare.

Bibliography
Righetti, Ludovic, Civilian Robotics and Developments in Autonomous Systems, Max Planck Institute for Intelligent Systems, Germany.
Autonomous Weapon Systems: Technical, Military, Legal and Humanitarian Aspects, Expert Meeting, Geneva, Switzerland, 26-28 March 2014.
Larter, D. B., 'U.S. Navy Moves Toward Unleashing Killer Robot Ships on the World's Oceans', Defense News, 15 January 2019.
UN Office at Geneva, The Convention on Certain Conventional Weapons, https://www.unog.ch/80256EE600585943/(httpPages)/4F0DEF093B4860B4C1257180004B1B30 (accessed 30 September 2019).
Geneva Convention (I) for the Amelioration of the Condition of the Wounded and Sick in Armed Forces in the Field, 12 August 1949.
Geneva Convention (II) for the Amelioration of the Condition of Wounded, Sick and Shipwrecked Members of Armed Forces at Sea, 12 August 1949.
Geneva Convention (III) relative to the Treatment of Prisoners of War, 12 August 1949.
Geneva Convention (IV) relative to the Protection of Civilian Persons in Time of War, 12 August 1949.
Additional Protocol I to the Geneva Conventions, 8 June 1977.

[1] Noel Sharkey (University of Sheffield, UK), 'Autonomous Weapons and Human Supervisory Control', in Autonomous Weapon Systems: Technical, Military, Legal and Humanitarian Aspects, Expert Meeting, Geneva, Switzerland, 26-28 March 2014.
[2] Articles 48, 51(2) and 52(2) of Additional Protocol I.
[3] Common Article 3 of the Geneva Conventions.
[4] Article 51(2) of Additional Protocol I.
[5] Article 50 of Additional Protocol I.
[6] Prosecutor v. Blaskic, ICTY, 2000.
[7] Ludovic Righetti (Max Planck Institute for Intelligent Systems, Germany), Civilian Robotics and Developments in Autonomous Systems.
[8] Ibid.
[9] Article 51(5)(b) of Additional Protocol I to the Geneva Conventions; repeated in Article 57.
[10] Article 57(1) of Additional Protocol I to the Geneva Conventions.
[11] Article 57(2) of Additional Protocol I to the Geneva Conventions.
[12] Righetti, op. cit.
