One of the most promising branches of military technology today is robotics. Automatic machines capable of performing various tasks already exist. However, today's unmanned aircraft, helicopters, and tracked ground vehicles, for all their potential, still cannot operate fully autonomously. Their autonomy is almost always limited to actions that do not require, so to speak, serious judgment: moving to a waypoint, tracking a location, searching for conspicuous objects, and so on. Decisions about changing a route or attacking a detected target are, for now, made by the system operator, that is, by a human. Fully automatic operation of military robots remains the domain of science fiction; scientists and engineers are only taking the first steps in this field. The development of robotic technology may affect not only the capabilities of automatic systems but also other aspects of human society.
Science fiction often raises the serious question of relations between humans and robots possessing artificial intelligence of one level or another. The current state of affairs suggests that this question is gradually migrating into real life. For this reason, some people and public organizations are already trying to anticipate future developments and, where possible, to take appropriate measures. Not long ago, the human rights organization Human Rights Watch (HRW) released a report on this problem. The paper, Losing Humanity: The Case Against Killer Robots, discusses the prospects of deploying fully autonomous combat robots, as well as the problems that, in the view of the report's authors, will inevitably arise when they operate in real conflicts. The report also examines some legal nuances of such "progress."
First, the authors of "Losing Humanity" point out that all existing robots are autonomous to some degree; they differ only in the level of that independence. Accordingly, all robots capable of independent operation, combat robots included, are conventionally divided into three groups: human in the loop, human on the loop, and human out of the loop. For combat robots this division implies the following modes of operation and levels of autonomy: if the human operator is "in the loop," the robot independently finds a target, and the human gives the command to destroy it. The other two types of combat robots can make decisions and attack on their own, but the human-on-the-loop concept retains human supervision and allows the operator at any moment to override the robot's actions at his own discretion. Robots in the human-out-of-the-loop category are fully independent and require no human control at all.
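The three oversight categories can be sketched as a small decision rule. This is purely an illustration of the taxonomy as the report describes it; the names `Oversight` and `may_engage` are invented for the example, and no real weapon-control logic is implied.

```python
from enum import Enum, auto

class Oversight(Enum):
    """The three oversight categories described in the HRW report."""
    IN_THE_LOOP = auto()      # human must authorize every engagement
    ON_THE_LOOP = auto()      # robot acts alone, but a human may veto at any time
    OUT_OF_THE_LOOP = auto()  # fully autonomous, no human control

def may_engage(mode, human_authorized=False, human_veto=False):
    """Return True if an engagement may proceed under the given oversight mode."""
    if mode is Oversight.IN_THE_LOOP:
        return human_authorized   # nothing happens without an explicit human order
    if mode is Oversight.ON_THE_LOOP:
        return not human_veto     # proceeds unless a supervisor intervenes
    return True                   # OUT_OF_THE_LOOP: no human check exists

# The category HRW considers most dangerous is the one with no check at all:
assert may_engage(Oversight.IN_THE_LOOP) is False
assert may_engage(Oversight.ON_THE_LOOP, human_veto=True) is False
assert may_engage(Oversight.OUT_OF_THE_LOOP) is True
```

The sketch makes the report's point concrete: only in the third category is there no place in the logic where a human decision can enter.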
In the view of HRW staff, the greatest danger in the future will come from robots of the third category, fully autonomous and uncontrolled by humans. Alongside technical and moral problems, the report notes related legal issues. Among other things, under certain circumstances such combat machines could profoundly affect the whole conduct of warfare, including by violating key international agreements. First of all, Human Rights Watch staff appeal to the Geneva Conventions, specifically to the part that obliges weapon developers to review their arms for safety to the civilian population. HRW believes that manufacturers of robotic combat equipment are not interested in this question and conduct no such reviews, which would entail losses among the civilian population.
The main source of the risks associated with robotic combat systems, HRW staff say, is the insufficient level of development of the prospective robots. In their opinion, a combat robot, unlike a human, cannot be guaranteed to distinguish an enemy fighter from a civilian, or an actively resisting enemy from a wounded man or a prisoner. There is therefore a significant risk that robots will simply take no prisoners and will finish off the wounded. The report's authors evidently do not hold the most optimistic view of robots' future capabilities: they believe that prospective combat systems will fail to distinguish, by appearance and behavior, an armed and actively hostile opponent from an aggressive or merely strangely behaving civilian. In addition, the human rights experts deny that robots will ever be able to predict enemy behavior. In other words, a situation is likely in which an enemy fighter who wants to surrender raises or throws down his weapon on meeting a robot, and the robot misreads the gesture and attacks him.
A direct consequence of this lack of human traits, and a dangerous one in the view of Human Rights Watch, is the possible use of robots in operations to suppress peoples' freedoms and human rights. Human rights activists consider the "soulless machine" a perfect instrument for putting down riots, for repressions, and so on: unlike a human, a robot will not question an order and will carry out everything it is told.
HRW fears that a characteristic feature of combat robots operating without human control will be the absence of any responsibility for their actions. If the operator of a remotely controlled drone strikes civilians, he will answer for it. If a robot commits a similar crime, there is no one to punish. A robot is not a sentient being capable of understanding the nature of punishment and reforming; applying sanctions to the military personnel who sent it on a mission is, in the view of HRW staff, pointless, as is punishing the developers of the robot's hardware and software. As a result, such machines could become a perfect tool for carrying out combat missions in the vilest way, through war crimes. In such cases all the blame would be laid on a faulty design or a software failure, while establishing the guilt of particular people would be impossible. Thus, as human rights activists fear, no one would suffer a deserved punishment for the crime.
In view of these grave risks, Human Rights Watch calls on countries to abandon the development of fully autonomous combat robots and to ban such equipment at the legislative level. As for the human-in-the-loop and human-on-the-loop concepts, the development of such systems should be monitored and checked for compliance with international norms. That is, every important decision should always be made by a specific person possessing the appropriate knowledge and judgment, not by automation.
Judging by current trends, far from all of the leading countries fully agree with the HRW report. By now the groundwork has been laid not only for creating automatic systems but also for using them very actively. Moreover, in a number of cases their application not only does not contradict international humanitarian law but in some sense even helps to uphold it. An example of such work is the Israeli missile defense system Iron Dome. Since this system is designed to intercept short-range rockets, its operating procedures are built so that most of its operations are performed automatically. In addition, on the appropriate command from the operators, it can automatically perform the entire interception cycle, from detecting an enemy rocket to launching an interceptor missile. Thanks to this, enemy Qassams can be destroyed before they reach populated areas. As a result, by using a virtually autonomous robot, Israel saves the lives and health of its people and also saves on the restoration of destroyed buildings.
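The fully automatic interception cycle mentioned above can be pictured as a simple pipeline: detect a rocket, predict its impact point, engage only if that point threatens a populated area. The sketch below is generic and illustrative; the function names, data, and thresholds are invented, and no actual Iron Dome logic is implied.

```python
# Illustrative sketch of an automated intercept cycle:
# detect -> predict impact -> decide -> launch, with no human in the loop.

def intercept_cycle(tracks, threatens_populated_area):
    """Decide, for each tracked rocket, whether to launch an interceptor.

    tracks: list of (rocket_id, predicted_impact_point) pairs.
    A rocket is engaged only if its predicted impact point threatens a
    populated area; rockets headed for open ground are ignored, which
    conserves interceptor missiles.
    """
    launches = []
    for rocket_id, predicted_impact in tracks:
        if threatens_populated_area(predicted_impact):
            launches.append(rocket_id)  # the whole cycle runs automatically
    return launches

# Hypothetical data: impact points as (x, y) km; the town sits near the origin.
tracks = [("r1", (0.2, 0.1)), ("r2", (9.0, 7.5))]
in_town = lambda p: abs(p[0]) < 1.0 and abs(p[1]) < 1.0
print(intercept_cycle(tracks, in_town))  # prints ['r1']
```

The point of the sketch is the one the article makes: once the operators give the enabling command, every step of this loop can run without a human decision in it.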
A second argument in favor of continuing the development of automatic "soldiers" also has a humanitarian basis. The use of large numbers of ground combat robots would allow living fighters to be withdrawn from battle and their lives saved. If a robot is damaged in combat, it can be quickly repaired, or written off for scrap and replaced by a new one completely identical to the old. And such machinery is orders of magnitude simpler and cheaper to produce than it is to raise and train a soldier. Obviously, a robot is ready for battle soon after assembly, whereas a human after birth must grow up, learn basic skills, master a great deal of information and abilities, and only then take up military training. Thus, the widespread use of combat robots would help reduce losses of manpower. In addition, servicing even a large fleet of robotic "soldiers" would require a comparatively small number of operators, mechanics, and so on. So replacing living soldiers with mechanical ones yields a double win: lives saved and money saved.
To human rights activists' concerns about the excessive independence of combat robots, the leading states have long had an answer ready. For example, a few years ago the U.S. released its development strategy for military automatic systems through 2036. The Americans will primarily develop so-called supervised independent systems: combat vehicles capable of independent operation but without the right to make grave decisions. Later, fully independent machines are planned to enter service with the armed forces, but the first models of such equipment, genuinely able to take on human duties, will appear no earlier than 2020. So in the coming years, or even decades, the battlefield will not see large numbers of fully automatic robots that know no pity or mercy and are capable only of executing orders. All major decisions will remain, as before, a human duty.
On the subject of giving robots greater autonomy, one rather curious opinion should be mentioned. Its supporters believe that it is precisely the human, not the automatic apparatus, who should be excluded from combat systems. As confirmation of this thesis they cite the "design flaws" of living people. An operator controlling a combat robot, even one fully supervising all of its actions, may fall ill, make a mistake, or even deliberately take some criminal step. On this view, the "weakest link" of a robotic combat complex is precisely the live human operator, entirely in keeping with the Latin proverb that to err is human.
Of course, at present, for entirely understandable reasons, both points of view have a right to exist: the one that refuses robots any freedom of action, and the one that speaks of the need to remove the human from the system. Each position has its pros and cons. The dispute over which concept of employing combat robots is more promising and viable is unlikely to end soon. There is only one way to find out who is right: wait and watch how combat robotics actually develops. The militaries of the world's leading nations are hardly going to choose an unprofitable and difficult path for a promising field. At the moment, though, it is rather difficult to draw any conclusions. Most likely, the current trend will continue in the coming years: remotely operated and limitedly autonomous equipment will keep developing and will be used extensively in practice, while fundamentally new hardware and software systems capable of acting completely independently are created in laboratories. The current state of such projects suggests that for the next several years the responsibility for robots' actions will rest, as before, with people, and the difficulties described in the Human Rights Watch report will remain a subject for science fiction enthusiasts and scholars.