Human rights activists fear a machine uprising

The international non-governmental organization Human Rights Watch, together with Harvard Law School, has published a report on the threat posed by fully autonomous combat robots and robotic weapons, urging all governments to abandon their development. The report runs to 50 pages, but its essence can be expressed in one sentence: autonomous robots will kill indiscriminately. The military, for their part, insist that artificial intelligence and robots will significantly reduce collateral damage and the number of civilian casualties.

According to the report «Losing Humanity: The Case Against Killer Robots», prepared by Human Rights Watch (HRW), fully autonomous combat robots and weapons systems will be developed and adopted within the next 20 to 30 years. They are being developed, to varying degrees, by all technologically advanced countries, with the United States closer than any other to creating autonomous combat drones. In the view of the report's authors, humans must not be removed from the control of weapons and military equipment, since doing so would immediately violate several provisions of international humanitarian law.

The laws of war

After a series of studies, HRW concluded that «robots are essentially machines equipped to perceive the world around them and act in accordance with their programs.» All of them possess autonomy to one degree or another, that is, they are able to perform certain actions without human intervention. The degree of autonomy varies significantly from one model to another. Robots can be conditionally divided into three categories: «human-in-the-loop», «human-on-the-loop» and «human-out-of-the-loop».

The first category means that an unmanned machine can detect and select targets on its own, but the decision to destroy them is made only by a human operator. The second category includes systems that can independently detect and select targets and decide to destroy them, while a human operator, acting as an observer, can intervene in that chain at any moment. Finally, the third category, according to HRW, comprises robots that can detect, select and destroy targets without any human involvement whatsoever.
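The difference between the three categories comes down to where, if anywhere, a human decision sits in the engagement chain. As a rough illustration (the enum names follow HRW's terminology; the function and its parameters are purely hypothetical and not taken from any real system):

```python
from enum import Enum

class AutonomyLevel(Enum):
    """HRW's three conditional categories of weapon autonomy."""
    HUMAN_IN_THE_LOOP = 1      # machine proposes targets, a human decides
    HUMAN_ON_THE_LOOP = 2      # machine decides, a human observer may veto
    HUMAN_OUT_OF_THE_LOOP = 3  # machine detects, selects and engages alone

def may_engage(level: AutonomyLevel,
               operator_approved: bool,
               operator_vetoed: bool) -> bool:
    """Whether a selected target may be engaged under a given autonomy level."""
    if level is AutonomyLevel.HUMAN_IN_THE_LOOP:
        # Engagement requires an explicit human decision.
        return operator_approved
    if level is AutonomyLevel.HUMAN_ON_THE_LOOP:
        # Engagement proceeds unless the supervising human intervenes.
        return not operator_vetoed
    # Human-out-of-the-loop: no human gate at all.
    return True
```

The sketch makes HRW's concern visible: in the third branch there is no parameter a human can set that stops the engagement.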

The Doomsday Machine. In the 1970s, the Soviet Union developed the 15E601 «Perimeter» system, an automatic control complex for a massive retaliatory nuclear strike. According to official information, «Perimeter» is responsible for relaying launch orders to all command posts and strategic missile launchers in the event that the primary command links are destroyed. According to an unconfirmed conspiracy version, «Perimeter» is a Doomsday Machine capable of initiating a massive nuclear strike fully automatically, without human intervention, in response to a similar strike by the enemy. In December 2011, the commander of the Strategic Missile Forces, Sergei Karakayev, stated that the system exists and remains on combat alert. It is believed that the United States also possesses a similar retaliatory nuclear-strike system.

From the perspective of international humanitarian law, it is the last category of autonomous combat systems that poses the greatest danger to civilians in a war zone; after all, most of the international «laws of war» are devoted specifically to protecting the civilian population from the effects of hostilities. According to HRW, autonomous combat robots not only fail to meet the requirements of international humanitarian law, but may also provoke further violations of it, or even outright rejection of existing treaties and conventions.

First, the use of robotic systems with the highest degree of autonomy violates key provisions of the Geneva Conventions on the protection of victims of international armed conflicts, which entered into force in 1950 and were supplemented by three additional protocols in 1977 and 2005. In particular, Article 36 of Protocol I to the Geneva Conventions requires all countries, «in the study, development, acquisition or adoption of a new weapon, means or method of warfare,» to verify its compliance with international law.

Thus, all types of weapons must, at each stage of development, undergo inspection and an assessment of the danger they pose to civilians if deployed in combat. In the view of human rights defenders, cluster munitions, for example, did not undergo such testing: their unexploded submunitions only increased the number of casualties among the civilian population. Worse, to help sappers find unexploded submunitions, the small bomblets were painted in bright colors that attracted children, further increasing civilian losses. Today, cluster munitions are equipped with self-destruct timers and pose no greater danger than an ordinary bomb.

Countries are not going to conduct an adequate assessment of prospective robots' compliance with international law, HRW believes. Moreover, autonomous robots would violate not only the provisions of the Geneva Conventions but also the requirements of the Martens Clause (first presented in 1899 by the Russian lawyer Fyodor Martens, it formed the basis of the Convention on the Laws and Customs of War on Land). According to the clause, «inhabitants and the belligerents remain under the protection and the rule of the principles of international law, as they result from the usages established between civilized nations, from the laws of humanity and the dictates of the public conscience.»

Above all, autonomous robots would pose a direct danger to noncombatants and to those who have withdrawn from a military conflict. In HRW's view, such systems «lack human emotions» and are unable to weigh and compare people's actions. And since robots cannot feel compassion, they would kill wounded enemies and those who have laid down their arms, even though this is prohibited by the Geneva Conventions.

At the same time, autonomous robots, however perfect the artificial intelligence they possess, will be unable to adequately assess the behavior of the people in front of them. According to HRW, while a human soldier would have little difficulty distinguishing a frightened civilian from a combatant posing a threat, a robot is incapable of this. Robots would also be unable to foresee the consequences of one action or another, and likewise unable to recognize a change in an enemy's intent if, for example, in a confrontation with the robot he wanted to surrender.

Moreover, the robots' lack of emotions would also make them a convenient instrument of repression and dictatorship: it would simply never enter a machine's «head» to rebel against the ruthless orders of its «masters».

Finally, with the widespread use of autonomous weapons of the «human-out-of-the-loop» class, it would be impossible to establish who is guilty of civilian deaths. Responsibility in such cases could presumably be assigned to the officer who decided to deploy the robot, to its programmer, to its manufacturer, or even to the robot itself. All of these options, according to HRW, are unacceptable. It would be «difficult and unfair» to accuse the officer, the programmer or the manufacturer of illegal and violent actions, while punishing the robot itself seems simply absurd. In this way, the victims of hostilities would lose their legitimate right to justice.

On the basis of all these arguments, the HRW report calls on the governments of all countries to ban, at the legislative level, the development, production and deployment of autonomous robots and weapons systems, and to carefully evaluate all technologies that could in one way or another be used to build autonomous combat machines. HRW also urged developers of robotics and weapons to carefully check their projects for compliance with national and international humanitarian law.

Without a human

Meanwhile, many governments consider the creation of autonomous robotic systems a priority. In late 2010, the U.S. Department of Defense announced a plan for the development and integration of autonomous systems for 2011-2036. According to this document, the number of aerial, ground and underwater autonomous systems will grow significantly; developers are tasked first with giving these machines «supervised autonomy» (with all actions controlled by a human), and ultimately «full autonomy». The U.S. Air Force, for its part, believes that a future artificial intelligence will be able to make combat decisions on its own without violating the law.

At the same time, the Pentagon has stated that, for the foreseeable future, decisions on the use of weapons and the selection of targets by autonomous systems will remain under human control. A similar position is held by the UK Ministry of Defence, which announced in 2011 that the country is not interested in developing fully autonomous robotic systems. HRW finds such statements laudable but insufficient: the military's views on autonomous robots may change drastically in the future.

The military themselves are convinced that robots are not an evil but a good. Above all, they spare the lives of soldiers: a machine destroyed on the battlefield can simply be replaced with a new one. Moreover, the widespread use of autonomous systems would save significant money on insurance, medical care, security, and the training and retraining of military professionals. With the mass arrival of combat robots, armed forces could be significantly downsized, focusing only on the training of technicians and operators.

Robot losses in war could be made up by building new systems to replace those knocked out, whereas supplying equally quick «replacement» soldiers in the required numbers is simply impossible. Finally, according to the military, it is precisely robots and robotic weapons that will significantly reduce collateral damage and the number of civilian casualties. The U.S. Department of Defense believes that the weak link in the chain of robotic systems is the human being: an operator who cannot go without sleep, who gets sick or tired, multiplies the chance of a fatal error.

At the same time, equipping robots with modern intelligence, surveillance and reconnaissance systems, such as high-precision sensors, powerful radars, cameras, sonars, laser equipment and precision mechanisms, would rule out the possibility of, say, a shell or bomb hitting anything other than its intended target. Robots would not touch «their own»: a «friend-or-foe» identification system would prevent it, and such systems, incidentally, are already being built into the «soldier of the future» equipment under development in the U.S., France, Germany and Russia. In short, in the military's view, combat robots are a matter of the not-so-distant future.

According to HRW, the prerequisites for fully autonomous systems already exist in abundance. Modern air defense systems, for instance, can operate in fully automatic mode; the American Patriot, the Russian S-400 and the Israeli «Iron Dome» all possess such capabilities. In July 2010, South Korea began deploying sentry robots on its border with North Korea, capable of monitoring border areas, detecting intruders and, with an operator's approval, opening fire on them.

In the United States, the carrier-based X-47B UAV is under development. Although it cannot use weapons on its own, it will be able, without human intervention, to refuel in the air and itself refuel other aircraft, including manned ones, as well as land on an aircraft carrier's deck, take off, conduct reconnaissance and identify targets. And it will do all this faster and more precisely than a human.

In fact, all that scientists and manufacturers lack is a robust self-learning system with artificial intelligence that would tie the existing technologies together. Not long ago, an artificial neural network learned, without human assistance, to recognize images of cats, human faces and body parts.

And in the military's view, there are more arguments for the creation of fully autonomous systems than against. For example, the prospective American sixth-generation fighter is expected to be unmanned and hypersonic. Given flight speeds above five thousand kilometers per hour and the huge volume of incoming data (today's drones already process more of it than a human can handle), an operator would simply have no time to make the necessary situational decisions, so this work will apparently shift onto the «shoulders» of artificial intelligence.

Yet a question of principle remains open: is it ethical to hand over to an artificial mechanism the right to make life-and-death decisions in combat? After all, like it or not, the killing of a human by a human looks like a more logical act (or at least a more emotionally justifiable one) than the killing of a human by a machine. In the first case, some rational explanation for the killing can be found; in the second, it is truly time to talk about the coming of the Apocalypse.

Vasily Sychev
