Will AI rebel against humanity? How far have weapons regulations come? September 6, 14:39

Robot weapons destroy towns and kill people.
It is not humans who decide whom to kill, but AI — artificial intelligence that has evolved through deep learning.

Such a science-fiction world may soon become reality.

Driven by this sense of crisis, a serious debate over regulating AI weapons continues in Geneva, Switzerland. Can the international community stop the emergence of these menacing new weapons before they appear? (Commentary committee member Naoya Tsuya)

AI killer robots

An ultra-compact drone small enough to fit in the palm of a hand.
With an eerie buzz it rises into the air, then suddenly dives toward a person's head, crushing the skull as if a handgun had been fired point-blank at the temple.

Hundreds of these "killer drones" fly toward a university campus on a small hill in the suburbs.
The swarm breaks through the walls of a sturdy school building and enters the classrooms. Startled students flee from the eerie intruders.

The AI installed on each drone instantly identifies its target using personal information — photos, friendships, religion, and political views — collected and analyzed from the flood of data on social media. When it finds a "target to kill," the drones swoop in one after another and kill with precision.

This is a scene from a PR video created by an international NGO campaigning for a ban on "LAWS." LAWS stands for "Lethal Autonomous Weapons Systems": AI weapons that take human lives without human judgment.

Unlike "guns" or "missiles," there is as yet no finished weapon one can point to and call LAWS; such systems are still under research and development. With the rapid advance of technology, however, their appearance has become a realistic prospect.

The PR video illustrates how such weapons might actually be used and seeks to convey the inhumanity of LAWS. The question is whether a network of regulations can be put in place before these weapons appear.

Anti-personnel landmines, biological and chemical weapons, and nuclear weapons were all actually used in war; bans and regulations were created only after their devastating consequences.

In the case of AI weapons, given the speed of technological progress and the opacity of actual weapons development, it may be impossible to stop them once "fully autonomous" weapons have emerged.

Military operations conducted by AI

There is also a movement to apply AI to "command and control systems for military operations."

The idea is to have AI analyze all information on the battlefield — the real-time positions of friendly and enemy forces captured by GPS, the characteristics of each force, the types and quantities of ammunition, and lessons learned from past operations — and determine which units to deploy and which method of attack would be most effective.

AI could derive options and launch attacks far faster than humans can.

Will AI weapons revolt against humans?

Developers of AI weapons argue that introducing AI robots would keep their own countries' soldiers out of harm's way, and that human error would decrease because machines, unlike humans, are not swayed by emotion.

However, fully autonomous AI weapons raise various concerns.
▽ Should robots be allowed to make decisions that take human lives in the first place?
▽ Would the threshold for going to war be lowered if a country's own soldiers faced fewer losses?
▽ In the hands of terrorists and dictators, such weapons could become tools of terror and repression.
▽ As machines, they are subject to malfunctions and program bugs. The possibility of being hacked in a cyberattack also cannot be ruled out.

In addition, some scientists warn that AI could stage a "revolt" against humans.

Dr. Stephen Hawking, who died last year, was one of them.
Before his death he said:

"I worry that AI's performance will rise rapidly and it will begin to evolve on its own. In the distant future, AI may develop a will of its own and come into conflict with us."

AI evolves on its own through deep learning.

As a result, it is often said that AI will eventually reach the "singularity" — the technological singularity at which it surpasses human intelligence.

AI can learn vast amounts of complex data in a short time — far faster than humans could in years — and can derive judgments and actions beyond human expectations.
However, we cannot tell why it reached a given conclusion.

What happens if AI stages a "revolt" based on a judgment beyond human understanding?

Some say we should simply cut the power the moment we notice.

However, since the reasoning behind AI's judgments cannot be seen, it may be hard to respond if a "revolt" is executed suddenly, or if it begins in a form we cannot even recognize as a revolt.

The difficult debate over regulation

How is the international community trying to regulate these AI weapons?

Regulation is being discussed under the CCW — the Convention on Certain Conventional Weapons — the framework that has restricted weapons such as anti-personnel landmines and which more than 120 countries have joined. Discussions on regulation under the Convention have continued in Geneva for five years, and on August 21 a report that can be called the first set of guidelines for rules governing LAWS was finally adopted.

The unanimously adopted report states that all weapons systems are subject to international humanitarian law, and that humans must retain responsibility for the use of AI weapons.

While some praised the creation of rules where none had existed before, disappointment spread among the international NGOs that had strongly pressed for a treaty "ban": the guidelines are not legally binding, and it is hard to deny that they amount to an ambiguous compromise.

Questions remain, such as "Won't some countries interpret the rules to suit their own convenience?" and "How can human involvement in attack decisions be verified?"

Going forward, consideration of the guidelines will continue within the CCW framework, but how to address the various concerns about the future remains a major challenge. Some argue for pursuing a new, stricter ban treaty rather than sticking with the existing convention.
But the problem is complicated. Even if such a treaty were concluded, the countries developing these weapons, such as the United States and Russia, would be very unlikely to join, and its effectiveness would be in doubt.

Military technology in pursuit of supremacy

Despite criticism from scientists and the seeds of anxiety sown among many citizens, some countries — including the United States, Russia, and China — show no sign of halting the development of AI weapons. That is because this military technology promises an advantage: it could bring hegemony in the coming era.

In the 20th century, humanity created nuclear weapons, instruments of indiscriminate slaughter; now we may be on the verge of an unknown danger — the emergence of AI weapons beyond human control.

Will we be able to report to Dr. Hawking that his fears proved unfounded?

Naoya Tsuya, commentator