The use of artificial intelligence in war requires careful consideration of its humanitarian, legal, ethical and security implications (Getty)

Have you lost your trust in humanity? The question comes to mind heavy with doubt and uncertainty. As events and tragedies escalate around the world, and as bitter truths come to light, our trust in humanity has gradually begun to fade.

In the Gaza Strip, more than 2.3 million people face annihilation and starvation. As some governments halt their support and funding for the United Nations Relief and Works Agency for Palestine Refugees (UNRWA), a harsh truth emerges: humanity has thoroughly failed man, and the time has come for us to place our hopes in non-human entities known today as artificial intelligence.

Amid these tragic events, artificial intelligence emerges as a controversial subject, where ideas oscillate between fear and hope, and the urgent need arises to understand what artificial intelligence is and how it works. Only through this understanding can we know whether it will one day take the lead in guiding humanity toward a bright future, or whether it will be the mechanism that hastens our demise.

Technology or human?

The integration of artificial intelligence into military operations has played a pivotal role in recent conflicts, and has led to a significant rise in the number of civilian casualties.

The militarization of artificial intelligence has serious implications for global security, including the development and deployment of lethal weapons systems that can operate without human intervention, and the deepening of technologically advanced countries' military dominance over Third World countries.

Artificial intelligence makes destructive conventional weapons more intelligent: it is used, among other things, to analyze drone footage and other intelligence sources in order to identify targets, to guide missiles, and to power advanced surveillance systems.

This is what Israel did to select and expand its targets in the war on Gaza, using an artificial intelligence system called “Habsura” that accelerated the pace of targeting. The system extracts large amounts of information from various sources, such as communications data, drone footage, and surveillance data, then analyzes it and produces targeting recommendations.

This transformation raises profound questions about the impact of technical progress on the essence of humanity and its appreciation for life. The use of artificial intelligence in war is a complex issue that requires careful study of its humanitarian, legal, ethical, and security implications.

This brings back the question of destructive technology's impact on our humanity: is it technology that distorts our humanity, or is the distortion inherent in us, with technology merely a mirror that reflects it?

Israel used an artificial intelligence system called “Habsura” that accelerated the pace of targeting in the Gaza war (Getty)

Building an ethical machine

Dr. Paola Ricorti, associate professor at the Berkman Klein Center for Internet and Society at Harvard University, argues that dominant artificial intelligence has become a force capable of committing violence through three cognitive processes: the transformation of data through extraction and expropriation; algorithmization through mediation and governance; and automation, which enables violence, inequality, and the shifting of responsibility.

Ricorti finds that these intricate cognitive mechanisms lead to the development of global classification systems that reinforce cognitive, economic, social, cultural, and environmental inequalities among the world's peoples.

Although these issues pose a challenge to humanity's adoption of artificial intelligence, computer scientist and inventor Ray Kurzweil, known for his work in artificial intelligence and his predictions about the future of technology, sees in it an opportunity for progress and improvement in every aspect of human life.

Kurzweil, one of the optimists about the future of artificial intelligence, believes this technology will be the key to confronting the major global challenges that threaten humanity. He also believes that merging with artificial intelligence will open unlimited doors of possibility, allowing us to overcome the biological limitations that prevent us from enhancing our capabilities enormously.

Kurzweil believes that through continuous improvements in artificial intelligence, humanity will be able to achieve unprecedented feats by harnessing the superior mental capabilities this kind of technology can provide.

Kurzweil's predictions undoubtedly raise hopes for humanity's salvation, but in an inevitable equation, every increase in human capability is matched by an increase in the capacity for harm and destruction. Moral values remain the most effective deterrent against destructive conflicts, which is why we must be aware of what those moral values are.

Is there a conscious machine?

Artificial intelligence has seen great development in recent years across many areas, such as industrial applications, natural language processing, disease diagnosis, developing treatments, and robotics. Yet these advances still fall within the scope of narrow artificial intelligence, and the question of machine consciousness remains a complex and intriguing topic.

The philosopher John Searle devised a thought experiment known as “The Chinese Room” to challenge the claim that a machine running a program through a set of instructions can possess a “mind” or “consciousness” comparable to a human's. The experiment aims to refute arguments for the possibility of strong, conscious artificial intelligence.

The Chinese Room imagines a person who does not understand Chinese shut inside a closed room; by following written rules, that person can respond in Chinese so convincingly that someone outside the room becomes certain the person inside understands the language.

Just as the person inside the room does not know Chinese even though he can answer questions in it, a computer running a conversational program in Chinese does not understand the conversation either; it derives its answers from rules and software that confer neither understanding of Chinese nor reason nor awareness on the machine.
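The intuition can be made concrete with a small sketch: a program that answers Chinese messages purely by looking them up in a rule table. The table and the fallback reply below are hypothetical illustrations, not part of any real system; the point is only that fluent-looking output requires no understanding.

```python
# A minimal sketch of the Chinese Room intuition: a responder that maps
# input symbols to output symbols by rule lookup alone. The rule table
# stands in for Searle's "written rules"; nothing here models
# understanding, only symbol manipulation.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I am fine, thank you."
    "你会说中文吗？": "当然会。",   # "Do you speak Chinese?" -> "Of course."
}

def respond(message: str) -> str:
    """Return the scripted reply for a message, or a stock fallback.

    The function never interprets the characters it handles; it only
    matches them against keys in the rule book, just as the person in
    the room matches shapes against instructions.
    """
    return RULE_BOOK.get(message, "请再说一遍。")  # "Please say that again."

if __name__ == "__main__":
    print(respond("你好吗？"))  # fluent-looking output, zero comprehension
```

However large such a rule book grows, the lookup itself never acquires meaning; that is precisely Searle's objection to equating program execution with understanding.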

In the absence of awareness, current artificial intelligence is incapable of either leading humanity or destroying it, and even the task of steering its development for the better remains difficult.

Artificial intelligence scientist Yann LeCun takes a similar view: he holds that artificial intelligence is not yet even as intelligent as a pet, and that current systems remain far from the aspects of consciousness that would make them truly intelligent.

The machine has no awareness of what violence is, nor any reason of its own to practice it; yet humans have employed it to serve their harmful interests, in ways the machine itself cannot know amount to violence against others. If we fear artificial intelligence turning on humans, should we not, in turn, stop this violence ourselves?

But even if today's artificial intelligence could somehow gain consciousness, it would need something more important in order to outperform and control humans: motivation.

Machine motives

Motivation derives from the Latin movere, meaning to set in motion or prepare for action. It is a physiological process that readies the organism for psychological activity and for satisfying needs, desires, and drives. Motives are the drivers of behavior that make an organism move and act toward a goal; depending on their intensity, they may sustain that activity, halt it, or intensify it.

In his book Why Nations Fight: Past and Future Motives for War, Richard Ned Lebow identifies four basic motives for war between nations: fear of a threat posed by another party; economic, political, and strategic interest; the desire to maintain and enhance standing; and revenge for injustice. These motives remain important factors in shaping international political dynamics today.

Artificial intelligence can help shape political contexts that may ultimately lead to conflict or war, but it has no motives of its own; it is merely a technical tool that relies on programming and data to carry out tasks and make decisions.

Although advanced artificial intelligence systems capable of filtering and refining decisions can be developed in military contexts, the final decision remains subject to human will.

American novelist Isaac Asimov set out ethical principles for robots, summarized in three laws known as Asimov's Laws (Getty)

Humanization of the machine

In their relentless pursuit of technology, scientists have tried to make robots more human-like, seeking ways to reproduce humans' organic movements and behaviors in these machines. Despite the great progress they have achieved, challenges still stand in the way of humanizing robots.

They have given robots human features with the aim of bringing their appearance closer to a human's, even when those features serve no function, such as the movement of eyelashes. Yet despite these efforts, robots still lack precision and smoothness in imitating such movements.

In their writings, authors and philosophers have made determined attempts to implant human emotions in robots, to give them a beating heart. While these attempts failed to kindle sparks of love, anger, or vengeance inside these machines, they did create a strange division in the human imagination between good machines and evil ones.

Technology has made great strides, but the robot, which cannot even raise its eyes the way humans do, remains far from producing the mysterious chemical reactions that generate human emotions.

Asimov's laws

Through the lens of fiction, the American novelist and famed science writer Isaac Asimov set out ethical principles for robots, which he summarized in three basic laws (a small sketch of their priority ordering follows the list):

  • First: A robot may not harm a human being, or allow a human being to come to harm.

  • Second: A robot must obey the orders given to it by humans, except where such orders conflict with the First Law.

  • Third: A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
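What gives the three laws their structure is their strict ordering: each law yields to the ones above it. The sketch below illustrates that ordering only; the Action type and its flags are hypothetical conveniences for the example, not part of any real robotics framework.

```python
# A toy sketch of the priority ordering in Asimov's three laws: each law
# is checked in turn, and a higher law always overrides the ones below it.
# The Action dataclass and its flags are invented for illustration.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False        # would carrying this out injure a person?
    ordered_by_human: bool = False   # was this commanded by a person?
    endangers_robot: bool = False    # would it destroy the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never harm a human (highest priority).
    if action.harms_human:
        return False
    # Second Law: obey human orders, since the First Law is not violated.
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, avoid actions that endanger the robot itself.
    return not action.endangers_robot

if __name__ == "__main__":
    # An order to harm a human is refused: the First Law outranks the Second.
    print(permitted(Action(harms_human=True, ordered_by_human=True)))      # False
    # A self-destructive order is obeyed: the Second Law outranks the Third.
    print(permitted(Action(ordered_by_human=True, endangers_robot=True)))  # True
```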

Although Asimov's laws of robotics are instructions embedded in his stories rather than scientific laws, they have carried great weight in discussions of the technology and have been debated by prominent figures in artificial intelligence, including Ray Kurzweil, iRobot founder Rodney Brooks, and roboticist Daniel Wilson.

Today we have begun to witness and feel the actual harm of artificial intelligence, especially in military operations. As these challenges grow, it becomes necessary to adopt a strict legal and ethical framework to govern the uses of artificial intelligence and to ensure it is used in a way that guarantees justice and respect for the basic rights of all individuals.

Machines are undoubtedly becoming more and more like us, not only in appearance but also in the way they think. Herein, perhaps, lies the danger: it becomes difficult to distinguish them from us, and from our thoughts and desires.

Perhaps humans' fear of machines stems not from any apparent desire of theirs to harm humanity, but from our own attempts to make them more like us. If this indicates anything, it is that humans fear most the entities that resemble them, entities which, by virtue of their complexity, may set off down an unpredictable path, toward either immortality or annihilation.

Source: Al Jazeera + websites