The era of robots is at the door. It is inevitably coming to help humanity advance to the next stage in the march of civilization, and it has already begun to take its place in our lives.

There are already robots that deliver food and packages, harvest fields, and perform many other tasks across all areas of life.

In fact, this is only the beginning; what we will witness in the coming years goes far beyond what ordinary imagination can conceive.

The global robotics market is expected to reach $10.47 billion by 2027, growing at a rate of more than 43.4 percent over the 2021-2027 forecast period, according to a recent report by the "energysiren" platform.

Alongside this steady growth, many people harbor doubts about how far these robots can be trusted. Is it possible to build trust between humans and robots, or the various artificial intelligence systems and programs that help employees and workers accomplish their tasks?

This is what a professor at the University of Georgia in the United States is trying to answer, with support from the US Army, which seeks to build trust and bridge the gap between humans and machines, according to the university's website.

Trust is a matter of life or death

Aaron Schecter, an assistant professor in the Department of Management Information Systems at the University of Georgia's Terry College of Business, has received two grants from the US Army, worth nearly $2 million, to study human-robot interaction.

While AI in the home can help with ordering groceries or meals, AI on the battlefield operates under far more dangerous circumstances, where cooperation and trust between soldiers serving in the same squad can be a matter of life or death.

In this context, Schecter says: "In the army, during battle in the field, they want robots capable of cooperating and performing tasks that reduce the burden on soldiers, while at the same time earning those soldiers' trust... The goal is to build trust between the two parties."

While the famous Terminator films may be the first thing that comes to mind when military robots are mentioned, Schecter explains that work is under way to develop robots capable of carrying out advanced reconnaissance missions, or transporting heavy loads in place of foot soldiers, who often find themselves carrying about 80 pounds of equipment on their backs.

In one of Schecter's team's projects, people trusted algorithmic advice more than human advice (Anadolu Agency)

"Imagine an autonomous drone, not remote-controlled, flying above the soldiers like a pet bird, watching the road ahead and offering real-time audio advice such as: 'The road to your right is rough, dangerous, and exposed to ambush; I recommend taking the road to the left,'" Schecter says.

"We don't want people to hate, resent, or ignore robots; they have to be willing to trust them in life-and-death situations for them to be effective. So how do we get people to trust robots? How do we get people to trust AI?"

To answer this important question, Professor Rick Watson, a colleague of Schecter's who has previously co-authored books and research on artificial intelligence with him, believes that "studying how machines and humans work together will become more important in the future, especially as artificial intelligence advances to unprecedented levels."

The limits of artificial intelligence

"I think we're going to see many new applications of AI," Watson says, "and we'll need to know when it works well, so that we can avoid situations where this intelligence is dangerous to humans, or at least avoid those situations where it is hard to justify a decision made by a robot that looks wrong from a human point of view. There are limits to artificial intelligence, and we have to be fully aware of that and deal with it."

Understanding when AI systems and robots work well led Schecter to take what he knows about how human teams work and apply it to the working dynamics of human-robot teams.

"The research I do is not focused on designing and building robots, but more on the psychological side: When can we trust something? What mechanisms is trust built on? How do we get humans to cooperate in this area? We are very tolerant of human mistakes, but if a robot makes a mistake, can you forgive it?"

Schecter first gathered data on when people were most likely to take a robot's advice.

Then, across a range of projects funded by the Army Research Office, he analyzed how humans responded to advice from machines, and compared it with how they responded to advice from other people.

In one of the tests, a robot watches its companion, guesses its task, and then helps or hinders it based on its own goals (Getty Images)

Relying on algorithms

In one research project, Schecter's team gave subjects specific tasks to test, such as plotting the shortest route between two points on a map. The team found that people trusted advice from an algorithm more than advice from other humans.
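The route-planning task lends itself to a simple illustration. The sketch below is not the researchers' actual setup, but shows the kind of algorithm that could generate such route advice: a breadth-first search that finds the shortest path between two points on a grid map.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search for the shortest route on a grid map.

    grid: list of strings, '.' = open cell, '#' = blocked cell.
    Returns the list of (row, col) cells from start to goal,
    or None if no route exists.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}  # each cell's predecessor on the search tree
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk back through predecessors to rebuild the route.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# A small map with a wall down the middle: the shortest route detours around it.
grid = [
    ".#.",
    ".#.",
    "...",
]
route = shortest_path(grid, (0, 0), (0, 2))
print(len(route) - 1)  # number of steps in the shortest route: 6
```

Because breadth-first search explores cells in order of distance from the start, the first time it reaches the goal it has provably found a shortest route, which is exactly the kind of verifiable correctness that may make algorithmic advice easy to trust.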

In another study, his team found evidence that humans may also rely on algorithms for other tasks, such as solving word-association puzzles or brainstorming a problem: analyzing all of its dimensions, its future implications, and the best way to solve it.

Schecter commented on these findings: "We are looking at the ways an algorithm or artificial intelligence can affect human decision-making. We tested a variety of tasks in an effort to discover when people rely more on algorithms than on other humans. We found that when humans perform analytical tasks, they rely more on algorithms and computers, and interestingly, this pattern carried over to other activities."

"This is a very important area for building trust between humans and robots," he adds.

The global robotics market is expected to reach $10.47 billion in 2027 (Getty Images)

Can robots acquire social skills that help build trust between themselves and humans?

This is what a group of researchers at the Massachusetts Institute of Technology (MIT) tried to answer by designing a new machine-learning system that helps robots understand and engage in certain social interactions with humans, enabling machines to understand what it means to help or hinder one another and to learn to perform these social behaviors on their own, according to the university's website, which published the research.

The researchers designed a special simulated environment in which one robot watches a companion robot, guesses the task it is trying to accomplish, and then helps or hinders that other robot based on its own goals.

The researchers demonstrated that their model could produce realistic and predictable social interactions. The team then showed human viewers videos of the robots interacting with one another, and the viewers agreed that the social behaviors the robots displayed were plausible.

Enabling robots to demonstrate social skills could lead to smoother and more positive human-robot interactions in the future.

In this context, Boris Katz of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) says: "Robots will live in our world very soon, and they urgently need to learn how to communicate with us on human terms... This is very early work and we are barely scratching the surface, but I feel this is the first serious attempt to understand what it means for humans and machines to interact socially."

There is no doubt that robots are inevitably coming, and that they will change everything in our lives, including our behavior and how we see ourselves and the world around us, in an age where everything has become digital, even our human feelings.