- Igor Stanislavovich, the topic of artificial intelligence migrated long ago from science fiction films into everyday reality. Yet not everyone knows what AI actually is. What is it, technically and mathematically, if it can be put into words understandable to non-specialists?

- Artificial intelligence has been talked about and developed for a long time now, about 60 years. I graduated from the Faculty of Mechanics and Mathematics of Moscow State University in 1983 and then began working in the artificial intelligence department of the Computing Center of the USSR Academy of Sciences. Even then, in the Soviet Union, there was a Council on Artificial Intelligence under the State Committee on Science and Technology. The council was headed by the head of our AI department, academician Germogen Sergeevich Pospelov, a combat general who had worked on artificial intelligence during the Great Patriotic War: the automatic landing of aircraft at an airfield. In the 1980s, our Computing Center developed speech recognition and synthesis, text analysis, spell checkers, antiviruses, face recognition, music synthesis and other AI systems.

We are now seeing a typical "hype bubble" around AI.

There have already been three or four such bubbles around AI since the 1960s, when it seemed that intelligent home robots were about to appear, artificial judges were about to pass judgment, artificial doctors to treat patients, and so on.

What we are seeing now is probably already the fifth wave of such hype.

To understand what AI is, one must first of all discard the Hollywood image of artificial intelligence as self-aware robots that either want to destroy humanity or, on the contrary, will be our friends, indistinguishable from us.

In fact, from a developer's point of view, AI is a collection of optimization and machine learning techniques that enable a computer to mimic some of a person's cognitive functions.

- In 2014, an AI passed the Turing test for the first time.

However, algorithms are still very far from thinking in the human sense of the word.

Is this possible in principle?

Or are we, our brains, and computers simply in different dimensions, figuratively speaking?

- Yes, that is still very far off.

The brain has nothing in common with a computer, and computers are unlikely ever to be made to resemble a brain, no matter what AI "evangelists" may say.

In public discourse, in popular articles and in the speeches of "evangelists", AI is generally divided into two main categories: "strong" and "weak".

"Strong AI" is an imaginary entity, the dream of science fiction writers: an artificial intelligence equal or superior to the human mind.

It does not exist, but adherents of the technological religion believe that it will one day appear and "change the world."

Weak AI is intended for narrow applications.

It has already been developed and is used in everyday life.

Dozens of such programs are already running in the smartphones we are used to.

This includes speech recognition, fingerprint recognition, spell checking, T9 word suggestions, face recognition, recognition of scene elements in the camera, and so on.

Or take a car navigator: an extremely complex program, and also "weak" artificial intelligence.

These are complex programs that seem familiar, simple and uninteresting to the user, because they are at hand and already working.

Such AI already solves many intellectual tasks, often better than a human, but there are also tasks it has not yet learned to solve with acceptable quality.

And the real "conversational" artificial intelligence, which would confidently pass the dialogue Turing test, has not yet been created, although this has been written about in the news more than once.

So far, "strong" artificial intelligence, having consciousness and thinking independently, is a Hollywood fairy tale.

Most likely, he will remain with them.

In fact, we do not know what human consciousness is.

We cannot even determine whether our interlocutor has consciousness; we have no means for that.

That is why Turing proposed treating as an analogue of human intelligence a machine that can deceive experts by impersonating a person while communicating virtually over a network.

On the other hand, virtual interlocutors on the Internet now massively "pass" the Turing test when people call technical support or a contact center.

People often mistake a robot for a real operator, but they are not experts, and this is not a formal test situation.

- Does AI have a “smart ceiling”?

For example, a machine can solve any mathematical problem, but can it pose one?

- No, a machine cannot solve just any mathematical problem.

A machine can carry out a computational task prepared for it.

There is, of course, a special discipline of "automatic theorem proving", but it is not about independently solving mathematical problems; it, too, is a narrow, specialized application.

A machine usually has a very narrow application. In theory, for example, it can be taught to recognize certain types of cancer on X-rays better than a diagnostician does, because a machine can be shown millions of images, more than an ordinary doctor sees in an entire lifetime.

Although studies so far show that even this goal has not been achieved.

In some cases, the ceiling for machine performance may be higher than that of a human.

But independently solving complex problems, let alone posing them, is not even close, and I think it never will be.

- How do artificial intelligence and neural networks learn?

- It learns from data.

Mostly from pre-labeled data.

The most popular AI technology right now is neural networks.

In ten years, there will probably be something different.

The explosion in popularity of neural networks began about ten years ago, although in general they are already about 40 years old.

The very name "neural" is purely marketing, there are no neurons inside, of course.

Neural networks are simply probability matrices, rectangular tables of coefficients through which data is pushed: the data vectors are multiplied by these coefficients.

This process can be pictured as a meat grinder with many interchangeable perforated plates (flanges) through which the raw "pasta" of data is squeezed. You need the "pasta" at the outlet to have a certain shape, so you put a certain plate on the meat grinder. That, in essence, is what a neural network does with data: it transforms it in a certain way.

The data passes through the matrices, is multiplied by the coefficients, and acquires the desired properties.
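
As a rough illustration, not a description of any real production system, here is a minimal Python/NumPy sketch of this "tables of coefficients" picture; the matrix sizes and values are invented:

```python
# A minimal sketch of the idea above: a "neural network" is just tables
# of coefficients (weight matrices) that data vectors are multiplied by.
# All sizes and values here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.normal(size=(4, 3))  # first table of coefficients: 4 inputs -> 3
W2 = rng.normal(size=(3, 2))  # second table of coefficients: 3 -> 2 outputs

def forward(x):
    """Push a data vector through the matrices: multiply, squash, multiply."""
    hidden = np.tanh(x @ W1)  # multiply by the first table, apply a nonlinearity
    return hidden @ W2        # multiply by the second table

x = np.array([0.2, -1.0, 0.5, 0.3])  # one input data vector
print(forward(x))                    # the transformed data, with a new "shape"
```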

But to figure out exactly which holes are needed, that is, which coefficients, you can, figuratively speaking, work backwards: take ready-made, dry "pasta" (labeled data) whose shape suits you, and a plate of raw clay.

Then form the holes in the future ceramic plate on that model, by pressing the ready-made, solid "pasta" through it.

Finally, fire the plate with the right holes until it hardens, and drive the next batch of raw data through it.

The neural networks that AI developers use are first shaped in this way on already labeled, processed data.

This is called machine learning.
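
A toy sketch of that "forming the holes" step in Python/NumPy, assuming an invented XOR-style labeled dataset and made-up layer sizes and learning rate; the labeled examples stay fixed while the coefficient tables are nudged until the outputs match the labels:

```python
# A toy sketch of machine learning on labeled data: the (input, label)
# pairs are fixed, and the tables of coefficients are adjusted until the
# outputs match the labels. Dataset and hyperparameters are invented.
import numpy as np

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # labeled inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # desired outputs (XOR)

rng = np.random.default_rng(1)
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

for _ in range(5000):
    h = np.tanh(X @ W1)                               # forward pass
    err = h @ W2 - y                                  # distance from the labels
    grad_W2 = h.T @ err                               # gradient for table 2
    grad_W1 = X.T @ ((err @ W2.T) * (1 - h ** 2))     # gradient for table 1
    W1 -= 0.05 * grad_W1                              # nudge the coefficients
    W2 -= 0.05 * grad_W2

print(np.round(np.tanh(X @ W1) @ W2, 2))  # should come out close to the labels
```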

- You have already mentioned machine translation. How has it made such progress in recent years? How do such translators learn?

- The idea is very simple.

Humanity has accumulated, including on the Internet, a great many so-called parallel texts; the Bible, for example, has been translated into all the languages of the world.

For a neural network to learn from them, the paired texts are split into pieces and "aligned" to establish which sentence in one language corresponds to which sentence in the other.

The resulting pairs of "parallel" sentences are loaded into a machine, which records in special indexes which pieces of text translate which other pieces.

And that is all; afterwards it simply applies this knowledge to "assemble" a translation from the text.

There are, of course, many complex stages of text processing, which I will not describe in detail, but the general idea is this.
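
As a toy illustration only (modern neural translators do not literally work by lookup, and the sentence pairs below are invented), the "index of parallel pieces" idea can be sketched like this:

```python
# A toy sketch of the "parallel texts" idea: aligned sentence pairs are
# remembered in an index, and translating a known piece is a lookup.
# Real systems learn statistical/neural models over millions of pairs;
# the pairs below are invented examples.
aligned_pairs = [
    ("good morning", "bonjour"),
    ("thank you very much", "merci beaucoup"),
    ("see you tomorrow", "à demain"),
]

index = {source: target for source, target in aligned_pairs}

def translate(piece: str) -> str:
    """Assemble a 'translation' from remembered parallel pieces."""
    return index.get(piece.lower(), "<no parallel piece remembered>")

print(translate("Thank you very much"))  # -> "merci beaucoup"
```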

- More and more tasks are now being delegated to AI.

For example, companies use machines to handle incoming calls on hotlines.

Clients dislike this intensely, because it is impossible to explain a non-standard situation to a robot.

In addition, some jobs are lost.

The only winner is the company that implemented the technology and saved on salaries.

- Certainly.

Only it will not really win; this is a very momentary, short-sighted decision, suitable only for a brief period.

There are very few examples of well-made chatbots on the web that actually help customers; the rest serve, in effect, as a way to scare the customer away from the contact center.

In general, there is an interesting phenomenon: the use of AI in different areas often leads to a sharp drop in quality.

Although, in theory, it should be the other way around.

Take, for example, the machine translation we talked about. Today, whichever translation agency you turn to, you will hardly be able to get a good "manual" translation. Even if the agency swears on its mother that it does not use machine translation, it still does; otherwise it would not survive. First they translate with the Yandex or Google services, and then correct the result by hand. The tipping point, when professional translation became unprofitable without an "electronic guest worker", happened in this field quite recently. But the machine often translates badly, in stilted language, without adapting to the structure of the target language. And these flaws are now visible in translations: phrase structures carried over from the foreign language, artifacts of machine work.

At the same time, some publishers release entire books translated by Yandex or Google with little or no editing.

It is horrible.

And yet the books still sell.

That is, people get used to the deteriorated quality.

- Is there a risk that police, doctors and bankers will rely too heavily on AI and stop carefully checking information themselves?

- Yes, there is a very alarming and growing phenomenon of excessive trust in AI decisions, along with a decline in the qualifications and competence of those who lean on AI as a crutch.

There is also the problem of the finality of AI decisions, to which people have delegated their right to decide.

There is no way to understand why a decision was made or how to challenge it.

Let me give you a real example: a person spent two hours filling out a loan application at a large bank.

Finally he pressed the "send" button, and 45 seconds later he was refused.

Obviously, it was not a person who refused (no one could even have read the application in that time) but an AI system.

And in such a case there is nowhere to go to find out the reason, to argue; there is no one to whom you can say: "Wait, at least discuss this with me."

No, the decision is final, no explanation.

Moreover, the client was not only denied a loan at this bank; the refusal was also recorded in his credit history.

- Even though the person, perhaps, simply forgot to tick a box somewhere in the application...

- Or the algorithm itself was simply no good; how would you know?

It must be understood that although the algorithm appears to be deciding, its policy and ethics, its principles, are still put into it by people, who can make mistakes or act maliciously.

Yet on the basis of these errors the machine begins to govern people, and its decisions cannot be disputed; they are final.

As a result, a person becomes a powerless slave of such systems.

- How to solve this problem?

- Artificial intelligence systems must be prohibited from independently making final decisions about people; transparency of AI algorithms must be required; and any communication with an AI system must always be explicitly labeled, so that the user understands whom he is dealing with.

To this end, we at the Human Rights Council this year wrote a Concept for the Protection of Citizens' Rights in the Digital Environment.

The corresponding instruction was given to the HRC by the President in January 2021; I then assembled a working group of lawyers, HRC members, and representatives of various departments and public organizations.

By July we had written the concept and agreed it with the government.

Then the document was handed to the presidential administration; it is now being approved by various departments, the Security Council and the FSB.

We believe that in 2022-2023 we need to create, in one form or another, a Digital Code that would protect the rights of citizens in the digital environment and define the rules of behavior in it.

By the way, in China this fall, private companies were prohibited from using collected personal data to discriminate against people, for example on price. Large platforms had long tried to charge users different prices for the same product or service depending on their income level, as calculated from their data. In China, gigantic fines have now been imposed for this.

And this is only part of the risks to citizens' rights in the digital environment; in fact there are a great many of them, including social ones.

For example, someone will deduce from a woman's search queries on the Internet that she is pregnant, and she will stop being hired.

Or they will establish that a person is looking for a cancer cure, and he will immediately be besieged by charlatans who will drive him into the grave with their sham remedies.

In the zone of "digital risks" are the elderly and minors, who are very vulnerable to "scammers".

There are a huge number of vulnerable groups of the population who can become victims of discrimination and deception.

- Artificial intelligence actively reproduces all the negative stereotypes that exist in society; there are plenty of examples.

Why can't such things be programmatically "switched off" in AI?

Why not "raise" it to be more moral?

- Yes, people impose their ethics on AI. Suppose, for example, you train an AI for recruiting. You feed it data about the kinds of people your large corporation hired before, what their career paths were, where they came from, what results they produced at work, and how long on average they stayed with the company before being dismissed. The neural network assimilates the established practice and writes into its probability coefficients that, for example, it is unprofitable to hire black people, women, the sick or the elderly - and there you have discrimination. That is, if you have delegated the final decision to the AI. There are already many such examples; in the USA there are regular lawsuits over such cases of discrimination.
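
To make this concrete, here is a toy, self-contained sketch with entirely fabricated data: a plain logistic regression fitted to biased historical hiring decisions learns a strongly negative coefficient for group membership and thus reproduces the discrimination.

```python
# A toy illustration of how a model trained on biased historical hiring
# decisions reproduces the bias. All data here is fabricated.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
qualification = rng.normal(size=n)
group = rng.integers(0, 2, size=n).astype(float)  # 1 = historically rejected group

# Biased historical labels: qualification helped, group membership hurt.
hired = (qualification - 1.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(float)

X = np.column_stack([qualification, group])
w, b = np.zeros(2), 0.0
for _ in range(2000):  # plain logistic regression by gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * X.T @ (p - hired) / n
    b -= 0.5 * (p - hired).mean()

print(w)  # the coefficient on "group" comes out strongly negative: the model
          # has simply memorized the old discriminatory practice
```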

Of course, a programmer can set the desired AI policy: simply write in some rules, introduce quotas for discriminated categories.

But then a reverse wave of claims will begin; someone else will ask: why did you give a quota to this category when I am a better fit in terms of qualifications?

Isn't hiring about qualifications?

- This is more of an American story, but we, too, have discrimination, for example against older people in hiring.

Can such things somehow be eliminated with the help of AI?

- In fact, the worst thing is not that AI supposedly assimilates human stereotypes, but that it genuinely calculates, from objective data, that one person is "less profitable" than another.

AI treats a person as a commodity, as a vector.

- That sounds like something out of a dystopia...

- But in fact, many of those who actively promote digitalization really do perceive a person as a vector, a set of parameters.

Especially AI developers.

And some people's parameters are "worse": the elderly, the disabled, those who are often sick, pregnant women.

Even though the law expressly forbids discriminating against people on such criteria.

But how can you prove that you were refused precisely on these grounds?

That is, an audit of AI systems is needed: what criteria are built into them?

But we do not yet have any rules for the digital space; everyone builds whatever he wants.

And there is no institution of independent expertise for AI policies and personal data flows.

That is wrong.

In any developed industry, an institution of independent expertise appears.

- There is another important aspect: the use of AI for face recognition. Earlier, the European Parliament adopted a resolution on the need to introduce an EU-wide ban on automatic face recognition in public places. According to the MEPs, facial recognition systems threaten fundamental rights and freedoms such as privacy. To what extent are these fears substantiated?

- Of course they are, because we have not delegated to anyone the right to recognize our faces. This practice is usually justified by security considerations, the need to investigate crimes. However, far more data is collected than security requires. The simplest example is the so-called school shootings and the role of street cameras in them. More precisely, their non-role: both the "Kazan shooter" and the "Perm shooter" walked to the scene of the crime right down the street, openly holding weapons. Bunches of cameras were hanging there that could, and probably did, recognize their faces. But what good did that do, when it was not a face that needed to be recognized but a gun? Such a narrow but far more useful function could have saved lives...

That is, recognizing everyone indiscriminately, as if all people were criminals, is wrong.

Recognition is needed within a very narrow framework.

We have a law on personal data which states that citizens' personal data may be collected and used only within the framework of a stated task, and that creating unified databases "in general", "to know everything about everyone", is not allowed.

Relatively speaking, if you are collecting data on COVID patients, fine, but destroy it when those people recover.

And do not transfer this data either to "ecosystems" or to other government agencies.

Hopefully, when we create "traffic rules" for the digital realm, blanket face recognition will be banned.

Let me remind you that recognition is not just capturing an image but "attribution": matching it to a specific name, surname and passport data.

- So recognition should be targeted, for a specific purpose?

- Yes, for example, if we are talking about people who are already wanted.

Or, for example, if something happened in a public place, then you can identify the participants in the incident retroactively, from the recordings, within a radius of, relatively speaking, 500 meters.

Or if a person behaves suspiciously: staggers, pesters passers-by, carries a weapon, and so on.

But not everyone indiscriminately, as is done now.

I believe banks should not collect biometrics either, because this data can fall into the hands of fraudsters just as easily as the databases of phone numbers and full names did before.

Such databases are simply sold off by unscrupulous bank employees.

- I would like to talk about the benefits of AI.

How will the development and implementation of AI affect the economy and social sphere?

Will this be the impetus for the development of these areas?

What are the main benefits of AI?

- Here everything is the same as with other technical innovations and breakthroughs, cars, airplanes... If we bring this field within the framework of the law, introduce rules of behavior and reduce the social risks, AI can bring many benefits in different areas.

These include diagnosing diseases, searching for criminals, and managing the economy, transport, housing and utilities, and regional security.

The industry has high hopes for AI.

AI is already saving a lot of money in so-called predictive analytics.

For example, AI can be taught by example to find defects and aging metal in structures, mechanisms, rolling mills, blast furnaces and so on.

Parts can then be replaced in advance, without letting things reach a disaster or an emergency production stoppage.

A practical example: blast furnaces have special nozzles (lances) which, during smelting, feed carefully measured volumes of oxygen and water into the furnace.

If a lance "tires" and burns out during smelting, a hole forms through which air enters uncontrollably, and the quality of the metal deteriorates.

Hot swapping is impossible.

But if, with the help of sensors and a trained AI, it is determined in advance that a lance will soon burn out, even before the first external signs appear, it can be replaced before smelting begins and large losses prevented.
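
A rough sketch of how such a predictor could work, with fabricated sensor features and a deliberately simple nearest-centroid model rather than any actual industrial system:

```python
# A toy sketch of the predictive-maintenance idea: learn from labeled
# sensor histories which readings precede a burnout, then flag a lance
# for replacement before smelting starts. All numbers are invented.
import numpy as np

rng = np.random.default_rng(3)

# Historical sensor records: [temperature drift, flow variance], with a
# label saying whether the lance burned out during the following melt.
healthy = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(200, 2))
failing = rng.normal(loc=[1.5, 1.2], scale=0.3, size=(40, 2))

X = np.vstack([healthy, failing])
y = np.array([0] * 200 + [1] * 40)

# Nearest-centroid "model": the simplest possible trained predictor.
c_ok, c_bad = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

def needs_replacement(reading):
    """Flag the lance if its reading is closer to the failure pattern."""
    return np.linalg.norm(reading - c_bad) < np.linalg.norm(reading - c_ok)

print(needs_replacement(np.array([1.4, 1.0])))   # True: replace before the melt
print(needs_replacement(np.array([0.1, -0.2])))  # False: the lance is fine
```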

There are a lot of similar examples of industrial applications of AI.

Examples of the opposite kind can also be cited.

For example, many people suspect that a car navigator often gives not the route optimal for the user, but treats him as a point in the general flow and suggests the trajectory optimal for reducing congestion.

This is, in effect, an attempt to control masses of people using artificial intelligence.

And who is it to rule over everyone?

After all, we install this application on a smartphone expecting it to help us; we trust the AI, and at a certain moment it begins to decide for us what is best for us.

Or another example.

There is constant talk about the use of AI in medicine.

Let's say you are ill and wear a bracelet that monitors blood pressure, blood sugar and so on.

However, the user agreement for this bracelet, which no one reads, may state that you authorize the transfer of this data to third parties.

As a rule, by the way, it contains exactly such a clause.

And these "persons" can be, for example, a bank, which will now offer you loans at an increased rate.

Or insurance companies that will inflate the cost of insurance for you as a sick person.

As a result, it turns out that we begin to be controlled through the collection and analysis of our personal data.

In general, many supporters of the widespread introduction of AI are convinced that society can be controlled algorithmically.

They even have a special term for it: managing the "individual trajectories" of students, patients, citizens in general.

- Sounds like a euphemism.

- Because the "trajectory" of a person -

it is, in fact, his destiny.

This is an attempt to control fate.

Such people are trying on the role of God.

And that is definitely not permissible.