— Artificial intelligence is being used ever more widely, and legal regulation is not keeping pace with the speed of technological development; you spoke about this recently at the Science Café organized by the Andrey Melnichenko Foundation. What fundamental features of AI dictate the need to develop separate laws and rules?

"The main difference between artificial intelligence and other technologies is its ability to learn, which was previously the prerogative of only biological beings. And at the same time, AI is able to process huge amounts of information very quickly.

Judging by the range of tasks AI is capable of solving, this technology is indeed the basis for a new stage of scientific and technological progress. First and foremost, AI is aimed at automating routine operations, and its use increases the efficiency of work.

In addition, there is hope today that AI will also improve the quality of decision-making: neural networks can often find relationships in volumes of data that humans are unable to analyze. Like any other technology, it is a double-edged sword, all the more so while the implications of AI are not yet fully understood.

That is why lawmaking lags behind the pace of technological development. It is not even clear yet whether AI regulation should be set apart as a distinct body of rules, or whether a number of general legal and ethical norms should simply be extended to this area.

Photo: a Tesla car (globallookpress.com / © Keystone Press Agency / Cfoto)

— In 2018, a scandal broke out in the United States: a Tesla operating on Autopilot crashed, and its owner died. The relatives blamed the manufacturer for the tragedy, but Tesla refused to pay compensation, arguing that the driver was still required to watch the road. Once a legal consensus is reached, how should such disputes be resolved?

— So far, there is no unified position on this issue in the legal community. Elon Musk drew an analogy between self-driving cars and elevators: if an elevator breaks down, no one sues its manufacturer; claims are brought against the elevator's installers and the company that maintains it. The Swedish automaker Volvo, by contrast, has said that it takes full responsibility for accidents involving its unmanned vehicles.

But it is clear that in any case unmanned technologies carry particular risks, especially given the possibility of a hacker attack on such a system. There are no uniform rules yet; they are only taking shape, but the general tendency is toward a detailed investigation of each such accident.

— The issue of copyright in connection with AI raises serious legal disputes. For example, the developer of ChatGPT was recently sued by writers, among them George R.R. Martin, because their books had been used to train the neural network. At the same time, it is often argued that a person also needs a cultural foundation, a familiarity with the works of great authors, in order to create a book or a painting. How should this legal conflict be resolved? And who owns the copyright to an image created by a neural network?

"When we evaluate any artifact in terms of whether it is art or not, we are largely guided by what goals the author pursued. Generative neural networks can create original objects, but in collaboration with a human. Therefore, it is appropriate to talk about the authors' "collective" here. The source of emotional rethinking of the world, the source of meanings that predetermine visualization, for example, or a verbal image, is still a person. And AI doesn't have its own intention to convey something to the viewer or reader, it's just a tool. Accordingly, the tool cannot be subject to a right or copyright infringement.

Photo: a neural network image generator (Gettyimages.ru / © CentralITAlliance)

Suppose, for example, that we wanted to attribute authorship and the corresponding rights to the neural network as a computer program. Under the Civil Code of the Russian Federation, however, only a human being can be an author, so formally the question already has an answer. It still remains to be worked out who owns the rights to the images, music, and texts created by a neural network: the creators of the computer program, or the users who apply it as a tool? There is no answer to this question yet, and the use of generated objects can bring profit; who should that belong to? There is no legal clarity in this area, which is why I say the legal sphere is seriously lagging behind the pace of development of new machine learning technologies.

— And in the case of deepfakes and false information in general, does responsibility already lie with the distributor of the content?

— Yes, if the distributor does not explicitly indicate that it is a deepfake, they are passing the fake off as a genuine video or real news.

— In Russia, the idea of a "social rating" is rejected by many people. At the same time, social and even psychological scoring based on a person's digital footprint is one of the areas where AI is applied. Is this ethically and legally permissible? Can it be equated with spying on a person?

— We can consider this issue using the example of the education system. In today's dynamic world a person must learn constantly, and this means not only formal education but also self-education, various courses, and so on. AI can help build an individual educational trajectory for a person and form their educational "portrait". In Russia, a verified applicant's portfolio has been in use since 2021: a service that digitally summarizes a student's educational, sports, and creative achievements over their school years.

Photo: students in a classroom (Gettyimages.ru / © skynesher)

The problem is that the criteria by which AI evaluates people are in some cases very opaque, and a person assessed by a neural network cannot ask why it decided the way it did. This, understandably, worries people.

Social differentiation is another worrying and dangerous aspect. When a mass standardized assessment is carried out, we can see how educational results look in relation to other pupils, students, and so on; a neural network, by contrast, evaluates a person in isolation from the social context. In the long run, such an approach can lead to the disintegration of society's integrity. Plato had the image of a cave whose inhabitants see only the shadows of objects on the wall and cannot step out to see the real world. The image of Plato's cave in relation to digital culture was once invoked by the late Vladimir Vasilyevich Mironov, Dean of the Faculty of Philosophy.

Another problem is whether a person in such a system will have an open future, or whether it will be entirely determined by their past actions and results.

In addition, AI is changing the methods for solving both psychological and sociological problems: it allows us to move from the practice of asking questions to trying to discern a person's real thoughts and desires behind their routine actions. So far this shows up relatively innocuously, in the form of targeted advertising, but as the number of markers by which human behavior is tracked grows, so will the depth of this monitoring. That will make people more vulnerable to propaganda and to manipulations of various kinds, since information can be presented to each individual in a way calculated for their particular perception. Under such conditions, the level of critical perception of information will fall. So there are a great many risks here.

Photo: a person's digital footprint (Gettyimages.ru / © Jackie Niam)

— There is a lot of talk about "big data" as a valuable resource. It is said that this data is collected, then resold and processed in anonymized form. Should a person have the right to exclude their data from this "circulation"?

"It all depends on how you interpret the gold standard of ethics, i.e. the obligation to obtain a person's voluntary informed consent for any manipulation of him. Voluntariness means the absence of coercion, deception and pressure. People voluntarily use the Internet and neural networks, no one forces them. In terms of awareness, people in general have also known for a long time that their actions in the digital space are being tracked. It remains to be determined what degree of awareness of a person guarantees complete voluntariness in this case. If a person is ill-informed, can his actions be interpreted as a voluntary choice?

Here we can draw an analogy with scientific experiments, psychological ones, for example, whose participants are likewise not given full information about the essence of the upcoming tests; they know only that they are taking part in an experiment. In the same way, by immersing ourselves in the digital environment, we effectively agree to become participants in a large experiment. It simply takes time for people to realize the extent of their involvement in digital tracking technologies.

— In the sensational series Black Mirror, one episode was devoted to the story of a widow who bought a "digital twin" of her late husband: all of his social media posts, diaries, and so on were uploaded to a robot, and a neural network generated speech based on this content. Today this technology is already in use; you can order the creation of a digital "avatar" of yourself or of another person. How ethical is that?

"There is an opinion that virtual reality technologies can be used to help people experiencing post-traumatic stress disorders. For example, such an experiment was conducted in South Korea, when a mother who had lost her daughter was given a "meeting" with a virtual avatar of her child. Perhaps this procedure was another element of the woman's experience of loss. The question is, will this lessen the pain of loss or exacerbate it? It's very individual. In addition, you need to take into account cultural peculiarities – for example, in Korea there are completely different mourning traditions than in Russia. The Confucian tradition involves active "communication" with the spirits of deceased relatives, which we do not have. And new technologies often simply offer some analogues of the manifestations already existing in the cultural tradition. Therefore, from an ethical point of view, first of all, such things should be created individually for specific people, and not just for entertainment.

Photo: a digital avatar of Vladimir Zhirinovsky (RIA Novosti)

With thoughtless mass adoption, we run the risk that instead of providing psychological help, such technologies will foster dependence in some people, making it difficult for them to return to a reality in which their loved one is no longer there.

— The phenomenon of people anthropomorphizing neural networks appeared long ago: such emotions were evoked by the ELIZA program, created in 1966, which simulated a psychotherapist. What social consequences can such humanization of neural networks have?

— The fact is that even experts, psychologists and philosophers alike, have so far been unable to agree on what consciousness and reason are. So in this matter people are guided by everyday notions: if a program draws and writes like a person, then it must be similar to one. In 2013, a good film on this topic, Her, was released, in which the main character falls in love with an operating system, an artificial intelligence that "communicated" in a female voice.

This is a good illustration of how humans can become dependent on AI. Moreover, all commercial products are built on the expectation that users will spend as much time with them as possible. In today's world there is a fierce struggle for people's attention: the longer you spend on a given social network, the more advertising it can show you and the more it earns. And when anthropomorphic machines are created that mimic human behavior, it is hard for us to resist; that is how our psyche works. Such software products give a person a false sense of having replaced traditional social connections. For now we can still track this, but it is an open question whether new generations will be able to trace the border between surrogate and real socialization.

— Speaking of AI, it is difficult not to touch on the topic of weapons: neural networks are increasingly used by all countries in attack drones and other types of weaponry. What ethical issues arise in this regard?

"The most discussed topic in this context is unmanned lethal weapons, and this is what causes the greatest concern. It's not just automating actions that are controlled by humans. This is a situation where autonomous devices do not depend on humans to choose a target and make their own decisions about the strike, which raises a lot of serious questions. It is very important that there is a legal regulation of this area. If used correctly, autonomous weapons could minimize the number of casualties among combatants and civilians, so this is also a medal with two sides. There is a position according to which we should not exclude a person from the procedures of moral decision-making, and striking a blow is precisely a moral choice. In my opinion, the prohibitive legal trend should prevail with regard to such weapons.

Photo: social media (Gettyimages.ru / © SDI Productions)

— How can a balance be maintained between citizens' personal rights and the development of the IT industry? And does such a contradiction exist at all, or is it a contrived argument?

"As with many other technologies, with AI we need to be able to go between Scylla and Charybdis. On the one hand, excessive control and bureaucracy do not slow down research in this area. On the other hand, it is necessary to minimize the risks entailed by the uncontrolled introduction of certain technologies. This is the purpose and meaning of legal and ethical regulation. The examples that we have analysed show that it is hardly possible to develop universal rules for the entire digital industry without taking into account specific specifics. In each case, a thorough ethical review is necessary, as is already the case in the field of biomedical research. Each medical research is approved by the ethics committee of an educational institution, and in the case of AI, we need to build a similar mechanism for social and humanitarian expertise.