China News Service, December 22 (Reporter Li Jinlei) In the digital 3.0 era, human life is increasingly inseparable from artificial intelligence.

How can artificial intelligence technology be prevented from being "used for evil"?

Will more and more virtual humans replace some real jobs?

How can algorithmic discrimination be overcome, so that artificial intelligence becomes knowable, credible, controllable, and usable?

  Focusing on these hot-button issues, Liang Zheng, deputy dean of the Institute for AI International Governance at Tsinghua University and director of its Artificial Intelligence Governance Research Center, gave an exclusive interview to Chinanews.com.

Liang Zheng, deputy dean of the Institute for AI International Governance and director of the Artificial Intelligence Governance Research Center, Tsinghua University.

Photo provided by the interviewee

Preventing artificial intelligence technology from being "used for evil"

  AI face swapping, AI voice conversion, 3D reconstruction, intelligent dialogue... Deep synthesis technology is being applied ever more widely, giving rise to services such as beauty filters, film and television production, intelligent customer service, virtual anchors, and the metaverse. But it is also being maliciously used: some bad actors employ it to produce, copy, publish, and disseminate illegal and harmful information, to slander others and demean their reputation and honor, and to impersonate others to commit fraud.

  How can these potential abuses be resisted?

Liang Zheng said that AI face swapping is a typical application of deep synthesis.

Recently, three departments jointly issued the Provisions on the Administration of Deep Synthesis of Internet Information Services, which set out clear requirements for regulating deep synthesis services.

  According to the provisions, services that offer the function of prominently editing biometric information such as faces and voices must prompt the user to notify the individual being edited, as required by law, and obtain that individual's separate consent.

Services with functions that generate or significantly alter information content, such as intelligent dialogue, synthesized voices, face generation, and immersive simulated scenes, must place a prominent mark in a reasonable position or area of the generated or edited content, reminding the public that the content is synthetic, so as to avoid public confusion or misidentification.

  Liang Zheng pointed out that collecting and using personal portraits from the public internet without consent is illegal.

If similar applications exist on a platform, the platform has a responsibility to supervise them.

Even when the technology is used legally, the output must be labeled so that everyone knows it is not real; this can, to some extent, reduce the cognitive bias and social misdirection caused by deeply synthesized content.

If a platform provides this type of service itself, it also needs corresponding tools to identify content as not real, which places higher demands on platforms.
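As a rough illustration of the labeling requirement described above, a platform-side disclosure step might look like the sketch below. The function name and notice wording are invented for illustration, not language from the provisions themselves.

```python
def label_synthetic(content: str, kind: str) -> str:
    """Prepend a prominent disclosure notice to generated or edited content.

    A minimal sketch of the labeling requirement; the notice wording and
    function name are illustrative assumptions, not the regulation's text.
    """
    notice = f"[AI-generated {kind}: this content is synthetic, not real]"
    return notice + "\n" + content

# A synthesized-voice clip would carry the notice ahead of its transcript.
labeled = label_synthetic("Good evening, here is tonight's news.", "voice")
```

In practice such marks would be embedded in the media itself (an on-screen banner or audio watermark) rather than prepended as text, but the principle is the same: the disclosure travels with the content.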

  "It must be not only lawful and compliant, but also consensual." Liang Zheng said it is necessary to further improve laws and regulations, strictly enforce the relevant provisions, and establish a sound punishment system, and individuals must develop an awareness of their rights and protect their legitimate rights and interests through remedy channels, thereby reducing the malicious exploitation of AI.

File photo: human-computer interaction draws visitors at the 2022 World Artificial Intelligence Conference.

Photo by Tang Yanjun

Virtual humans have begun to replace some real jobs

  In recent years, more and more virtual humans have been brought to market: virtual spokespersons, virtual anchors, virtual actors and singers.

People worry that virtual humans may replace some real jobs in the future.

  "In entertainment and news, virtual humans are being applied more and more, because they are a low-cost, high-efficiency option." In Liang Zheng's view, the rise of virtual humans is an inevitable trend, and they have already begun replacing some real jobs.

For example, virtual news anchors are now common, and telephone customer service increasingly uses intelligent voice services.

"This kind of replacement is bound to happen in part."

  There are also some hidden dangers behind the development of new things.

Liang Zheng reminded the public not to mistake the fake for the real. A virtual human's image, content, and dialogue are all products of training, or are pre-scripted; treating them as a real person would be misleading.

  "For example, you must be very careful when talking to a dialogue robot about sensitive topics. Because there is no consciousness behind it, it simply follows your lead; without intervention it can mislead you, yet it has no way to bear responsibility. That is why deeply synthesized virtual images must be labeled and their usage scenarios restricted. In sensitive, high-risk scenarios such as education and medical care in particular, human intervention is a must," Liang Zheng said.

  Liang Zheng believes the application of virtual humans should be differentiated by scenario. In low-risk scenarios such as games, weather forecasts, and news broadcasts, there is little problem. In high-risk scenarios, however, great care is required: the boundaries of what a virtual image can and cannot do must be drawn clearly. A psychological companion robot, for example, needs an assessment of the long-term psychological impact of what it says.

The next generation of AI should be explainable

  Artificial intelligence is inseparable from algorithms, and the topics of algorithmic fairness and algorithmic discrimination have attracted much attention.

Some foreign studies and media reports have noted that algorithm developers, or algorithms themselves trained on accumulated big data, may discriminate on the basis of race, gender, culture, and more.

  In this regard, Liang Zheng said that algorithms are trained on data, and bias in the data is one of the main sources of bias in the algorithms themselves.

In that sense, algorithmic discrimination originates not in the algorithm but in society itself, transmitted through the algorithm's technical characteristics: trained on a large amount of already-biased data, the algorithm will likewise make biased judgments.

  Liang Zheng pointed out that in order to change this prejudice, we must first correct social prejudice.

Therefore, when using data, human intervention is required. Statistically, simulated data can be used to restore balance; more fundamentally, artificial intelligence must be endowed with common sense and take causal factors into account, rather than relying simply on the data.
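The mechanism Liang Zheng describes can be seen in a toy sketch. The data, groups, and "model" below are invented purely for illustration: a model that merely absorbs the majority pattern per group reproduces the data's bias, and oversampling (one form of simulated data) can statistically rebalance it.

```python
from collections import Counter

def train(samples):
    """Learn the majority historical label per group: a stand-in for how
    a model absorbs whatever patterns its training data contains."""
    by_group = {}
    for group, label in samples:
        by_group.setdefault(group, []).append(label)
    return {g: Counter(labels).most_common(1)[0][0]
            for g, labels in by_group.items()}

# Hypothetical historical approval records: group B was rarely approved,
# so a model trained on them simply reproduces that pattern.
biased = ([("A", "approve")] * 8 + [("A", "reject")] * 2
          + [("B", "approve")] * 2 + [("B", "reject")] * 8)

model = train(biased)           # {"A": "approve", "B": "reject"}

# One statistical mitigation: oversample the under-represented outcome
# for group B before training, restoring balance in the data.
rebalanced = biased + [("B", "approve")] * 7
fair_model = train(rebalanced)  # {"A": "approve", "B": "approve"}
```

Oversampling only papers over the imbalance, which is Liang Zheng's deeper point: a real fix also needs common sense and causal reasoning, not data corrections alone.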

  "Absolute algorithmic fairness does not exist." Liang Zheng believes that algorithmic fairness is multi-dimensional and often faces the challenge of an "impossible triangle."

In the field of public management, it is basically impossible to achieve fairness of starting point, fairness of process, and fairness of outcome at the same time.

Absolute fairness is simply impossible in algorithm design; the key is what kind of goal you want to pursue.

  Liang Zheng said that it is necessary to make artificial intelligence knowable, credible, controllable, and usable.

Knowable means explainable. This is the most basic, and the most complex, level. The next generation of artificial intelligence should be explainable; if it cannot be explained, that will limit its application in many areas and limit everyone's trust in it.

Only when an algorithm can be explained can it be used with confidence.

  "The biggest problem now is the black box. Existing artificial intelligence is trained on data; you don't know what happens in the middle. The new generation of artificial intelligence should add knowledge, and even our logic, on top of the data, so that it operates in a way we can understand," Liang Zheng said.
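To make the contrast concrete, here is a toy sketch of an interpretable model. The field names and thresholds are invented for illustration: the point is that every decision is returned together with the rules that produced it, something a black-box model trained purely on data cannot offer.

```python
def explainable_predict(applicant):
    """Toy interpretable model: each decision comes with the rules that
    fired, so a human can audit the reasoning step by step.
    (Field names and thresholds are invented for illustration.)"""
    reasons = []
    if applicant["income"] >= 50000:
        reasons.append("income >= 50000")
    if applicant["debt_ratio"] <= 0.4:
        reasons.append("debt_ratio <= 0.4")
    decision = "approve" if len(reasons) == 2 else "reject"
    return decision, reasons
```

A rejected applicant can be told exactly which rule failed; with a black-box model, neither the user nor the operator can trace the decision back to an inspectable cause.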

(End)