Facial recognition, speech recognition, algorithm-driven content recommendation... Artificial intelligence (AI) technology is rapidly being integrated into social life. With the vigorous development of the intelligent industry, massive amounts of data are being collected, stored and used in the course of people's daily lives.

Data security and privacy protection issues have thus become serious challenges in the development and application of artificial intelligence systems.

While enjoying the efficiency and convenience that artificial intelligence brings, how can the barriers protecting personal privacy and public data security be further strengthened?

At the recently held 11th Wu Wenjun Artificial Intelligence Science and Technology Award ceremony and the 2021 China Artificial Intelligence Industry Annual Conference, a number of experts and scholars discussed these questions.

Artificial intelligence faces many data security challenges

According to Pan Yunhe, an academician of the Chinese Academy of Engineering, the six elements of data, computing power, algorithms, knowledge, application and theory are interdependent and together constitute a new ecosystem for the development of artificial intelligence 2.0 in China.

Among these, the importance of data is self-evident.

However, as artificial intelligence continues to permeate work and everyday life, the collection, storage and application of data face a growing number of risks.

"In a nutshell, data security issues are reflected in five aspects: reproducible, easy to leak, wide-ranging, harmful, and difficult to supervise. This is determined by the characteristics of the data." Dong Guishan, researcher at the 30th Research Institute of China Electronics Technology Group Say.

In terms of specific harms, the most common violation is a breach of data confidentiality. In the financial sector, for example, criminals steal the transaction records and wealth-management data of AI system users over the Internet to profile their assets and infer their willingness to pay from their online behavior, allowing them to quickly and precisely identify victims and target groups for financial fraud. Through various network vulnerabilities, attackers may also steal large volumes of users' fingerprints, iris scans, facial images, body-shape data and other private information for criminal purposes. Some mobile applications likewise collect users' personal information illegally for commercial gain.

On the other hand, there are also violations of data integrity and availability, such as tampering with, falsifying or interfering with big data through technical means, or feeding false training data into deployed artificial intelligence systems so that they produce results contrary to their intended goals.

"Using the obtained data, it is also possible to change, splicing, and create highly realistic fake video or audio for video, voice fraud, and even disrupt social order and stability. ” said Wu Shenkuo, assistant to the dean of the Beijing Normal University Internet Development Research Institute and executive director of the International Center for Internet Rule of Law.

In fact, the threats facing the collection, storage and use of AI data not only infringe upon the legitimate rights and interests of citizens, but are also detrimental to the development of AI-related industries.

Liu Zhi, a professor at the School of Information Science and Engineering of Shandong University, gave an example: developing intelligent ultrasound technology, in which robotic arms perform ultrasound scans of the thoracic and abdominal cavities, requires feeding the system a large volume of human ultrasound scan data for training. If that training data is contaminated with many abnormal or non-standard samples, the final performance of the technology suffers greatly, and this has become a key factor constraining the deployment of artificial intelligence technology.
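
The effect Liu Zhi describes can be illustrated with a minimal sketch. The synthetic dataset, the 30% corruption rate and the logistic-regression classifier below are hypothetical stand-ins, not the ultrasound system he refers to; training the same model once on clean labels and once on partially corrupted labels shows how contaminated training data drags down accuracy.

```python
# Minimal sketch: how abnormal or mislabelled training samples degrade a model.
# The dataset, noise rate and classifier are illustrative assumptions only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a labelled scan dataset.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(train_labels):
    """Fit a classifier on the given training labels and report test accuracy."""
    model = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    return accuracy_score(y_test, model.predict(X_test))

# Flip 30% of the training labels to simulate abnormal / mislabelled samples.
noisy_labels = y_train.copy()
flip_mask = rng.random(len(noisy_labels)) < 0.3
noisy_labels[flip_mask] = 1 - noisy_labels[flip_mask]

print("clean training data  :", train_and_score(y_train))
print("30% corrupted labels :", train_and_score(noisy_labels))
```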

Security legislation and regulation enter the fast lane

In an intelligent era of resource sharing and information exchange, the law is undoubtedly the red line for tightening security barriers.

  "Under the challenge, we can see that China has formed a multi-level and multi-dimensional legal and regulatory framework in the field of artificial intelligence data governance." Wu Shenkuo introduced that the Civil Code implemented in recent years has limited rights to portrait rights, voice rights, privacy rights and The protection of virtual property has given clear regulations, and the successive implementation of the Data Security Law, the Network Security Law and the Personal Information Protection Law shows that my country has entered the fast lane in terms of data security legislation, and the "Network Data Security Management Regulations" " was also included in the State Council's legislative work plan this year.

  However, in the opinion of experts, there are still some issues that need to be clarified in terms of legal regulation.

One such issue is how to assess the application value of data and make ethical judgments about it. Liu Zhi gave an example: intelligent medical applications often require large amounts of medical data, yet the ownership and usage rights of such data remain highly uncertain. There is an urgent need to confirm these rights and clarify the permissible scope of use for each type of data, so that, under the supervision of the rule of law, such big data can benefit society to the greatest extent.

  "Compared with the past, the evaluation of the application value and ethical judgment of these data is a new requirement that has never appeared, so it is no longer 'technology belongs to technology, law belongs to law', but a process of comprehensive evaluation of value is required. "Wu Shenkuo said.

In the view of Hui Zhibin, director of and researcher at the Internet Research Center of the Shanghai Academy of Social Sciences, artificial intelligence is a future-oriented technology, and it must be advanced in a way that ensures human safety.

This requires putting people first while also establishing clear responsibilities. Issues of privacy protection and algorithmic fairness call not only for legal regulation but also for technical support. All of this still demands the efforts of all parties, and there is a long way to go.

In this regard, Qing Yu, deputy director of the Science and Technology Committee of China Electronics Technology Group Co., Ltd., noted that during the 14th Five-Year Plan period, China has launched the "Cyberspace Security Governance" key special project under the national key research and development program to carry out basic research on a series of security challenges.

In the area of security governance, the project targets the problems of data monopoly, abuse and leakage in cyberspace, focusing on breakthroughs in core technologies such as the protection of important data, privacy protection for personal data, and the security of cross-border data flows, which are expected to provide important solutions to the problems described above.

Legal regulation and standard setting urgently need improvement

  "It is undeniable that the development of artificial intelligence has brought a series of new legal issues." Wu Shenkuo suggested that in the long run, it is necessary to build a specific and specialized legal system, and even legislate for artificial intelligence.

For the time being, data security problems that have already arisen or may arise can be addressed in two ways: on the one hand, by reviewing and analyzing China's existing legislative norms and amending current regulations or adding enforcement guidelines and judicial interpretations to cover them; on the other hand, by conceiving and designing urgently needed special rules and introducing new dedicated provisions from the perspectives of both international and domestic governance.

  At the same time, Wu Shenkuo believes that when conducting legal regulation, intervention in the industrial field should also abide by the principle of technology neutrality. For security risks that do not exceed the equivalence, control and tolerance of the society, intervention and suppression should not be expanded; legislation should be forward-looking, Pay attention to prominent potential risks, and explore appropriate early interventions, such as the identification and blocking of illegal preparatory behaviors; in the process of legislation, law enforcement, and judicial processes, attention must be paid to realizing the systematic interconnection and application extension of various legal norms.

  According to Chen Tieming, director of the Zhejiang Cyberspace Security Innovation Research Center, the formation of data security standards is also crucial.

"In the final analysis, in order to make the data security of artificial intelligence orderly and standardized, it is necessary not only to classify and grade the data, but also to have a clear organization to standardize the data, and as the technology moves forward, the standards must also be As it continues to grow.”

Dong Guishan suggested that, as artificial intelligence regulations and data standards are developed, provisions for the effective circulation and utilization of data should also follow in a timely manner.

"With a good data ecology and data security ecology, powerful artificial intelligence can be nurtured, and data circulation and application in the same industry or across industries can support more complex artificial intelligence training and data transformation requirements, thereby promoting artificial intelligence and The development of digital technology.”

(Reporter: Yang Shu)