On March 1, 2022, the "Regulations on the Administration of Algorithm Recommendations for Internet Information Services" (hereinafter referred to as the "Regulations") jointly issued by the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the State Administration for Market Regulation came into effect.

In the era of the big data economy, algorithms are the core means by which personal information processors collect and process data, push information, and allocate resources.

Once algorithms malfunction, they pose serious threats to national interests, social and public interests, and the legitimate rights and interests of citizens.

Therefore, the implementation of the "Regulations" is both timely and responsive to practical needs.

  An algorithm is the core operating logic of a computer: the sum of a set of data-processing instructions organized around a design purpose. At the underlying layer, it bears the character of a specialized technology.

Algorithms are also an enabling technology, and their application scenarios are the fields they empower. When algorithms are applied to specific business models, application risks arise.

The algorithmic relationship is precisely this superposition, or combination, of "method" and "domain".

Therefore, algorithm regulation cannot ignore the "method": the design, testing, and evaluation of algorithms are scientific and technological activities, and the "algorithm black box" and "algorithm hegemony" are partly caused by irregularities in those activities themselves. Nor can algorithm regulation speak only of the "method", because it is the endless application scenarios of algorithms that allow them, in practice, to affect our rights and interests and the free development of the person.

The "Regulations" fully grasp both the endogenous risks and the application risks of algorithms and design targeted risk-prevention rules.

  First, the control of algorithms' endogenous risks focuses on their design and operation stages.

The design, testing, and evaluation of algorithms cannot be performed by non-professionals, and this "exclusivity" shows that algorithmic activity is a specialized technical undertaking.

The emergence of persistent problems such as the algorithm black box and algorithmic discrimination is also partly due to the complexity of these scientific and technological activities.

Therefore, regulating such scientific and technological activities requires special rules designed from the perspective of preventing technological risk.

The "Regulations" encourage the use of algorithms to spread positive energy and to resist illegal and harmful information, prohibit algorithm models that induce users into addiction or excessive consumption in violation of ethics and morality, and thereby promote the use of algorithms for good.

This shows that algorithm research and development, as a scientific and technological activity, cannot rest on instrumental rationality alone; it must also embody value rationality.

  At the level of specific rules, the "Regulations" also directly regulate the design and operation of algorithms from a technical perspective.

For example, Article 9, paragraph 1 requires algorithm recommendation service providers to establish and improve feature databases for identifying illegal and harmful information, and to improve the standards, rules, and procedures for entries into those databases; Article 10 requires providers to strengthen the management of user models and user tags; Article 12 encourages providers to comprehensively employ strategies such as content de-duplication and scattering interventions, and to optimize the transparency and interpretability of rules governing retrieval, sorting, selection, push, and display.
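To make these measures concrete, here is a minimal, hypothetical sketch of two of them: screening candidate items against a feature database of known illegal or harmful content (Article 9) and de-duplicating what remains before it is pushed (Article 12). All function and field names are illustrative assumptions, not terms from the "Regulations".

```python
# Hypothetical sketch: feature-database screening plus de-duplication
# in a recommendation pipeline. Names are illustrative only.

def screen_and_dedupe(candidates, bad_fingerprints):
    """Drop items whose fingerprint appears in the bad-content
    database, then remove duplicates while preserving order."""
    seen = set()
    result = []
    for item in candidates:
        fp = item["fingerprint"]
        if fp in bad_fingerprints:   # Article 9: feature-database screening
            continue
        if fp in seen:               # Article 12: content de-duplication
            continue
        seen.add(fp)
        result.append(item)
    return result

feed = [
    {"id": 1, "fingerprint": "a"},
    {"id": 2, "fingerprint": "b"},
    {"id": 3, "fingerprint": "a"},   # duplicate of id 1
    {"id": 4, "fingerprint": "x"},   # flagged in the database
]
print(screen_and_dedupe(feed, bad_fingerprints={"x"}))
```

Real providers would of course use far more sophisticated content fingerprinting and model-based classification; the point is only that both obligations reduce to checks applied before content reaches the user.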

  It is particularly noteworthy that Article 24 of the "Regulations" establishes an algorithm filing system, requiring algorithm recommendation service providers to file information such as the algorithm type, the algorithm self-assessment report, and the content to be disclosed.

Risk assessment and experimental data recording are the first steps in the risk control of scientific and technological activities.

Without risk assessment of algorithmic activities, risk cannot be controlled at its source; without records of algorithm design and testing, regulators cannot effectively assess, trace, and verify complex algorithms.

In a certain sense, the filing system compels algorithm recommendation service providers to conduct risk assessments and keep records throughout the process, which both urges operators to weigh compliance at every stage of algorithmic activity and helps law enforcement agencies supervise those activities.

  Second, the governance of algorithm application risks covers the entire life cycle of algorithm operation.

Algorithms are not only science and technology but also a means of empowerment. Beyond the inherent risks of the technology, algorithms are embedded in commerce and public utilities, and their continuous impact on legislative and governance models inherited from the industrial age has produced many governance pain points.

Procedural control must therefore also be implemented at the application stage.

However, algorithm application has a particularity that distinguishes it from algorithm development: the application directly affects the algorithm counterparty, the person subject to the algorithm.

The application of an algorithm affects the counterparty's legitimate rights and interests, because the algorithm makes decisions based on the counterparty's profile and demands.

To address application-oriented risks, the "Regulations" lay down corresponding rules of conduct (Articles 19 to 21) for scenarios involving minors, the elderly, workers, and big-data-enabled price discrimination against existing customers.

To achieve bottom-up algorithmic governance, the "Regulations" also grant individuals, through the path of rights, the power to confront algorithmic decision-making.

  Article 17, paragraph 1 of the "Regulations" stipulates: "Algorithm recommendation service providers shall provide users with options not targeted at their personal characteristics, or provide users with a convenient option to turn off the algorithmic recommendation service. If a user chooses to turn off the algorithmic recommendation service, the provider shall immediately stop providing the related services." Compared with Article 24, paragraph 2 of the Personal Information Protection Law, this provision further and explicitly establishes a right to refuse the application of algorithms.

  In addition, Article 17, paragraph 2 stipulates: "Algorithm recommendation service providers shall provide users with functions to select or delete the user tags used by the algorithmic recommendation service to target their personal characteristics." This provision is the first of its kind in China and can protect the interests of algorithm counterparties more comprehensively.

A counterparty may not want the provider to stop the recommendation service altogether, but only to bar inferences about specific types of services.

Granting counterparties the right to delete tags therefore meets user demands more comprehensively.
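The two user-facing controls in Article 17, paragraphs 1 and 2, can be sketched as a simple settings object: a switch to turn personalized recommendation off entirely, and finer-grained deletion of individual user tags. This is a minimal illustration under assumed names; the class and field names are hypothetical, not drawn from the "Regulations".

```python
# Hypothetical sketch of Article 17, paragraphs 1 and 2.
# All names here are illustrative assumptions.

class RecommendationSettings:
    def __init__(self, tags):
        self.personalized = True   # para 1: user may turn this off
        self.tags = set(tags)      # para 2: tags used for personalization

    def close_personalization(self):
        """Para 1: user opts out; the provider must stop the service."""
        self.personalized = False

    def delete_tag(self, tag):
        """Para 2: remove one tag without closing the whole service."""
        self.tags.discard(tag)

    def active_tags(self):
        """Tags the recommender may still use for this user."""
        return self.tags if self.personalized else set()

s = RecommendationSettings({"travel", "luxury-goods", "parenting"})
s.delete_tag("luxury-goods")   # keep recommendations, drop one tag
print(s.active_tags())         # only the remaining tags may be used
s.close_personalization()      # full opt-out under paragraph 1
print(s.active_tags())         # no tags may be used at all
```

The design point mirrors the text: tag deletion is the finer instrument, letting a user stay in the personalized service while excluding particular inferences, whereas the paragraph 1 switch ends targeting altogether.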

  At the same time, Article 17, paragraph 3 of the "Regulations" stipulates: "Where an algorithm recommendation service provider applies algorithms in a way that has a significant impact on users' rights and interests, it shall give an explanation in accordance with law and bear corresponding responsibility." For example, in dynamic pricing, if the price recommended to a user is excessively high, this may constitute a significant impact on the user's rights and interests.

The user may then require the algorithm recommendation service provider to give an explanation.

If a violation of civil rights and interests is constituted, the algorithm recommendation service provider shall bear corresponding responsibility in accordance with law.

  The whole-process governance of algorithmic risk seeks new balance points and combinations under the "risk-regulation" framework.

Algorithms are, in essence, code that processes data, an applied science and technology.

But technology is not neutral. Especially when algorithms process not "things" but "personal information", algorithmic activity takes on the attributes of a social activity and carries ethical connotations and social risks.

Therefore, the problem of algorithm regulation stems partly from the professional, instrumental nature of the technology and partly from the value complexity of its application scenarios.

The "Regulations" design rules for both technological risks and application risks, creating a Chinese approach to algorithm regulation under the "risk-regulation" framework.

  (Author: Lin Huanmin, Lecturer at Guanghua School of Law, Zhejiang University, Researcher at the Institute of Industrial and Informatization Rule of Law)