Multiple parties work together to solve problems such as "information cocoons," "algorithmic discrimination," and "big data price discrimination" —

Algorithms: Only with "Co-Governance" Can There Be "Sharing" (Integrated View of China)

—— The third report in the "Algorithms in Perspective" series

Reporter: Lu Zehua

Algorithms are a double-edged sword. Used well, they can accelerate the development of the digital society; used improperly, they can destroy the online ecosystem.

How to govern algorithms and avoid the risks they can cause — privacy leaks, "big data price discrimination," "algorithmic discrimination," and "information cocoons" — is a matter of widespread public concern.

Experts believe that algorithm governance practice in recent years shows that steering algorithms with the right values requires joint effort from individuals, practitioners, industry organizations, and regulators.

Individuals

Only by understanding algorithms can you use them well

"Hello, I have watched this short video three times and liked it. Please remember me and send me more content like this..." Zhu Zhen, a product manager at an Internet company in Beijing, left this message in the comment section of a short video.

Though it read like a joke, the effect was almost immediate.

She showed the reporter her feed: sure enough, many similar videos appeared soon afterward.

"In the algorithm's mechanism, comments carry a lot of weight. If you leave a message in the comment area and the keywords are all 'positive,' the platform will respond by recommending more of that content," Zhu Zhen said. In her view, individual users who want to use algorithms well must first understand them.

Consciously increasing or decreasing browsing time, likes, comments, and the like are all ways for users to "communicate" with the algorithm.

Zhu Zhen acknowledged that, as a control mechanism "hidden" beneath the platform, an algorithm's complex and ever-changing logic is hard for users to grasp.

But users can grasp an algorithm's basic logic: first, it records the user's actions; second, it classifies each action as a "positive" or "negative" signal; third, it recommends more of the "positive" and steers clear of the "negative."

Once you understand this logic, you roughly know how to deal with algorithms.
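The three-step logic described above can be sketched in code. This is a purely illustrative toy, not any platform's actual system; the signal names and weights (`like`, `quick_swipe`, and so on) are assumptions made up for the example:

```python
from collections import defaultdict

# Assumed signal weights; real recommendation systems use far more
# complex models. Comments weigh heavily, echoing Zhu Zhen's remark.
SIGNAL_WEIGHTS = {
    "like": 2.0,             # positive signal
    "comment": 3.0,          # positive signal
    "long_watch": 1.0,       # positive signal
    "quick_swipe": -1.0,     # negative signal
    "not_interested": -5.0,  # negative signal
}

def score_topics(actions):
    """Step 1 + 2: aggregate per-topic scores from (topic, signal) records."""
    scores = defaultdict(float)
    for topic, signal in actions:
        scores[topic] += SIGNAL_WEIGHTS.get(signal, 0.0)
    return scores

def recommend(actions, k=2):
    """Step 3: return the k topics with the highest positive scores."""
    scores = score_topics(actions)
    liked = [t for t, s in scores.items() if s > 0]
    return sorted(liked, key=lambda t: -scores[t])[:k]

actions = [
    ("cooking", "long_watch"), ("cooking", "comment"),
    ("gossip", "quick_swipe"), ("travel", "like"),
]
print(recommend(actions))  # ['cooking', 'travel'] — gossip is excluded
```

In this toy model, "communicating" with the algorithm simply means feeding it more of the signals you want it to see, which is exactly what Zhu Zhen's comment did.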

  Li Huan, a college student majoring in computer science, has some experience in using algorithms.

She has registered a main account and an alternate account on each of several commonly used online platforms.

"The main account is for browsing everyday information; the alternate account handles private matters. On the alternate account I try not to like, post, or comment, to reduce my 'digital traces.' And whenever I can browse without logging in, I don't log in, to minimize the risk of privacy leaks."

Li Huan also plays "hide and seek" with the algorithm: "Whenever content I don't want to see appears on my homepage, I swipe past it quickly. I also make a point of using functions such as 'not interested' and 'don't recommend similar content' to set my preferences myself and 'discipline' the algorithm in reverse."
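Li Huan's "reverse discipline" can be illustrated with a toy model in which an explicit "not interested" signal overrides accumulated implicit interest. All class names, weights, and topics here are hypothetical, not any platform's real mechanism:

```python
class FeedPreferences:
    """Toy preference store: implicit signals raise weights,
    explicit 'not interested' signals hard-block a topic."""

    def __init__(self):
        self.weights = {}     # topic -> accumulated interest score
        self.blocked = set()  # topics the user marked "not interested"

    def record_view(self, topic, seconds):
        # Implicit signal: longer viewing raises a topic's weight.
        self.weights[topic] = self.weights.get(topic, 0.0) + seconds / 10.0

    def not_interested(self, topic):
        # Explicit signal: block the topic regardless of past behavior.
        self.blocked.add(topic)

    def rank(self, candidates):
        # Blocked topics never appear, however high their old weight.
        visible = [t for t in candidates if t not in self.blocked]
        return sorted(visible, key=lambda t: -self.weights.get(t, 0.0))

prefs = FeedPreferences()
prefs.record_view("celebrity_gossip", 120)  # watched a lot before...
prefs.record_view("science", 30)
prefs.not_interested("celebrity_gossip")    # ...then opted out explicitly
print(prefs.rank(["celebrity_gossip", "science", "news"]))
# ['science', 'news'] — the blocked topic is gone despite its high weight
```

The design point mirrors Li Huan's practice: explicit controls like "not interested" give the user a stronger lever than quietly swiping past unwanted content.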

Li Huan believes that besides knowing how to guide the algorithm, improving one's self-control is also important.

"When information you're interested in is recommended overwhelmingly, only by exercising moderate restraint, planning your usage time sensibly, and gathering information from a wide range of sources can you truly stay in control of the algorithm."

Duan Weiwen, a researcher at the Institute of Philosophy of the Chinese Academy of Social Sciences, believes that people living in the digital age should learn to regulate themselves and navigate the digital world on their own initiative, rather than being overwhelmed by fragmented information or sinking inextricably into digital entertainment.

Industry

Only with self-discipline can there be a healthy ecosystem

Improving personal digital literacy helps people deal with algorithms, but individual effort alone is not enough.

One survey shows that only about 60% of users believe individual efforts to guide algorithms have any effect.

The reason is that major platforms, to varying degrees, impose "overlord clauses," harvest private data, force location tracking, and so on.

Survey respondents believe that proactive self-governance by companies is a more effective way to avoid algorithm security risks, and they expect companies to strengthen algorithm disclosure and improve algorithm transparency.

"On most platforms, you cannot operate normally without logging into an account. Worse, functions such as covert recording are used to capture user information, which is generally hard for users to detect," Zhu Zhen said. Algorithm applications have penetrated every aspect of people's lives; in pursuit of commercial gain, some platforms use algorithms for "over-recommendation," "traffic fraud," "big data price discrimination," and "deepfakes" — practices that harm the public interest, undermine fair competition, and disrupt social order.

These behaviors seriously damage the industry ecosystem and run against the long-term interests of algorithm service providers themselves.

The industry as a whole has recognized that it must strengthen self-discipline, avoid abuse of the technology, and establish a culture of "responsible innovation."

In recent years, national and local industry organizations have issued many self-regulatory documents, such as the "Self-Discipline Convention on the Application of Algorithms in Internet Information Services" and the "Shenzhen APP Self-Discipline Commitment on Personal Information Protection."

Their main contents include safeguarding netizens' rights and interests, building strong security defenses, and promoting algorithmic fairness.

Several major online platforms have also signed letters of commitment, pledging not to illegally collect or use consumers' personal information and not to exploit their data advantages for "big data price discrimination."

At the same time, major platforms are conducting internal self-examination and correction of their algorithms: some keep optimizing their recommendation mechanisms, improving the diversity of recommended content and broadening information coverage to break out of "information cocoons"; some keep refining management rules, tuning algorithm model parameters, proactively cleaning up illegal and harmful information, and handling accounts that break laws or rules; some have placed functions such as "turn off personalized recommendations" in prominent places, returning to users the choice of whether to use algorithmic features...
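One generic way to "improve the diversity of content recommendations," as described above, is to reserve part of each feed for items outside the user's usual interests. The sketch below illustrates that idea only; it is not any platform's actual mechanism, and the 20% exploration ratio is an assumed parameter:

```python
import random

def diversify(personalized, exploratory, explore_ratio=0.2, size=10):
    """Build a feed that mixes personalized picks with items from
    outside the user's history, to counter the information-cocoon effect."""
    n_explore = max(1, int(size * explore_ratio))   # reserved diversity slots
    feed = personalized[: size - n_explore]          # top personalized items
    feed += random.sample(exploratory, min(n_explore, len(exploratory)))
    return feed

personalized = [f"fav_{i}" for i in range(10)]   # items matching user history
exploratory = [f"new_{i}" for i in range(5)]     # items outside user history
feed = diversify(personalized, exploratory)
print(len(feed))  # 10 items, of which 2 come from outside the user's bubble
```

The trade-off a platform tunes here is exploitation (showing what the user demonstrably likes) versus exploration (widening information coverage); a higher ratio weakens the cocoon at the cost of short-term engagement.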

  In addition, publicity and discussion about algorithm ethics have been ongoing in the industry.

These discussions introduce social ethics and professional ethics into the industry value system, advocate openness and transparency in the algorithm development process, and prevent phenomena such as "algorithmic bias" and "algorithmic discrimination."

In Zhu Zhen's view, beyond promoting concepts and values, industry self-discipline must also show up in concrete practice.

For example: removing "overlord clauses" such as mandatory login; regularly disclosing algorithm design plans to the public; strengthening privacy-leak alerts and providing users with lists of the personal information collected; opening complaint channels; and clarifying internal reward and punishment mechanisms.

Regulation

Guiding algorithms toward the good

  To standardize algorithms, in addition to individual efforts and industry self-discipline, it is more important to establish a comprehensive supervision system.

Only by making full use of the rule of law to standardize algorithms and guide algorithms to be good can we create a clear and safe cyberspace.

At present, China's policies and regulations that directly target algorithms by name are mainly the "Guiding Opinions on Strengthening the Comprehensive Management of Internet Information Service Algorithms" and the "Internet Information Service Algorithm Recommendation Management Regulations."

Other rules on algorithm governance are embedded in broader laws. For example, the "E-Commerce Law of the People's Republic of China" clarifies algorithm norms and responsibilities in platform operations, and the "Personal Information Protection Law of the People's Republic of China" establishes the basic framework for governing automated algorithmic decision-making.

Among them, the "Internet Information Service Algorithm Recommendation Management Regulations" (hereinafter, the "Regulations"), which took effect on March 1, 2022, regulate algorithms most systematically and comprehensively.

The Regulations, jointly issued by the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the State Administration for Market Regulation, clarify the user-rights obligations of algorithm recommendation service providers — such as the right to know about algorithms and the right to choose them — and make targeted provisions on hot-button issues of public concern, including "big data price discrimination," induced addiction, manipulation of public opinion, and the protection of special groups.

"This is the first normative legal document that systematically and comprehensively regulates algorithms. In its systematic design, the richness of its regulatory toolbox, the range of algorithms it covers, and its comprehensive protection of users' rights and interests, the Regulations are groundbreaking," said Zhang Linghan, a professor at China University of Political Science and Law.

Since the Regulations were promulgated, relevant national departments have carried out a number of comprehensive governance actions on algorithms, ordering inspection and rectification of problems caused by unreasonable algorithm applications — such as "information cocoons," "algorithmic discrimination," and "big data price discrimination" — and promoting the normalization and standardization of comprehensive algorithm governance.

For example, from the end of 2023 to the beginning of 2024, the Cyberspace Administration of China ran a month-long "Clear and Bright" special campaign to rectify problems in the value orientation of short-video content.

The campaign covered strengthening the management of short-video platforms; addressing problems such as skewed value orientation in platform algorithms and insufficient exposure of high-quality short videos; and optimizing traffic-distribution mechanisms to prevent "valuing metrics over quality," in which one-sided quantitative indicators such as like rates and share rates serve as the basis for traffic allocation; among other measures.

"Algorithm technology is highly specialized and evolves quickly. That makes algorithm governance long-term work requiring joint effort from all sectors. It is therefore essential to establish the idea that only with 'co-governance' can there be 'sharing,'" Zhu Zhen said.