[Special Report]

  In recent years, artificial intelligence (AI) technology has been advancing at a dizzying pace.

2023, a year of major breakthroughs in language models, has just passed; in 2024, breakthroughs are arriving in video generation and the simulation of physical worlds.

Yet as AI spreads through science and technology, culture, education, medicine, transportation, and other industries, and even into daily life, the risk of its abuse has begun to raise alarm.

"Deepfakes": hard to tell real from fake

  Highly realistic fake images, audio, and video generated with AI algorithms are known as "deepfake" content.

Since the concept first emerged in 2017, incidents of the technology being used for fraud or to manipulate public opinion have grown more frequent around the world.

In the United States, for example, AI-enabled fraud incidents rose more than 50% year-on-year over the past year.

  Yet to this day, no effective solution to the problem has been found.

This is because AI's forgery capabilities are advancing far faster than forgery-detection technology.

Today, anyone can quickly and cheaply generate images, audio, and even video that are hard to distinguish from the real thing and hard to trace.

Detection technology, by contrast, is hard to deploy at scale because each tool tends to be tied to specific subject matter and software.

Fact-checking, moreover, takes far more time and effort than fabrication does.

According to a survey by Japan's Mizuho Research & Technologies, 70% of Japanese respondents find it difficult to judge the authenticity of information on the Internet, yet only 26% said they would do any verification after encountering suspicious information.

  At the beginning of this year, the World Economic Forum released its Global Risks Report 2024, ranking AI-generated misinformation and disinformation first among the "top ten global risks over the next two years" and warning that it could deepen polarization, fuel conflict, and further worsen the global situation.

With 2024 a global "election year" in which more than 70 countries or regions hold major elections, there are fears that AI will be weaponized to mislead voters, smear candidates, and even incite violence, hatred, and terrorism.

  "The 2024 U.S. election will see a tsunami of false information generated by artificial intelligence." As Darrell West, a senior researcher at the Center for Technology Innovation at the Brookings Institution, said, the U.S. election has just entered the primary stage. Someone is trying to use "deepfakes" to manipulate voters.

In January this year, ahead of the Democratic primary in New Hampshire, many voters reported receiving a "phone call from U.S. President Biden." On the call, "Biden" urged them not to vote in the primary and to save their votes for the Democratic Party in the November general election.

The call was in fact arranged by Steve Kramer, a political consultant working for Biden's rival Dean Phillips.

He used AI to clone Biden's voice and placed the calls to the voters judged most likely to be swayed.

Kramer even said bluntly: "For only $500, anyone could reproduce what I did."

  Industry insiders worry that the proliferation of deepfake content will cause the public to lose all trust in their own senses, while genuine information comes under growing suspicion.

In 2016, for example, Trump insisted that a scandalous recording involving him had been fabricated.

Had that happened today, such a denial might have been far more convincing.

  Similar problems have arisen across the world, yet the technology is outpacing national laws and industry rules, so for now the problem must be addressed largely through the "self-regulation" of technology companies.

At the 60th Munich Security Conference in February, a group of global technology companies signed an accord pledging to jointly combat the abuse of AI to interfere with elections, including by developing detection tools and by attaching labels and digital watermarks to AI-generated "inauthentic content" to identify its origin.

Some companies are also considering banning the generation of images of political candidates.

Some media outlets, however, contend that the accord sets out only the most basic principles, specifying neither concrete measures nor timetables for honoring the commitments, and that such a hollow collective statement looks more like a public-relations exercise.

What is more, whether technology companies' discretionary control over their AI products amounts to imposing corporate values on users remains an open question.

  Digital watermarking technology, moreover, still has shortcomings.

For example, the watermark offered by the Coalition for Content Provenance and Authenticity (C2PA) works only on still images and "can easily be removed accidentally or intentionally."

Curbing false information at its source, limiting its spread on social media, and cultivating the public's critical thinking thus remain the hard problems every country faces in tackling deepfakes.

The shadow of "AI warfare"

  Although the surge in public interest in AI owes mainly to generative AI, many militaries turned their attention to battlefield applications long ago, above all to autonomous weapons systems capable of deep learning.

The U.S. strategic community compares the rise of AI to the historical advent of nuclear weapons, and the American technology entrepreneur Elon Musk has likewise argued that the spread of AI technology lets companies everywhere "build nuclear bombs in their own backyards."

  The use of AI can already be seen in the world's two most closely watched conflicts.

Time magazine recently published an article calling the Russia-Ukraine conflict "the first artificial intelligence war," revealing that the American technology company Palantir has supplied Ukraine with AI software that analyzes satellite imagery, drone footage, and other intelligence to pick out the most effective targets to strike, and that learns and improves with each strike.

According to reports, these "artificial intelligence arms dealers" seem to regard Ukraine as the best testing ground for their latest technologies.

  In the Israeli-Palestinian conflict, the Israeli military has used AI to destroy drones, map tunnel networks, and recommend strike targets.

According to reports, an AI system called "Gospel" has made the Israeli military hundreds of times more efficient at finding targets to attack.

Many media outlets worry that this means the system may be marking not only military facilities but also civilian homes for bombing: "AI may be being used to decide the life and death of Gaza residents."

  The United States, which first experimented with AI target recognition as early as 2020, has recently used the technology extensively to locate rocket launchers in Yemen and surface vessels in the Red Sea, and to identify strike targets in Iraq and Syria.

  According to US media reports, the US military has also continued to strengthen cooperation with leading companies such as OpenAI.

In August 2023, shortly after the generative AI wave took off, the U.S. Department of Defense's Chief Digital and Artificial Intelligence Office moved quickly to establish a generative AI task force (Task Force Lima).

In January this year, OpenAI quietly updated its usage policies, removing the explicit ban on "military and warfare" applications and replacing it with vaguer language prohibiting the use of its products to "develop or use weapons."

Soon after, the company admitted it was working with the Pentagon on several projects, including developing cybersecurity tools.

Recently, senior officials of the U.S. Department of Defense once again invited U.S. technology companies to participate in a secret meeting, hoping to accelerate the exploration and implementation of military applications of artificial intelligence.

  Experts point out that humanity's success in restraining the use of nuclear weapons over the past decades rested on strategic coordination among states.

The world currently lacks an international governance framework for military AI, making it all too easy for runaway technology to escalate conflicts or set off an arms race; a multilateral consensus is urgently needed.

UN Secretary-General António Guterres has previously stressed that the United Nations must reach a legally binding agreement by 2026 banning the use of AI in autonomous weapons of war.

International coordination and cooperation are indispensable

  Artificial intelligence holds enormous potential for both good and harm, and timely measures are needed to guard against the risks.

Since 2016, countries have issued a steady stream of relevant policies and regulations, yet rule-making still cannot keep pace with the technology.

In October last year, Biden signed the United States' first executive order on AI regulation, establishing security and privacy protection standards, but it has been criticized as lacking enforcement teeth.

In the EU, although the European Parliament passed the Artificial Intelligence Act on March 13, its provisions will be implemented in phases, with some rules not taking effect until 2025.

Japan's ruling Liberal Democratic Party only recently announced a plan to propose that the government introduce generative AI legislation within the year.

  Whether it is the disinformation spawned by generative AI or the risks of AI's military applications, the impact transcends national borders.

AI regulation and governance should therefore be pursued through international cooperation, with all countries jointly guarding against risks and working together to build a governance framework and standards that command broad consensus.

  Regrettably, however, the United States treated China's AI development plans as hostile from the outset and has cast China as the notional adversary in its military AI deployments. To preserve its technological edge, it has long worked to obstruct China's technological progress.

According to reports, the U.S. government not only bars American companies from exporting the most capable AI chips to China, but also requires U.S. cloud-computing companies to disclose the names of foreign customers developing AI applications on their platforms, in an attempt to cut off Chinese companies' access to data centers and servers.

  The development and governance of AI bear on the destiny of all humanity and demand collective effort and a coordinated response.

Building a "small yard with high fences" around AI will only weaken humanity's capacity to confront risks and challenges together.

The United States should genuinely respect the objective laws of scientific and technological development, respect the principles of the market economy and fair competition, stop maliciously obstructing other countries' technological development, and create favorable conditions for stronger international coordination and cooperation on AI.

(Our reporter Yang Yifu) (Source: Guangming Daily)