■ Column

On the eve of the imminent productivity revolution brought about by technology, it is far better to think through, discuss, and agree on the details of inclusive, prudent regulation in advance than to scramble to remedy matters afterward.

Italy recently announced a ban on ChatGPT and restricted OpenAI's processing of Italian users' data, while the country's data protection authority opened an investigation into the service. Before the large AI models represented by ChatGPT have even reached mass adoption, they have run head-on into the question of data security.

It should be recognized that the accumulation of massive data has produced a qualitative leap in the productivity of artificial intelligence. At the same time, data security and privacy protection, problems that have lingered since the birth of the Internet, have become unavoidable companions of large AI models.

Europe leads in personal data protection

Viewed historically, it was almost inevitable that ChatGPT's first data security challenge would come in Europe.

Since the dawn of the Internet era, Europe has stood at the global forefront of personal data protection, with the earliest and strictest regulations in the world.

As early as 1981, the Council of Europe adopted the world's first binding international instrument on the protection of personal data, and the rules that followed have kept pace with technological change, at times even running a step ahead of it. In 1995, for example, the European Union passed new legislation on the protection of personal data and its free movement, when the Internet era was only just beginning.

Today, the key concepts of personal data protection, such as informed consent, the right to be forgotten, and the right to data portability, were mostly first defined in Europe. Italy's move to ban ChatGPT, a service still in public beta, by invoking the world's strictest data protection regulations may therefore seem abrupt, but it follows a clear logic.

Objectively speaking, ChatGPT is indeed still in a stage of testing and exploration with respect to data privacy, compliance, and ethics: as users interact with it, they continuously supply new data that trains the model.

Because the system remains a black box, the lawfulness of its data sources, the protection of user data and especially user privacy, and the conflict between black-box data and intellectual property rights all pose new challenges for law and regulation. That is why a cautious Europe opted for a blanket ban.

The challenge of balancing security and innovation

How to ensure that data use is properly authorized, how to define the data security responsibilities and obligations of technology companies, and how to avert risks such as privacy leaks and intellectual property infringement will be among the first problems artificial intelligence must solve as it enters the public domain at scale.

At the same time, however, Italy's one-size-fits-all approach is not to be encouraged.

Overly strict and inflexible regulatory policy has already stifled Europe's capacity to innovate in the Internet era. In discussions of why Europe missed the mobile Internet era, excessively stringent and rigid regulation is now frequently cited as a major cause.

For example, since the European Union passed the General Data Protection Regulation (GDPR), the strictest personal data protection law in history, in 2016, global technology companies have been drawn into repeated regulatory battles over data in Europe.

In 2019, the Center for Data Innovation at the US Information Technology and Innovation Foundation released a report arguing that the EU's data protection rules had not produced the expected benefits. Instead, they had raised companies' data compliance costs, hurt the innovation and competitiveness of technology firms, made online services costlier for consumers to access, and eroded their trust.

Returning to artificial intelligence: amid a global race, every country faces the same policy dilemma. Red lines and rules must be set in advance to protect the security of citizens' data, yet excessive regulation must not be allowed to choke off innovation. A balance between the two must be continually sought.

The only certainty, judging from a century of human experience with conflicts between technological innovation and ethics, is that on the eve of an imminent productivity revolution it is far better to think through, discuss, and agree on the details of inclusive, prudent regulation in advance than to scramble to remedy matters afterward.

□ Malvern (Internet practitioner)