As use of the conversational AI "ChatGPT" rapidly expands around the world, countries have begun considering regulations and certification systems out of concern for the protection of personal information.

"ChatGPT" developed by the American venture company "Open AI" is rapidly expanding its use in various applications because it can create natural sentences as if it were written by a human when you enter a question.

At the same time, risks posed by AI have also been pointed out, including threats to personal information and concerns about information leaks.

Against this backdrop, the United States is considering regulations on the use of AI, and the US Department of Commerce announced on the 11th that it will begin soliciting public comments on AI evaluation and certification systems in preparation for such rules.

In the United Kingdom, the data protection authority has also published points to consider for the use and development of generative AI systems, including ChatGPT, touching on legal responsibilities when personal data is used.

Meanwhile, Canada's privacy authority announced on April 4 that it has launched an investigation into OpenAI in response to complaints that "personal information is being collected, used, and disclosed without consent."

In Japan, Minister of State for Science and Technology Policy Takaichi stated at a Lower House Cabinet Committee meeting on the 14th that she has no intention of immediately restricting the use of ChatGPT, but that concerns such as information leaks must be addressed, and indicated that the government will strengthen its framework for studying the issue.

Moves toward regulation in Europe

In Europe, momentum is growing to consider regulating ChatGPT over issues such as the protection of personal information.

Following Italy's temporary ban on March 31, local media have reported that France, Germany, and Ireland are considering whether to impose restrictions.

In France, multiple complaints, including claims that personal information is being collected, used, and disclosed without individuals' consent, have been filed with the data protection authority, which is reportedly investigating.

The city of Montpellier in southern France is also considering calling on city officials and their families to refrain from using ChatGPT.

In Germany, a senior official of the data protection authorities told a local newspaper on the 3rd of this month that "similar measures are possible in Germany in principle," suggesting that Germany could follow Italy in temporarily banning use.

On the 13th, the European Data Protection Board, made up of the data protection authorities of European Union (EU) member states, set up a dedicated task force to discuss how to respond.

At the G7 Digital and Technology Ministers' Meeting, to be held in Takasaki City, Gunma Prefecture, from the 29th of this month, discussions are expected on how to respond to generative AI technologies such as these.

Italy imposes temporary ban, calls on OpenAI to take measures

The Italian data protection authority announced on March 31 that it would temporarily ban the use of the conversational AI "ChatGPT" on the grounds that it is suspected of violating personal data protection law, including through the collection of massive amounts of personal data.

Italy was the first Western country to ban its use.

The trigger was a report brought to the authority from outside on the 20th of last month.

Details have not been disclosed, but the conversation content and payment information of users of the AI software are said to have been leaked.

In response, the authority investigated and found that OpenAI had not properly informed users about the data it was collecting and had no mechanism to verify users' ages when they accessed the service.

The company appears to have been collecting vast amounts of personal data needed for AI training without a legal basis, a practice suspected of violating Italy's personal data protection law.

The Italian authority announced on the 12th of this month that it had ordered OpenAI, the American developer of ChatGPT, to take concrete improvement measures by the end of this month.

The required measures include allowing users to correct or delete their data, strict age verification to protect children, and publicizing, through television and the internet, how personal information is collected and used for AI training.

The authority has said it will lift the ban if it confirms that OpenAI has taken these measures by the end-of-month deadline and that concerns over its handling of personal information have been resolved.

Expert "Utilizing AI: Urgent Response at the Political Level"

We spoke with Alessandro Longo, a journalist who has covered the IT industry and data protection issues in Italy for many years, about the Italian authorities' decision to temporarily ban the use of ChatGPT.

Mr. Longo said that many European countries are watching AI services closely, and expressed the view that the measures the Italian authority has demanded of OpenAI, the venture company that developed ChatGPT, are likely to become the standard in the near future, not only in Europe but worldwide.

Longo also pointed out that an incident two years ago forms part of the background to this regulation.

In that incident, a 10-year-old Italian girl died after taking part in a breath-holding challenge that was popular on a video-sharing app.

"Italians are afraid that children will misuse AI or be misled by AI," Longo said, acknowledging that the strong awareness of protecting children from AI risks in society as a whole led to the temporary ban on the use of ChatGPT.

In addition, he said, "There is also an aspect of wanting to make better use of data and lead to business and economic growth for European companies," and pointed out that there is a sense of caution about the outflow of personal information and big data outside Europe and the fact that American IT companies are currently leading the development of AI services.

"The biggest problem is that Italy does not have a comprehensive strategy for how to use AI," Longo said, arguing that it should be addressed urgently at the political level.