European authorities are not entirely convinced that ChatGPT and other artificial intelligence tools comply with current legislation, especially on data protection, and have decided to take the first step toward creating common standards for this type of application.
This week, the European Data Protection Board (EDPB), a body made up of representatives of the national data protection authorities, announced the creation of a working group responsible for assessing the risks posed by these new tools.
The group has already begun a round of consultations with several experts to try to define the possible actions that can be taken jointly between the different countries and set common guidelines for action.
The decision was taken at the request of the Spanish Agency for Data Protection (AEPD), which earlier this week asked the European body to evaluate possible privacy violations by ChatGPT.
"The AEPD understands that global processing operations that can have a significant impact on individuals' rights require coordinated decisions at the European level," an agency spokesperson said in an emailed statement.
Spain is not, in any case, the only European country concerned about the issue. Since the end of March, Italy's data protection authority has blocked access to ChatGPT in the country, claiming that the service violates Italian data protection rules (which are similar to those of other European countries). ChatGPT, for example, has no filter to prevent access by minors and does not warn users that the tool collects personal data during use.
The AEPD's French counterpart, the CNIL, is also examining several complaints about the service, whose use has grown dramatically in recent months and which could have a profound social impact.
Several experts and academics have, in fact, recently asked the technology companies developing these tools (mainly OpenAI and Google) to pause the development of new language models and generative artificial intelligence tools until an ethical and legal framework protecting users can be agreed upon.
They put forward several reasons. These language models have been trained, for example, on copyrighted material or on data collected without users' permission. There are also doubts about the effect they could have on the labor market, and about their potential to spread false news and propaganda.