Juline Garnier, 2:30 pm, April 28, 2023

Presented as a genuine technological revolution, artificial intelligence has already swept through the world's economic and cultural landscapes. While these systems raise ethical questions, they also risk being constrained by data protection policies, as is already the case in Italy. Will France follow its neighbor's example? Europe 1 takes stock.

EXPLAINER

The promise of an answer to every question, just a click away. ChatGPT, a prototype that takes the form of a conversation between a user and an artificial intelligence, has been stirring up the international intelligentsia just a few months after going live. The algorithm is undeniably a feat: drawing on a huge corpus of data, it can handle almost any request, from writing a philosophical essay to solving a complex mathematical problem. Created by the company OpenAI, it has already undergone several updates, improving little by little. And concerns are already emerging.

Faced with the speed of its development - and especially its adoption by millions of Internet users - many have voiced reservations. At the end of March, entrepreneur Elon Musk and hundreds of experts worldwide signed an open letter calling for a six-month pause on research into artificial intelligence systems more powerful than GPT-4, citing "major risks to humanity". And citing risks to the security of personal data, Italy became the first country to temporarily block the system.

>> READ ALSO - Macron as a garbage collector, the Pope in a down jacket... These fake images circulating online

Fears around personal data

For Fabrice Epelboin, social media specialist and professor at Sciences Po Paris, these artificial intelligences pose an ethical risk beyond the technical issues. "Because the responses given to users are unpredictable. To put it simply: drawing on its huge database, the technology 'hallucinates' an answer that matches the user's request, but it is impossible to know precisely how the algorithm arrived at this result. There is no way to trace it back," he explains.

More technically, the way these artificial intelligences operate raises questions about the data they use. Several bodies of law overlap: intellectual property law, when the data is a creative work; personal data law, which concerns the identity of individuals; and, more simply, the question of who the data originally belongs to, a particular concern for companies.

This raises two issues, according to Nicolas Arpagian, cybersecurity specialist and vice-president of Headmind Partners. "The question is how to frame the authorization to use and exploit the data made available upstream" by individuals and organizations, and "in what frameworks it will be exploited downstream by users", particularly for commercial purposes, he explains.

How to control usage?

A big question sums up all the concerns: are we able to control the use of these artificial intelligences, and therefore of this mass of data? Several avenues are possible, starting with embedding hidden data in the content produced by the algorithms, allowing it to be traced. Another solution: developing an artificial intelligence capable of identifying work produced by... artificial intelligences themselves. "In the first experiments in this field, the success rate is around 40%. For the moment, the reliability of such a tool is therefore very poor," says Nicolas Arpagian.
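To make the first avenue concrete, here is a deliberately naive Python sketch of what "hidden data" in generated text could look like. This is purely illustrative: it is not how ChatGPT or any real system watermarks its output, and real watermarking schemes are statistical and far harder to strip than this zero-width-character trick.

```python
# Toy illustration of hiding tracing data in text using invisible
# Unicode characters. NOT a real AI watermarking scheme.

ZERO = "\u200b"  # zero-width space      -> encodes bit 0
ONE = "\u200c"   # zero-width non-joiner -> encodes bit 1


def embed_watermark(text: str, marker: str) -> str:
    """Append the marker's bits to the text as invisible characters."""
    bits = "".join(f"{byte:08b}" for byte in marker.encode("utf-8"))
    hidden = "".join(ONE if b == "1" else ZERO for b in bits)
    return text + hidden


def extract_watermark(text: str) -> str:
    """Recover the marker by reading back the invisible bits."""
    bits = "".join("1" if ch == ONE else "0"
                   for ch in text if ch in (ZERO, ONE))
    usable = len(bits) - len(bits) % 8
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8))
    return data.decode("utf-8", errors="ignore")


stamped = embed_watermark("A generated paragraph.", "AI-v1")
# The stamped text looks identical on screen but is not byte-identical:
print(stamped == "A generated paragraph.")  # False
print(extract_watermark(stamped))           # AI-v1
```

The weakness of this toy approach also illustrates the regulators' problem: copy-pasting through a filter that drops zero-width characters erases the marker entirely, which is why research has moved toward watermarks woven into the statistics of the generated words themselves.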

The prospect of regulation is therefore very difficult to tackle. Other European authorities, including those of France, Ireland and Germany, have since approached their Italian counterpart to work toward a common position on ChatGPT.

>> READ ALSO - Artificial intelligence: a campaign to collect French voices

The risk of a stillborn regulation project

In 2021, however, the European Commission had already published a founding text, the AI Act, intended to "regulate artificial intelligence in a way that makes it trustworthy, human-centred, ethical, sustainable and inclusive" and to establish a very first legal framework. But given the dazzling advances in the field, the text is said to be obsolete already. And the task, as we have seen, is arduous.

"One of the big problems is above all that dialogue between legislators and technicians seems impossible to me, because these are not spheres that usually communicate and work together," says Fabrice Epelboin. The risk is ending up with a new "Hadopi": that French law regulating the distribution of works and the protection of copyright on the Internet very quickly became inapplicable as usage evolved.

"To regulate, you need a sense of perspective," stresses Nicolas Arpagian: measures must be worded flexibly enough to allow reinterpretation as the technology develops. "Through over-regulation, the risk is also one of 'banishment', that is to say scaring off companies, which will then develop their technology elsewhere," he adds. One priority nevertheless seems to be emerging for the European authorities, according to the expert: dispelling fantasies. And for that, only one tool: clarifying the role of artificial intelligence.