Should AI scare us?

The European Commission made public on Wednesday, April 21, its proposals to better regulate the development of artificial intelligence and, in particular, what it considers the "riskiest" uses of this technology.

This project constitutes "the most significant effort to date [in the world] to regulate AI," according to Wired, a site specializing in new technologies.

The more than one hundred pages of proposed rules cover a wide range of subjects, from the development of the algorithms that power artificial intelligence applications, to facial recognition, to the use of AI by recruiters.

A "political act"

In a world where China unreservedly uses these technologies to monitor ethnic minorities or to score its citizens, and where the United States is reluctant to regulate for fear of falling behind in the "race for AI," this text is above all a "political act," says Laurence Devillers, professor of artificial intelligence at Paris-Sorbonne University and member of France's national pilot committee for digital ethics, contacted by France 24.

"It allows Europe to position itself on the international scene and to defend our values ​​there based on an approach to AI that is more respectful of humans and society", affirms this specialist in ethical issues and artificial intelligence. In this sense, "the most important contribution of this project is that it prohibits certain uses of artificial intelligence, which makes it possible to show what are the red lines for Europe", notes Daniel Leufer, specialist in European policies in terms of new technologies for the NGO Access Now, contacted by France 24. 

These AI non gratae are listed in Article 5 of the text. They cover artificial intelligence systems similar to China's "social credit" scheme, in which algorithms assess the social "trustworthiness" of individuals, as well as the deployment of "real-time" surveillance of people using facial recognition. There is therefore no question of introducing electronic mass-surveillance systems in Europe.

The other great "good point of this text," according to Daniel Leufer, is the idea of setting up a register of AI devices offered on European soil. "This would bring a little transparency to all these tools that can be used by, for example, the police," says the Access Now expert. Here again, this is a radically different approach from that adopted by the United States and China, where great opacity prevails.

The aspect of this document that has caused the most ink to flow concerns the Commission's choice to classify the uses of AI by level of risk.

There are uses considered too risky, and therefore prohibited; those deemed "high-risk"; those that are "moderately risky"; and so on.

Each rung of this new AI Richter scale comes with its own rules, which grow more restrictive as you climb the ladder.

It's risky to assess the risk

"This is the regulatory culmination of the work of high-level experts on AI (GEHN IA) mandated by the European Commission [in 2018] to reflect on the notion of 'trustworthy' AI", explains Jean- Gabriel Ganascia, president of the CNRS ethics committee and expert in artificial intelligence at the Computer Science Laboratory of the University of Paris-6.

It is an approach that leaves this expert doubtful. "It is difficult to quantify a priori the risk posed by something as new and constantly evolving as AI," he says. In the Commission's project, it is individual freedoms and major democratic principles that serve as the barometer for assessing how dangerous an AI application is. But "these are very political notions that can vary from one country to another," this expert in ethical questions points out.

He also fears that presenting AI through the lens of its risks to society "is anxiety-inducing for the population and slows the adoption of this technology." That would be counterproductive, since the European Commission's stated aim is to create a regulatory framework that promotes AI's development.

Once the Commission assigns a level of risk to the different uses of AI, it also becomes necessary to know "who decides from what point a risk is considered acceptable, which leads us onto very slippery slopes," believes Daniel Leufer. As proof, he cites choices that are, in his opinion, highly questionable. He challenges, for example, the Commission's decision not to include AI-powered lie detectors among the uses to be banned, making them merely "high-risk" applications, whereas "in our eyes it is a dangerous pseudoscience." The use of these so-called "intelligent" lie detectors in 2018, to try to identify illegal immigrants at the borders of countries such as Hungary and Greece, attracted strong criticism at the time.

A model to follow?

For Daniel Leufer, the "too vague" wording of certain provisions also raises the question of who will have the final say in defining what is "too risky."

It is not clear, for example, who will assess whether an AI system complies with the rules in the very sensitive category of "high-risk" applications.

Yet this category includes uses such as crime-prediction algorithms, recruitment AIs, and systems integrated into critical infrastructure (such as electricity grids).

"The text seems to suggest that compliance with the rules can be assessed internally, which would be a shame," Daniel Leufer is surprised.

It would be like letting cigarette manufacturers judge how dangerous tobacco is, he notes.

"It is true that there are still holes in the racket," admits Laurence Devillers.

But for her, "it's still much better than the jungle that currently reigns."

She wants to see this text "as a very positive first step which lays the foundations for a discussion on ethical tensions leading to rules accepted by all".

And not just in Europe.

Like the European General Data Protection Regulation (GDPR), "this text may be intended to inspire other countries in the world", hopes Laurence Devillers.

She recalls that the GDPR, too, drew all kinds of detractors out of the woodwork at its inception before becoming a benchmark.

In her eyes, it is essential for the world to agree on common rules for the development of AI.

Because "if we want to contain dangerous forms of AI from spreading like a virus, everyone must work together," says Laurence Devillers.

And for her, the Global Partnership on Artificial Intelligence (GPAI), which has brought together some fifteen countries since last year, is "the first link in this discussion."

But first, the pockets of resistance within Europe will have to be overcome.
