• The European Commission has submitted a bill to regulate the use of artificial intelligence.

  • Several thousand amendments have already been tabled by the various countries.

  • Many questions divide the European partners, including the very definition of artificial intelligence and the acceptable framework for its uses.

In April 2021, the European Commission unveiled an ambitious artificial intelligence (AI) regulatory project.

Together with the Parliament, the Commission is seeking a legal approach that would support innovation while respecting "European values": privacy and human rights.

Problem: the very definition of artificial intelligence varies from one interlocutor to another… AI systems are neither mere software nor mere statistical methods.

Some AIs are weak (hyper-specialized); others are said to be strong, or general (capable of transferring abilities acquired in one area to another, quite different one).

Agreeing on a law will, however, require agreeing on a precise way of defining it.

A string of questions will then follow, starting with that of liability: in the event of a problem, to whom should one turn?

The manufacturers? The suppliers of the final software? Others still?

Which risks for which systems?

In its bill, the European Commission has introduced a categorization of algorithmic tools according to four levels of risk: unacceptable risk, which will lead to a ban; high risk, which will require compliance with various requirements before deployment; limited risk, which will call for transparency so that systems can be corrected; and minimal risk.

Among the prohibitions, the European Commission has listed subliminal manipulation, social scoring as practiced in China, and predictive policing.

But that will not prevent heated debates: security is one of the fields in which the member states of the Union do not like having their policies dictated to them.

The other categories also cover very diverse issues on which positions will have to be harmonized.

For example, systems used for examinations, to facilitate recruitment or to assist in legal decisions are considered “high risk”.

Many elements relating to surveillance in public spaces, a particularly inflammatory issue, also fall into this classification.

Will the European regulation prevent surveillance in public spaces?

What to do with biometric recognition algorithms: do we allow their use in certain specific cases, such as in the event of a terrorist attack or to find victims of kidnapping?

Do we prohibit everything, as a group of European associations urges?

In October, members of the European Parliament called for the outright banning of facial recognition in public spaces and predictive policing technologies like those tested by Palantir.

The resolution also targeted private databases such as that of Clearview AI.

Germany is also among the countries pushing for a complete ban on these technologies, in both public and private spaces, on the grounds that they would establish mass surveillance.

Reactions were not long in coming; they underline the risk that the Union becomes dependent on other countries if its laws block innovation by its own companies.

In France, several experiments have been carried out in more or less legal frameworks, raising the question of the use of facial recognition.

The reasoning of the authors of a recent study on the uses of facial recognition in Europe, and of the three senators who authored a report on biometric technologies, is as follows: authenticating someone, as when PARAFE compares your passport photo with the one it takes of you at the airport, does not raise the same issues as identifying a person in a crowd, as the London police can do.

What do the risks of discrimination cover?

Another major theme that European regulation must tackle is the encoding of inequalities.

In the Netherlands, for example, algorithms for detecting welfare fraud led to 26,000 families being wrongly accused and made to repay debts they had not contracted, sometimes driving them into financial peril.

While the case led to the resignation of the government in early 2021, it is also an archetype of the social risks posed by artificial intelligence.

The tool has also been accused of racial profiling, which points to another major axis of algorithmic discrimination against which the European Union must guard.

Although they are regularly improved, facial recognition technologies are well known to work less well on dark skin than on light skin, for example.

Several American cases have indeed shown that biased results led to people being wrongly prosecuted because of their skin color.

Actors like the NGO Access Now are calling for urgent regulation, if only because the Union is testing various algorithmic tools at its borders to manage migrant populations.

Other big topics of debate?

The mere classification of certain algorithmic tools raises its share of discussions: while the European Commission has placed emotion-recognition tools in the "low risk" category, for example, bodies such as the CNIL describe them, on the contrary, as "highly undesirable".

And what about advertising tracking systems?

Are they high risk, or only moderate?

Another big challenge is the degree of transparency and explainability required of the algorithms.

For one thing, technology companies are quite reluctant to give outside parties (auditors, regulators) access to their source code.

But the European regulation also provides that the data sets used to train the algorithms must be error-free, to make it easier to justify the results they produce.

This seems very hard to achieve, given that the ten datasets most used by the industry are riddled with errors.

What is the timetable?

For human rights protection associations, the text proposed in April 2021 was far from being precise enough to ensure the preservation of the rights of Europeans.

The European Parliament's Special Committee on Artificial Intelligence in a Digital Age, on the other hand, expressed concern in a November 2021 report about a possible curbing of innovation.

After much discussion, the Internal Market Committee and the Parliament's Civil Liberties Committee jointly took up the bill.

Theoretically, legislators should reach compromises on the amendments tabled by mid-October, to vote on a final version in November.

The text can then enter the trilogue phase, that is, negotiations between the Parliament, the Council and the European Commission.

But some observers doubt the possibility of keeping such a timetable, given the highly sensitive nature of the subjects covered by this regulation.

On June 1, specialist journalist Luca Bertuzzi announced that nearly 3,200 amendments had been tabled, which suggests intense discussions in Brussels this summer.
