In November 2019, the CNIL (Commission nationale de l'informatique et des libertés, France's data protection authority) called for “a debate commensurate with the challenges” on facial recognition.

In April 2021, as Politico unveiled the draft European regulation on artificial intelligence, the debate resumed with renewed vigor, with 51 associations defending individual rights calling for a ban on “biometric mass surveillance”.

It is in this context, and just as the French Senate has published a report calling for a framework for the use of facial recognition in public spaces, that the AI Regulation Chair at Université Grenoble Alpes has published a six-chapter mapping of the uses of facial recognition in Europe.

20 Minutes interviewed Théodore Christakis, who directed the team behind this long-term project.

“The current debate is sometimes distorted by a poor understanding of [facial recognition] and of exactly how it works,” the CNIL declared in 2019, a statement you quote in the introduction of your study.

How has this observation influenced your work?

When the CNIL declared that “a debate commensurate with the challenges” was needed, we also noticed that the debates on facial recognition often mixed up a lot of different things.

Some discussions went so far as to bring in emotion recognition or video surveillance, even though neither of those is facial recognition.

However, there are real questions to be asked...

The technology used by PARAFE when you are at the airport and the technology used by the British police to spot a person in a crowd are both based on facial recognition, but they do not raise the same risks at all.

With my team, we therefore decided to bring our scientific approach to the discussion: our aim was to clarify things from a technical point of view, to detail existing practices across Europe and to draw lessons from them, so that legislators, politicians, journalists and citizens can debate calmly.

You propose a classification of the uses of facial recognition into three main categories: verification, identification and facial analysis (the latter is not facial recognition in the strict sense, but it still relies on features of the face).

Why should the three be distinguished?

The first category (in blue in the illustration) is also called authentication.

It consists of comparing one image to another: your biometric passport photo with the one PARAFE takes when you walk through the gate at the airport, for example.

The machine checks whether the two match; if they do, it opens the gates and then deletes the data.

That does not mean there are no risks or problematic uses: when this type of technology was deployed in two high schools in Marseille and Nice, for example, the CNIL considered it unacceptable.

But it is still very different from identification systems, which for the moment are used only in the United Kingdom.

There, we are talking about cameras that the police place along a street or near a station, scanning the crowd for matches against a pre-established list of a few thousand criminals.

In such a case, the issues are very different: individuals have no way to refuse to be subjected to the technology, and the surveillance takes place outside their control... That said, this type of technology is also used in experiments like Mona, at Lyon airport.

There, users who wish to can register their face on their smartphone and then go through every checkpoint (baggage drop-off, customs, boarding) without ever taking out a boarding pass.

They have a choice, so the question arises in very different terms.
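To make the distinction the interview draws concrete, here is a minimal illustrative sketch of the two matching patterns, in Python. Everything in it is a hypothetical placeholder (the embed function, the similarity measure, the 0.6 threshold); it is not the actual PARAFE, Mona or police pipeline, only the general shape such systems share.

```python
import numpy as np

def embed(face_image: np.ndarray) -> np.ndarray:
    # Toy stand-in for a real face-embedding model: flatten the image and
    # L2-normalise it (images assumed to be the same size in this sketch).
    # Real systems use trained neural networks to produce the "biometric
    # template" discussed in the interview.
    v = face_image.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two already-normalised templates.
    return float(np.dot(a, b))

THRESHOLD = 0.6  # illustrative decision threshold, not any real system's value

def verify(reference_photo: np.ndarray, live_photo: np.ndarray) -> bool:
    """1:1 verification (the PARAFE-style case): compare exactly two images
    of one consenting traveller, answer yes/no, then discard the templates."""
    return similarity(embed(reference_photo), embed(live_photo)) >= THRESHOLD

def identify(crowd_faces: list[np.ndarray],
             watchlist: dict[str, np.ndarray]) -> list[tuple[str, int]]:
    """1:N identification (the watchlist case): every face captured in the
    video stream is compared against a pre-established list, whether or not
    the person ever agreed to be checked."""
    matches = []
    for i, face in enumerate(crowd_faces):
        template = embed(face)
        for person_id, listed_template in watchlist.items():
            if similarity(template, listed_template) >= THRESHOLD:
                matches.append((person_id, i))
    return matches
```

The structural difference is visible in the signatures alone: verification compares two images from one person who initiated the check, while identification necessarily scans everyone who passes in front of the camera.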

In the third part of your report, which deals with facial recognition in public spaces, you emphasize the difference between “consenting” to and “volunteering” for the use of facial recognition technology.

What is at stake?

First, it should be emphasized that even if a use is said to be “consensual” or “voluntary”, that does not prevent it from posing a problem.

In the case of the high school students in the PACA region, for example, their consent was considered problematic because they were under the authority of their school.

Then, if we take the example of airports again: when you arrive in Paris or Lyon, you can choose to go through the gate equipped with facial recognition, but you have an alternative.

This is what volunteering is: there is always another possible choice.

Consent must be given by well-informed people, capable of consenting, etc. (the GDPR provides four cumulative conditions: it must be free, specific, informed, unambiguous, editor's note).

This nuance is important, especially when the debate turns toward “let's ban all facial recognition”.

This way of framing the problem forgets that the technology has useful applications: some people use it to unlock their smartphone, and those who do not want it can use a PIN code instead.

A choice is possible.

In any case, as a user, those two options expose me to a very different risk than being subjected to a system that treats me as a potential criminal because I crossed the street in front of a police camera.

The fourth part of your report deals with the use of facial recognition in criminal investigations.

What are the terms of the debate, in your opinion?

Criminal investigations involve many different uses.

Let's first imagine that there is a robbery or a murder.

The perpetrator was filmed by a surveillance camera.

In such cases, France has legislated to authorize the police to compare the image of the perpetrator against the criminal records database (the TAJ, whose very existence is contested, editor's note).

This is facial identification after the fact: it raises its own set of questions, but it is quite different from applying facial recognition algorithms to live video streams, as was tested during the Nice carnival (on the basis of consent) or as it is done in Britain.

The last part of your study focuses on uses of facial analysis in public spaces, which are still rare today but which, according to you, are set to multiply.

Why is it important to worry about it?

Mask-wearing detection models like the one proposed by Datakalab are not facial recognition, because no so-called “biometric template” is created.

But it's still facial analysis, so obviously there's something to be concerned about.

It's the same for emotion recognition technologies.

When it comes to detecting whether someone is falling asleep at the wheel, that is very good; it can save lives.

But when you are told that it will make it possible to detect personality traits or lies, we are verging on pseudoscience!

(On this subject, read the chapter dedicated to emotions in Kate Crawford's Atlas of AI: Théodore Christakis declares himself “completely in agreement” with the researcher's analysis, editor's note.)

Compiling statistics on mask-wearing, why not.

Using facial analysis in every job interview is far more debatable.

What are your main recommendations, for legislators and/or citizens?

Clarify the debate.

Get informed (that is why we did this work), but also, above all, clearly specify which use cases we are talking about.

This will allow us to address the most dangerous uses of facial recognition first.

This is important: the Senate has just submitted a report on the question, arguing for the creation of a European supervisory body; everyone must be able to grasp the issue precisely, by looking at each type of use of these technologies.

It will also make it easier to see where existing laws already provide a minimum framework and where the gaps are most glaring.

