An exhibition devoted to artificial intelligence in Brazil (illustrative image).


MAURO PIMENTEL / AFP

Faced with the deployment of artificial intelligence (AI), Europe must protect the fundamental rights of its inhabitants, warns a report released on Monday, showing that this technology can indeed be a source of errors and discrimination.

“Much of the interest is focused on its potential to support economic growth. How it may affect fundamental rights has received less attention,” writes the European Union Agency for Fundamental Rights (FRA), based in Vienna, Austria, in the 100-page document.

Prague at the top

Artificial intelligence, a somewhat catch-all expression, refers to technologies that allow machines to imitate some form of real intelligence, to “learn” by analyzing their environment instead of executing simple instructions dictated by a human developer.

These technologies, which span a vast field of applications (voice assistants, voice and facial recognition systems, advanced robots, autonomous cars, etc.), are now used by public authorities as well as by the medical community, the private sector and the education sector.

On average, 42% of European companies use AI.

The Czech Republic (61%), Bulgaria (54%) and Lithuania (54%) are the countries where it is most widespread.

Deployment accelerated by Covid-19

Artificial intelligence is particularly popular with advertisers for targeting online consumers through algorithms and "the coronavirus epidemic has accelerated its adoption," according to the report.

FRA investigators carried out around 90 interviews with public and private organizations in Spain, Estonia, Finland, France and the Netherlands.

"One of the risks is that people blindly adopt new technologies, without evaluating their impact before using them," David Reichel, one of the authors of the text, told AFP.

Artificial intelligence can thus violate privacy, for example by revealing a person's homosexuality in a database.

It can also lead to discrimination in employment, if certain criteria exclude categories of the population on the basis of a surname or an address.

Algorithms and prejudice

When they receive an incorrect medical diagnosis or are denied a social benefit, European citizens do not always know that the decision was taken automatically by a computer.

They are therefore not in a position to dispute it or to lodge a complaint, even though errors can occur: artificial intelligence, created by humans, is not infallible.

In a recent example, the British Court of Appeal ruled that the facial recognition program used by the Cardiff police force may exhibit racial or gender bias.

“Technology is changing faster than the law. We must now ensure that the future EU regulatory framework for artificial intelligence is unequivocally based on respect for human rights and fundamental rights,” stresses FRA Director Michael O'Flaherty.

