The UN called on Wednesday for a moratorium on certain artificial intelligence systems, such as facial recognition, until safeguards to protect human rights can be put in place.

"Artificial intelligence technologies can have negative or even catastrophic effects if used without taking sufficient account of how they affect human rights," said Michelle Bachelet, the High Commissioner for Human Rights of the UN.

Ban the most dangerous technologies

She called for an assessment of the risks that the various systems relying on artificial intelligence pose to the right to privacy and to freedom of movement and expression.

The High Commissioner then advises banning, or at the very least strictly regulating, those that present the greatest dangers.

Governments must apply a moratorium on the sale and transfer of #surveillance technology until compliance with human rights standards is guaranteed.

No excuse for inaction.

We need to press pause ⏸️

- Michelle Bachelet (@mbachelet) September 15, 2021

But until these assessments are carried out, "States should impose a moratorium on technologies that potentially present a great risk", underlined the former Chilean president during the presentation of a new report from her office devoted to the topic.

In particular, she cited as examples technologies that enable automated decision-making and those that build profiles of individuals.

“AI systems are used to determine who can benefit from public services, decide who has a chance of being hired for a job and, of course, influence what information people can see and share online,” she stressed.

Arrests of innocent people because of poorly trained systems

This report, commissioned by the Human Rights Council - the highest UN body in this field - looked at how these technologies have often been deployed without their workings or their impact having been properly assessed.

AI malfunctions have prevented people from receiving welfare benefits or finding jobs, and have led to the arrest of innocent people on the basis of poorly trained facial recognition systems unable to properly recognize people with African features, for example.

"The risk of discrimination linked to decisions based on artificial intelligence - decisions that can change, stigmatize or harm human life - is all too real," insisted Michelle Bachelet.

The report pointed out that these artificial intelligence systems are trained on huge databases, which are often compiled in an opaque manner.

"Feeding human rights violations on a gigantic scale"

The report highlights in particular the increasing use of AI-based systems by law enforcement agencies, including for predictive policing. When AI draws on biased databases, the bias is reflected in its predictions and tends to affect areas wrongly identified as high risk. Real-time and remote facial recognition is also increasingly used around the world, which can lead to people's locations being tracked continuously.

"We cannot afford to continue to try to catch up with the bandwagon when it comes to AI and allow it to be used with little or no control and to repair the human rights consequences after the fact," he said. insisted the High Commissioner, although she recognizes that "the power to serve people of AI is undeniable".

“But so is the ability of AI to fuel human rights violations on a gigantic scale and in almost invisible ways,” she warned.
