Washington (AFP)

Facial recognition systems are far less reliable for non-white faces, according to a US government report released Thursday that deepens doubts about the deployment of this artificial intelligence technology.

Facial recognition misidentifies Asian and black people 100 times more often than white people, according to the study, which analyzed dozens of algorithms.

Researchers at the National Institute of Standards and Technology (NIST), a government-affiliated center, also found two algorithms that assigned the wrong gender to black women in 35% of cases.

Facial recognition is already widely used by authorities, law enforcement, airports, banks, businesses and schools. It is also used to unlock certain smartphones.

Many civil rights advocates and researchers are trying to rein in its deployment.

They argue that the algorithms make too many mistakes, that innocent people could end up in prison, and that databases of faces could be hacked and exploited by criminals.

Algorithms developed in the United States had higher error rates for Asians, African-Americans and Native Americans, according to the study, while some designed in Asian countries identified Asian faces just as accurately as white ones.

"This is encouraging because it shows that using a more diverse database leads to better results," said Patrick Grother, the research director.

But for the American Civil Liberties Union (ACLU), the study proves above all that the technology is not ready and should not be deployed.

"Even government scientists confirm that this surveillance technology is flawed and biased," said Jay Stanley, an ACLU analyst. "Misidentification can cause theft to be missed, endless interrogations, placement on watch lists, tensions with police, baseless arrests or worse."

"But above all, whether the technology is reliable or not, it makes it possible to set up undetectable and ubiquitous surveillance on a scale without common measure," he added.

© 2019 AFP