Without doubt, the most striking technology news of the past few weeks was IBM's announcement (1) that it would end its work on facial recognition technology. The announcement came in the form of a letter to the US Congress in which CEO Arvind Krishna rejected the use of facial recognition for mass surveillance, racial profiling, and violations of basic human rights and freedoms.

What happened was not the decision of a single company. Two days after IBM's letter, Amazon announced (2) that it would suspend police use of its facial recognition software, called "Rekognition", for one year. Microsoft followed within days (3), announcing that it would not sell its facial recognition technology to police until legal safeguards governing the technology were in place, while Google (4) had voiced support for a temporary ban a year earlier. In the United States, some cities and states have already banned the use of the technology on their territory.

All of this came against the backdrop of accusations that US police had misused facial recognition technology during the uprising that followed (5) the killing of the American citizen George Floyd on May 25, 2020, as he was pinned to the ground by city police during his arrest. But the story also touches on facial recognition technology itself, which, like many institutions in the United States, appears to be biased against Black people as well.

In fact, IBM's decision was not a spur-of-the-moment reaction to the Floyd uprising, as some imagined. The problem began about five years earlier, when the American researcher Joy Buolamwini, then a student at the Georgia Institute of Technology, ran into a rather strange problem. While she was designing a robot that could play a children's game with her, the robot's eyes, its camera, failed to recognize her face; it did not even register that a face was there. Buolamwini was using popular software known for its accuracy in detecting faces, and because of the problem she had to ask one of her roommates to run the experiment in her place. The robot, it seemed, had a problem with Buolamwini's face specifically. But did it really?

Of course not. The whole problem was that Buolamwini's skin is dark, and the algorithm behind the software was more accurate with lighter-skinned faces. The problem recurred in more than one situation, especially since Buolamwini's work relied on similar algorithms, which pushed her to investigate the matter and carry it into her doctoral research. This became what is now known as the Gender Shades project, built on a research paper (6) written in collaboration with Timnit Gebru of Microsoft Research and released in 2018.

The study compared how different facial recognition systems respond to gender and race. For this task, Buolamwini used more than a thousand photographs of people of different races and both sexes, then ran them through commercial software from three companies: IBM, Microsoft, and Face++. The results showed that the software's error rate for darker-skinned men was roughly ten percentage points higher than for lighter-skinned men, and it was clear that the darker the skin tone, across the full spectrum from lightest to darkest, the larger the error.

But the main problem emerged (7) when Buolamwini compared error rates between white men and darker-skinned women: in IBM's software the error reached 34%, and for the darkest skin tones it approached 50%. These results sparked intense controversy in technical and public circles, not because they were the first of their kind, the issue had been raised before and a good deal of research already existed, but because this was the first time the tests targeted software already in use on the market.
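The core of the Gender Shades methodology, evaluating the same classifier separately on each demographic subgroup rather than reporting one overall accuracy, can be sketched as follows. This is a minimal illustration with made-up data; the group labels and function names are hypothetical, not taken from the original study:

```python
# Minimal sketch of a disaggregated (per-subgroup) error-rate audit,
# in the spirit of the Gender Shades methodology. All data here is
# invented for illustration; real audits use labeled benchmark photos.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns a dict mapping each group to its error rate."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical outputs from a gender classifier:
sample = [
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
    ("darker_female", "male", "female"),   # misclassified
    ("darker_female", "female", "female"),
]
rates = error_rates_by_group(sample)
print(rates)  # {'lighter_male': 0.0, 'darker_female': 0.5}
```

An aggregate accuracy over this toy sample would be 75% and would hide the disparity entirely; only splitting by subgroup exposes it, which is exactly the point of the audit design.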

After the study was released, IBM tried to reduce the bias against Black people in its facial recognition tools, adding a million images to the AI system that learns to recognize faces in order to improve the diversity of its training data. One of the core problems was that the algorithm had been trained on far more light-skinned faces than dark-skinned ones, which led it to build its models on features derived from that type of skin, along with other associated characteristics.
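The imbalance described above can be made concrete: a first step in any such effort is simply measuring how skewed the training set is across skin-tone categories before adding new images. The sketch below uses hypothetical labels and counts; IBM's actual data-collection pipeline was of course far more elaborate:

```python
# Sketch: measure how skewed a face dataset is across skin-tone labels.
# Labels and counts are hypothetical, for illustration only.
from collections import Counter

def tone_distribution(labels):
    """Return the share of each skin-tone label in the dataset."""
    counts = Counter(labels)
    total = len(labels)
    return {tone: counts[tone] / total for tone in counts}

# A hypothetical, heavily skewed training set of 1,000 faces:
labels = ["lighter"] * 800 + ["darker"] * 200
print(tone_distribution(labels))  # {'lighter': 0.8, 'darker': 0.2}
```

A model trained on such a set will, other things being equal, fit the majority category's features best, which is the mechanism the paragraph above describes.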

The results did improve, though not by much. The bigger problem was that the company had obtained those images from the Flickr platform without permission from the photographers or the people pictured, which launched a new round of controversy (8), not only about racial discrimination but also about the possibility of using these tools to violate human rights. At that point, during 2019, IBM began pulling the technology's tools from its platform little by little, right up to the moment of George Floyd.

The debate over the relationship between human rights and these AI-powered facial recognition tools was not limited to the recent moves by IBM and its peers. In fact, IBM is not a heavyweight in this market; its withdrawal is unlikely to have a major effect on the technology's technical and economic future. A far more frightening picture emerged last January with a New York Times investigation (9) into a very young startup called Clearview AI.

This startup works directly with law enforcement, and it already deals with 600 such agencies in the United States, alongside some private companies. Its strength comes from the sheer number of images it has obtained from Facebook, Twitter, YouTube, Instagram, and other social media platforms. These images were uploaded publicly by users, which, according to the company, means they are available for use. Clearview has collected (10) more than 3 billion photos!

With that many images, Clearview claims the accuracy of its facial recognition technology exceeds 99%. We are facing a search engine that resembles Google but searches only for your pictures, everywhere possible on the internet. Hoan Ton-That, the company's founder and chief executive, claims the technology is designed only to search for criminals who have already been convicted, but the problem, as usual, lies in the weight of the technology itself: could some people misuse it? And how could that be policed?

Thus the withdrawal of IBM and its peers, however weighty they are, from the race to develop facial recognition technology, or merely to trade in it with governments, may not significantly or permanently affect the course of the technology's development, which seems never to stop. From time to time, ventures like Clearview, following the Palantir (11) model, surface with a new shock, and a report issued last March by the market analysis firm Grand View Research indicated (12) that demand for the technology will drive market growth of 14.5% annually through 2027.

Moreover, outside the United States, and especially in non-democratic countries, the authorities have an overwhelming desire to obtain this type of technology, which will help it develop no matter how strong its opponents are. (For more on the Chinese experience in particular, its capabilities and its prospects, you can review (13) a previous "Meydan" report entitled: "Big Brother Is Watching You": How China Built the World's Largest Citizen Surveillance System.)

On the other hand, the withdrawal of these companies is not entirely meaningless. Their research certainly accelerates the development of these technologies, so their stopping will have some effect. More importantly, their withdrawal raises a great deal of noise about the issue in societies around the world, which may help pass laws that limit misuse, whether by government agencies of any kind or simply by someone you do not know.

Imagine that a man notices a woman in a café, knowing nothing about her. He could simply take her picture, without her knowledge, with his phone camera and feed it into any facial recognition application. He would then be able to reach all of her accounts, and perhaps her home or workplace. If he is reckless enough, he might suddenly decide to visit her at home and cause her a great deal of trouble. This is only one example; you can open the door of your imagination to what could happen if such a technology spreads.

Facial recognition technologies are genuinely useful and may help societies rid themselves of crime, but the dilemma the digital age presents, in its simplest form, runs deeper than that: safety or privacy. If you take the side of security, you lose a great deal of privacy, and freedom by extension; if you take the side of privacy, you lose a great deal of safety.

One of the most famous examples of this kind of controversy is unfolding now in the case of the novel coronavirus. Some countries have decided (14) to use intelligence-grade tracking programs on ordinary citizens, and this may escalate, especially since intelligence agencies are by nature insatiable for information, into permanent internet surveillance programs under the pretext of protecting health. If you reflect a little, you will find that health is the best bargaining chip with which to trade people out of their freedoms: they will agree to anything as long as it keeps them safe.

The future with facial recognition technology is unknown. It may be better, but it will change the shape of privacy as we know it forever, and that may in turn harm people. Technology is simply not just a knife that you choose to use for cutting tomatoes or killing your neighbor. As Jacques Ellul argues in his book "The Technological Bluff", technology exerts a powerful pull on us: once we possess it, our priorities shift around its use, and it develops from the scientific achievements of relativity and quantum theory into the bombs of Hiroshima and Nagasaki. The same applies to facial recognition: yes, it may give your life better features, but it may also give the intelligence services a better view of your life.

_____________________________________________

Sources

1. IBM CEO's Letter to Congress on Racial Justice Reform
2. We are implementing a one-year moratorium on police use of Rekognition
3. Microsoft Won't Sell Facial Recognition Tech to Police
4. Google favors temporary facial recognition ban as Microsoft pushes back
5. How George Floyd Was Killed in Police Custody
6. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification
7. Previous source
8. Facial recognition's 'dirty little secret': Millions of online photos scraped without consent
9. The Secretive Company That Might End Privacy as We Know It
10. Clearview AI's Database Has Amassed 3 Billion Photos. If You Want Yours Deleted, You Have To Opt Out
11. Palantir Knows Everything About You
12. Facial Recognition Market Size, Share & Trends Analysis Report By Technology (2D, 3D), By Application (Emotion Recognition, Attendance Tracking & Monitoring), By End Use, And Segment Forecasts, 2020-2027
13. "Big Brother Is Watching You": How China Built the World's Largest Citizen Surveillance System
14. Yuval Noah Harari: the world after coronavirus