Suppose you are a Palestinian journalist covering a medical topic: the Covid-19 pandemic. You write several reports and post excerpts of them on Facebook with a link for anyone who wants to read more, only to be surprised that Facebook deletes a post that has nothing to do with politics and even sends you a sternly worded warning. That is when you realize the algorithms are not as smart as you thought. They do not see context; they only notice that the phrase "the war on Covid-19 in Palestine" contains "war" and "Palestine" in the same sentence, and conclude that this violates the platform's standards.

The same applies, more broadly, to almost everything else. You will not be able to cover the current battle, for example, because the moment Facebook's algorithms catch the scent of words like "Hamas", "resistance", or "Jews", they sharpen their knives for deletion and blocking. These algorithms were trained to counter hatred of Jews and of Zionism, built mainly to resist posts by far-right supporters of Nazi ideology, but they are now turned against the Palestinian resistance in both its peaceful and armed forms, so that only the occupying state is allowed to publish its ideas and its biased point of view. In this report it will become clear how algorithms can mislead, and why platforms like Facebook and others must handle context in a way that allows everyone to present their case; otherwise the platform becomes part of a war machine, contributing to the killing of innocent people.
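To see how blunt this kind of matching can be, here is a minimal Python sketch of a purely keyword-based filter with a made-up rule list. It illustrates the failure mode described above; it is not a description of Facebook's actual system.

```python
# A minimal sketch (hypothetical rules, not Facebook's actual system) of why
# keyword-based moderation misses context: the flag fires whenever two terms
# co-occur, regardless of what the sentence actually says.

FLAGGED_PAIRS = [({"war"}, {"palestine"})]  # invented rule, for illustration only

def naive_flag(post: str) -> bool:
    # Crude tokenization; the point is that no context is consulted at all.
    words = set(post.lower().replace('"', " ").replace(",", " ").split())
    return any(a & words and b & words for a, b in FLAGGED_PAIRS)

print(naive_flag("Our hospitals are leading the war on Covid-19 in Palestine"))  # True
print(naive_flag("New Covid-19 vaccination centres open across the city"))       # False
```

A medical report and a call to violence look identical to such a filter as long as the same two words appear in them.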

"Everyone thinks the algorithms are objective, correct and scientific. This is a marketing ploy."

(Cathy O'Neil)

Kyle is a quiet, good-looking young man; even a short conversation with him puts you at ease. While in Atlanta to continue his university studies, he decided to take a job to shoulder part of the cost of his tuition, so he applied to one of the largest retail chains there, the Kroger supermarkets. At the company's premises he was asked to take a simple personality test of several dozen questions, and a few weeks after submitting his application he was rejected.

Kyle Behm

Usually, what happens in a situation like this is that you look for another job and that is the end of it, and Kyle was indeed preparing to do just that, not giving the matter much thought. But his father (1), Mr. Behm, who works as a lawyer, was puzzled: it is just a retail store. What could be so demanding about this job that Kyle could not handle it?

So Mr. Behm asked his son about the nature of the test he had taken, and Kyle replied that its questions resembled the ones he used to answer in hospital while being treated for bipolar disorder.

Laws in America, and around the world, prohibit the use of mental health tests in screening job applicants, yet when Mr. Behm and his son applied to other large companies they found similar tests; you can even find tests of this kind at companies such as RadioShack or McDonald's. Here you might ask: did these companies intend to exclude a specific category of people, those with mental disorders, from working for them?

In fact, they did not; these companies know the law perfectly well.

The problem lay, rather, in how data-analysis companies designed those tests.

AI algorithms work by a simple mechanism that Cathy O'Neil explains in her book "Weapons of Math Destruction" using the example of feeding her children.

As a mother, she can define success in this task as getting the largest possible amount of healthy food into her children over the course of a single day, so she tries every trick available: pairing vegetables and fruit with small rewards, sweetening milk with honey, or allowing chocolate only after the whole vegetable plate is finished. When one of these methods works with her children, Cathy focuses on developing it further to get the best possible result next time.

The same thing happens in the world of self-learning AI algorithms. When you ask a data company to design a mechanism to evaluate new employees for your business, the maker of the algorithm sets a definition of a successful employee, for example one who received a promotion within three years, during which fewer than three complaints were filed against them, while their productivity stayed within 70-80%. When such an employee appears, the algorithm takes them as a model and tries to repeat the pattern, and with each cycle the accuracy of its selections improves through what we call feedback loops (2). If you own a candy factory, the marketing department polls people's opinions about your first production batch, those opinions are used to refine the next batch, then the next, and so on.
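As a rough illustration of that loop, the toy Python sketch below hard-codes the hypothetical definition of success mentioned above (promotion within three years, fewer than three complaints, productivity in the 70-80% band), scores applicants by their similarity to past successful hires, and feeds each round's outcome back into the data. All names and numbers are invented; this is not any vendor's product.

```python
# A minimal sketch of the hiring feedback loop: "success" is hard-coded by the
# designer, candidates are scored by similarity to past successful hires, and
# each round's outcomes are fed back into the training data.

import random

random.seed(0)

def is_successful(employee):
    # The designer's definition: promoted within 3 years, fewer than
    # 3 complaints, productivity kept between 70 and 80 percent.
    return (employee["promoted_years"] <= 3
            and employee["complaints"] < 3
            and 70 <= employee["productivity"] <= 80)

def score(candidate, past_hires):
    # Naive scoring: distance from the average test score of past
    # successful employees (smaller distance means a higher score).
    successes = [e for e in past_hires if is_successful(e)]
    if not successes:
        return random.random()
    avg = sum(e["test_score"] for e in successes) / len(successes)
    return -abs(candidate["test_score"] - avg)

def random_candidate():
    return {"test_score": random.uniform(0, 100)}

def observe_outcome(candidate):
    # The hire's outcome, recorded after the fact (simulated here).
    return {**candidate,
            "promoted_years": random.randint(1, 5),
            "complaints": random.randint(0, 4),
            "productivity": random.uniform(50, 100)}

past_hires = [observe_outcome(random_candidate()) for _ in range(20)]

for round_no in range(5):                      # each cycle is one feedback loop
    applicants = [random_candidate() for _ in range(50)]
    best = max(applicants, key=lambda c: score(c, past_hires))
    past_hires.append(observe_outcome(best))   # the outcome feeds the next round
    print(f"round {round_no}: hired candidate with test score {best['test_score']:.1f}")
```

Whatever pattern happened to succeed early on gets reinforced round after round, which is exactly where the trouble described next begins.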

Here the problem appears: certain social groups face severe pressure because of their sex, race, mental state, or even political status. Take Black people in the United States, the most studied example. Rates of marijuana use among Black and white Americans are roughly the same, yet Black users end up in prison at three to four times the rate of white users.

Now suppose you are the mayor of a city and want to reduce crime. Big-data companies offer the police a new concept called "predictive policing" (3): self-learning algorithms, powered by artificial intelligence, are fed with data about the city's residents. The algorithm quickly learns that Black residents are more likely to be imprisoned for petty crimes, a consequence of police bias in the first place, so it recommends stationing more patrol cars in Black neighborhoods, which in turn sends even more Black residents to prison, and so the feedback loop continues.
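The toy simulation below sketches that loop under stated assumptions: two neighborhoods with identical true rates of petty crime, an initial arrest record skewed by earlier bias, and patrols allocated in proportion to recorded arrests. It is not any real product, only a way to watch the recorded gap refuse to close.

```python
# A minimal sketch of the predictive policing feedback loop: both neighborhoods
# have the SAME underlying rate of petty crime, but one starts with more
# recorded arrests because of earlier biased policing. Patrols follow the
# records, and more patrols produce more recorded arrests.

import random

random.seed(1)

TRUE_CRIME_RATE = 0.05                              # identical in both places
recorded = {"white_nbhd": 10, "black_nbhd": 30}     # historical bias in the data
TOTAL_PATROLS = 100
STOPS_PER_PATROL = 20

for week in range(1, 11):
    total = sum(recorded.values())
    for nbhd in recorded:
        patrols = round(TOTAL_PATROLS * recorded[nbhd] / total)  # allocate by past records
        # Every stop observes an incident at the same true rate everywhere,
        # but more patrols means more incidents end up in the records.
        new_arrests = sum(random.random() < TRUE_CRIME_RATE
                          for _ in range(patrols * STOPS_PER_PATROL))
        recorded[nbhd] += new_arrests
    print(f"week {week:2d}: recorded arrests {recorded}")
```

Even though the true rates never differ, the over-policed neighborhood keeps generating the larger share of arrests, which keeps attracting the larger share of patrols.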

In her book "Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor", Virginia Eubanks gives the example of an algorithm used to predict the likelihood that a child will be exploited or harmed in Allegheny County, Pennsylvania. The algorithm obtains its data from the surrounding government institutions, offices that deal mainly with the working class and the poor, and because of that data bias it concludes that exploitation, neglect, or abuse of children is extremely common among the poor.

In reality, child abuse is indeed somewhat more common among the poor, but not by the huge margin the algorithm suggests. Through its feedback loops, the algorithm assigns ever greater weight to poverty in its calculations, and so it can turn a blind eye to middle-class families, where children are also abused at a noticeable rate, simply because those families do not pass through government offices, clinics, or public hospitals; they deal with private providers that do not hand their data to the algorithm.
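A small sketch, using invented numbers, of the sampling problem described here: the model is trained only on families who pass through public services, so poor families dominate the cases it can see even when the underlying rates are close.

```python
# A minimal sketch (toy numbers, hypothetical) of selection bias in the data:
# true abuse rates are close across classes, but poor families are far more
# likely to appear in the public records the algorithm is trained on.

import random

random.seed(2)

def simulate_family(income):
    abused = random.random() < (0.04 if income == "poor" else 0.03)   # close true rates
    visible = random.random() < (0.9 if income == "poor" else 0.2)    # who uses public services
    return {"income": income, "abused": abused, "visible": visible}

population = ([simulate_family("poor") for _ in range(10_000)] +
              [simulate_family("middle") for _ in range(10_000)])

# What the algorithm is trained on: only families seen by public services.
training = [f for f in population if f["visible"]]

def true_rate(income):
    group = [f for f in population if f["income"] == income]
    return sum(f["abused"] for f in group) / len(group)

print(f"true abuse rate, poor:   {true_rate('poor'):.1%}")
print(f"true abuse rate, middle: {true_rate('middle'):.1%}")
print("cases visible to the algorithm, poor vs middle:",
      sum(f['abused'] for f in training if f['income'] == 'poor'), "vs",
      sum(f['abused'] for f in training if f['income'] == 'middle'))
```

The model ends up seeing many times more poor cases than middle-class ones, so it learns poverty itself as a strong predictor.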

The problem of data bias appears even more clearly in facial recognition technologies, which fail to recognize the faces of people who are not white far more often than they fail with fair-skinned people. One reason is that these systems were originally fed far more data about white people, so white faces became their implicit standard. When we use these technologies in police stations, they can confuse people of other races and misidentify them, which is a serious problem for those people.

Joy Buolamwini, now at the Massachusetts Institute of Technology (MIT), is interested in comparing how different facial recognition systems perform across gender and race.

In her research paper (4) published in 2018, the results confirmed that this software was roughly 10% less accurate for dark-skinned people than for white men, and it was clear that the darker the skin, the larger the error.

But the main problem emerged when Buolamwini compared error rates between white men and dark-skinned women: in one popular IBM system the error reached 34%, and for the darkest skin tones it approached 50%. These results ignited a fierce debate in technical and public circles, not because they were the first of their kind, but because it was the first time the software being tested was software already deployed on the market.
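The point of Buolamwini's approach, evaluating error rates per subgroup rather than reporting one overall number, can be illustrated with the short sketch below. The predictions are fabricated to echo the pattern described in this paragraph; they are not the Gender Shades data.

```python
# A minimal sketch of disaggregated evaluation: a single overall accuracy
# figure can hide very different error rates across subgroups.

from collections import defaultdict

# (subgroup, prediction_correct) for a hypothetical gender classifier.
results = (
    [("lighter_male", True)] * 99 + [("lighter_male", False)] * 1 +
    [("lighter_female", True)] * 93 + [("lighter_female", False)] * 7 +
    [("darker_male", True)] * 88 + [("darker_male", False)] * 12 +
    [("darker_female", True)] * 66 + [("darker_female", False)] * 34
)

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    errors[group] += not correct

overall = sum(errors.values()) / sum(totals.values())
print(f"overall error rate: {overall:.1%}")                     # looks acceptable
for group in totals:
    print(f"{group:>15}: {errors[group] / totals[group]:.1%}")  # it is not
```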

The same problem appeared in health care. A 2019 study in the prestigious journal "Science" indicated (5) that, because of disparities already present in the data of the American health system, the algorithms used to prioritize candidates for treatment programs reproduce those biases: the system uses past health spending as a stand-in for medical need, and since less money is spent on Black patients than on white patients of the same age and with the same clinical profile, a Black diabetic is scored as less needy than a white diabetic, and the white patient ends up ahead in the treatment queue.
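A minimal sketch of that proxy problem, with invented patients and dollar figures: ranking by predicted spending instead of medical need pushes equally sick patients from the lower-spending group down the queue.

```python
# A minimal sketch (toy numbers, not the Science study's data) of the
# proxy-label problem: ranking patients by predicted *cost* rather than
# predicted *need* penalizes a group that historically had less spent on it.

patients = [
    # name,        true_need (0-10), past_annual_cost ($), group
    ("patient_a",  8,                9_000,                "white"),
    ("patient_b",  8,                5_500,                "black"),  # same need, lower spend
    ("patient_c",  5,                6_000,                "white"),
    ("patient_d",  5,                3_800,                "black"),
]

# "Success" for the algorithm is predicting cost well, so it ranks by cost.
by_cost = sorted(patients, key=lambda p: p[2], reverse=True)
by_need = sorted(patients, key=lambda p: p[1], reverse=True)

print("ranked by predicted cost:", [p[0] for p in by_cost])
print("ranked by actual need:   ", [p[0] for p in by_need])
# The cost-based ranking pushes equally sick patients in the lower-spending
# group further down the treatment queue.
```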

Do you notice? Algorithms appear neutral by their very nature.

All an algorithm does is repeat its feedback cycles over time to reach a result that fits its definition of success; the problem is that it amplifies biases that already exist. In the workplace, for example, women face serious obstacles in certain jobs because of bullying and stereotyping, so when an algorithm is asked to identify the traits of people who succeed in such a job, it will automatically dismiss female applicants. It does not see our human biases; it only sees data.

Add to this another big problem: stereotype threat (6), defined as the tendency of members of a particular group to confirm society's preconceived idea about them. A girl who is constantly told that women are bad at a given field of study, for example, may lose self-confidence while studying it and actually end up confirming society's image of her, even though the problem never had anything to do with her abilities.

Thus Ali, an Egyptian Muslim who has emigrated to a foreign country, may run into a similar problem when he applies to a scientific institution and takes its screening test, simply because he is Egyptian or because he is Muslim. The algorithm, if no one intervenes to adjust it, can give extra weight to that data merely because its definition of success registers that three Muslims failed at this job before, which lowers his chances of being accepted, even though success in the job has nothing to do with gender, nationality, or religion. (There is no data documenting this particular kind of bias; the example is hypothetical, for illustration only.)

But the biggest problem is how people perceive terms such as "artificial intelligence", "algorithms", "deep learning", or "artificial neural network": they take them to be neutral, honest, and grounded in science, and so prejudice or racism against others acquires a new kind of authority, one that we humans, in our contemporary societies, were simply not prepared for.

Let's think about "Google" to understand this idea in a deeper understanding. When I ask you why you use it, you will usually answer that it is easy, or that you do not know anyone else, or that it is "accurate". This last remark is important, because people really think that Google search results are neutral in the absolute, Through it, you can get an accurate description of a phenomenon, search for example for "water" or "volcano" or "bipolar disorder". Usually you will meet with accurate and scientific data on the matter, but what if you decided to search (7) in the pictures for "Latin Girl or Asian Girl, or a US citizen decides to search for “Black Girl”?

Here the results lean far more toward pornographic content than a search for "American girl" does, for example. Does this mean that Latina girls, say, are of ill repute?

Of course not, and the algorithm did not intend any of it; it simply worked, through feedback loops, to anticipate what image searchers wanted and match it. But for someone used to searching for pictures, these results can leave the impression that girls from Latin America are, in general, of ill repute, which in turn perpetuates racism.

Well, the algorithm asks "What is success?" and works to answer it as well as it can, but it never asks: What is fair?

What is privacy?

Who deserves support more than another?

What is the moral thing?

What is the truth?

What is transparency?

These last questions have been the subject of philosophical and social debate for centuries; many answers have emerged, and whole schools of thought have formed around one answer at the expense of another. Now, in the era of big data, we must reconsider those answers very carefully, because their impact reaches far deeper than a Facebook or Twitter account or a company's hiring algorithm: it shapes our entire human community.

"Those who live by numbers, die by numbers," the eminent Oxford philosopher of information Luciano Floridi once said (8), meaning that big data and its algorithms are not just one more thing in our lives but a condition we live inside and whose influence we cannot escape, whatever happens. It is the closest thing to what water is to a fish, and the biggest problem is that we know almost nothing about it. We use the app on our smartphone, our Facebook account, or the algorithms that rate teachers as "best" and "worst" to save time and effort, without knowing how any of it works. And where there is ignorance, prejudice easily finds its way in.

Currently, many experts in the field of big data are working on ways to build algorithms that avoid bias. This is indeed possible given enough time and effort, or at least given an initial awareness that problems of this kind exist in algorithms that are usually marketed as "objective" and "neutral", a claim we should not concede entirely.

__________________________________________________

Sources

  • Weapons of Math Destruction - Cathy O'Neil

  • Positive and Negative Feedback Loops in Biology

  • Predictive policing algorithms are racist. They need to be dismantled.

  • Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification

  • Dissecting racial bias in an algorithm used to manage the health of populations

  • Stereotype Threat Widens Achievement Gap

  • Algorithms of Oppression - Safiya Umoja Noble

  • The Fourth Revolution: How the Infosphere Is Reshaping Human Reality - Luciano Floridi, translated by Louay Abdel-Majid Al-Sayed (Issue 452)