
"In a marriage in which the wife is the sole breadwinner and the husband earns nothing, the algorithm automatically classifies her so that she cannot take out a loan, but her husband can." The CDU politician Friedrich Merz looked like this as if he had lost the thread when the Green leader Annalena Baerbock confronted him with this statement on the political talk show "Anne Will".

Merz's confusion about Baerbock's statement may be understandable at first glance - an algorithm that disadvantages women?

But studies have shown that women and minorities such as people with a migration history are systematically disadvantaged not only in lending, but also in application processes, in processing insurance claims and in many other areas in which artificial intelligence (AI) is used.

On behalf of the Federal Government's Anti-Discrimination Agency, the Karlsruhe Institute of Technology (KIT) came to the conclusion that almost every system based on algorithms can act in a discriminatory manner.

In the case of the lending systems that Baerbock mentioned on the talk show, companies use so-called machine learning, an application of AI.

In order for this to work and the software to be able to make a decision, a person has to train the algorithm with data.

This enables the algorithm to recognize patterns and relationships and learn from the information.

However, every person has different thoughts, opinions and ideals that flow into this data selection process - consciously or unconsciously - and can thus produce a so-called “algorithmic bias”, i.e. a biased algorithm.
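As a rough sketch of that training step - the library, the numbers and the feature choice below are illustrative assumptions, not taken from the article - the whole process boils down to fitting a model to examples that a person has selected:

```python
# Minimal sketch of supervised training: the model learns only from the
# examples a person chose to give it (toy data, scikit-learn assumed).
from sklearn.linear_model import LogisticRegression

# Hand-picked examples: [monthly income in thousands of EUR, years in current job]
X_train = [[2.0, 1], [2.5, 3], [4.0, 10], [5.5, 12], [1.8, 0], [3.0, 5]]
y_train = [0, 0, 1, 1, 0, 1]  # 1 = loan repaid, 0 = defaulted (invented labels)

model = LogisticRegression()
model.fit(X_train, y_train)        # the "training" step described above

# The model generalizes from exactly these examples - including whatever
# bias went into selecting them.
print(model.predict([[3.2, 4]]))   # decision for a new, unseen applicant
```

Whatever patterns the curated examples contain, whether representative or skewed, are the only thing the fitted model can reproduce.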


There can be many reasons why women are disadvantaged in lending: for example, a machine learning model could have been fed with data from which it infers that men have taken out loans more often in the past, borrowed larger amounts or earned more than female applicants.

From this, software can conclude that a man is more reliable and therefore more creditworthy than a woman - even if the woman is financially well positioned.
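What that could look like in code is sketched below with invented numbers; the data, the features and the use of scikit-learn are assumptions for illustration, not the actual credit-scoring systems Baerbock referred to. If the historical records mostly show men receiving loans and gender is encoded as a feature, the fitted model can score a financially identical woman lower:

```python
# Illustrative sketch with invented data: historical lending records in which
# men were approved more often, used to train a credit-scoring model.
from sklearn.linear_model import LogisticRegression

# Features: [monthly income in thousands of EUR, gender (1 = male, 0 = female)]
# Label: 1 = loan granted, 0 = loan refused
X_hist = [
    [3.0, 1], [3.5, 1], [4.0, 1], [2.8, 1],   # male applicants, all approved
    [3.0, 0], [3.5, 0], [2.6, 0], [4.2, 0],   # female applicants, mostly refused
]
y_hist = [1, 1, 1, 1,
          0, 0, 0, 1]

model = LogisticRegression().fit(X_hist, y_hist)

# Two applicants with identical finances, differing only in the gender field:
man, woman = [3.5, 1], [3.5, 0]
print(model.predict_proba([man])[0][1])    # approval probability for him
print(model.predict_proba([woman])[0][1])  # approval probability for her
# The woman's score comes out lower although her income is the same - the
# model simply reproduces the pattern in the historical data.
```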

“This example of lending shows the immense effects that errors in algorithms can have,” explains American artist and activist Caroline Sinders.

“What if a woman wasn't married?

Then she might not be able to buy a home of her own, even though she was financially able to do so.

Something like that can set a person back enormously.” Sinders develops machine learning systems. Among other things, she has created a “feminist data set”, an archive that now consists of hundreds of feminist texts which she collects with participants in workshops entitled “Feminist Data Set”.

Sinders first became skeptical of AI when, as an experiment, she asked the Apple voice assistant Siri about domestic violence - and was met with a disappointing answer.

"I asked myself how it could be possible that you do not get any basic information on such a topic, which affects a lot of people," says the activist.

She explains this by the fact that women are underrepresented on Apple's development teams.

“A chatbot gets about half of what it says from search engines.

The other half is written by a person, like a script.

If more women were involved in this process, Siri might be able to provide better information about domestic violence, which affects women disproportionately.”


According to the World Economic Forum's “Global Gender Gap Report 2018”, only 22 percent of those working in the AI industry are women.

If the developer teams who feed machine learning models with data are mainly male, it is obvious that the selected data set corresponds to a largely male reality.

Ultimately, more diversity in the industry could also contribute to fairer algorithms.

"We have to recognize the digital space including artificial intelligence as an extension of the real world and as a mirror," writes the European Advisory Committee on Equal Opportunities for Women and Men in an opinion article on the topic of challenges for gender equality in AI.

Algorithms are like a "black box"

What makes algorithms both magical and dangerous tools is their ability to learn and make decisions on their own.

Decision-making processes are often not clearly visible; the structure of an algorithm resembles a kind of "black box" that spits out decisions without the underlying process being comprehensible.

Only at the end of the process does it become apparent that the software has drawn incorrect conclusions - as in the case of an Amazon recruiting tool whose software had been developed internally since 2014.

The tool was more likely to sort out applications from women because more men had applied, which the algorithm interpreted as greater interest in the positions on the part of men.

The software could not account for the fact that women are underrepresented in the tech industry and that fewer of them therefore applied to Amazon.
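The same mechanism can be sketched in the same hedged way, again with invented toy data rather than Amazon's actual system: when past applicants, and therefore past hires, are overwhelmingly male, a model trained on those records can pick up "male" as a signal for suitability.

```python
# Illustrative sketch (invented data, not the real Amazon tool): a screening
# model trained on a historically male-dominated applicant pool.
from sklearn.linear_model import LogisticRegression

# Features: [years of experience, gender (1 = male, 0 = female)]
# Label: 1 = was hired in the past, 0 = was rejected
X_hist = [[4, 1], [5, 1], [6, 1], [2, 1], [3, 1], [7, 1],   # mostly male pool
          [5, 0], [6, 0], [4, 0]]                            # few female applicants
y_hist = [1, 1, 1, 0, 0, 1,
          0, 1, 0]

model = LogisticRegression().fit(X_hist, y_hist)

# Two candidates with identical experience:
print(model.predict_proba([[5, 1]])[0][1])  # male candidate
print(model.predict_proba([[5, 0]])[0][1])  # female candidate
# Because past hires were mostly men, the male candidate tends to get the
# higher score - the model is never told that women were simply
# underrepresented in the applicant pool.
```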

The fact that algorithms do not correctly interpret relationships can also lead them to make racist decisions.

This is shown by the example of the Twitter user HereroRocher.

In 2015, she typed “unprofessional hairstyles for work” and “professional hairstyles for work” into the Google search bar.

While Google presented blonde, thin women with straight hair in strict updos for her search term “professional hairstyles”, the search engine showed mostly black women with afros for “unprofessional hairstyles”.

Google may have scoured the internet and discovered the keywords “unprofessional hairstyle” in connection with the hairstyles featured on blogs where non-white people complained about being discriminated against for their hair.



To keep mere correlations from being turned into causal claims, algorithms should become more transparent; developers of AI products could then understand how decisions are made and adjust the algorithm.

Sinders is also in favor of regularly testing an algorithm.

“Programmers should take an intersectional approach to checking AI,” says the activist.

By an intersectional approach, she means that different categories of discrimination are considered together.

“For example, the developers could ask Siri questions that affect marginalized groups and examine how the voice assistant responds;

whether certain dialects are understood or whether there is sufficient information on topics such as racial discrimination or domestic violence.

Or they enter certain words into the search engine and see what users are shown.”
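What such a regular, intersectional check could look like is sketched below; the column names, groups and numbers are invented for illustration and are not a procedure prescribed by Sinders. A model's decisions are broken down not only by single attributes but by their combinations, here gender and migration history, two of the groups mentioned in this article.

```python
# Illustrative audit sketch (invented columns and data): compare a model's
# decisions across intersections of demographic groups.
import pandas as pd

# In practice the `decision` column would come from the model under test.
results = pd.DataFrame({
    "gender":            ["f", "f", "m", "m", "f", "m", "f", "m"],
    "migration_history": [True, False, True, False, True, False, False, True],
    "decision":          [0, 1, 1, 1, 0, 1, 1, 1],   # 1 = approved
})

# Approval rate per single attribute ...
print(results.groupby("gender")["decision"].mean())

# ... and per intersection of attributes, where combined disadvantages
# that single-attribute checks miss become visible.
print(results.groupby(["gender", "migration_history"])["decision"].mean())
```

Large gaps between these rates would be a signal to re-examine the training data and the model before, or while, it is in use.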

The problem of discriminatory algorithms does not end with a misleading Google search, however; they can also endanger freedom or even lives.

A 2019 study by the Georgia Institute of Technology found that self-driving cars detect dark-skinned people less reliably than light-skinned people.

The researchers criticized the fact that the models they examined had mainly been trained on light-skinned people.

However, at the time of the study, the team did not have access to the latest object recognition models currently used by car manufacturers.

The investigative platform ProPublica found in 2016 that a system used in the USA to support decisions on early release from prison treated dark skin color as a decisive criterion for a high predicted likelihood of reoffending.

Feminist voice assistant

The fact that the use of AI can pose a threat to equality is receiving increasing attention on a social and political level.

In 2018, for example, the Bundestag founded the Study Commission “Artificial Intelligence - Social Responsibility and Economic, Social and Ecological Potential”, which is made up of members of the Bundestag and experts and deals, among other things, with discrimination in this area.

One of its goals: to impart knowledge to companies in order to avoid future discrimination through faulty software development.

And there is also a lot going on in the tech and art scene.

In the “Art+Feminism” editing marathon, which takes place every year on International Women's Day, participants jointly write Wikipedia pages for important women in the art scene in order to give them more visibility.

Last year, the “Feminist Internet” group developed a “feminist voice assistant” that breaks with the stereotype of the “submissive” woman found in assistants like Siri and Alexa.

Sinders believes that society strives to live up to the great responsibility that AI brings with it.

Nevertheless, we shouldn't rely entirely on them.

“Algorithms are not perfect.

So we shouldn't blindly trust them when making important decisions.”

