Analyzing CVs automatically, evaluating body language and facial expressions in job interviews: algorithms have been able to do all of this for a long time, and artificial intelligence (AI) is being used more and more frequently in human resources. In applicant management, its supporters say, it has the potential to save a lot of work, to judge people more fairly and thus to ensure more diversity. But there are also dangers: software programs whose algorithmic evaluations discriminate against people have repeatedly made headlines. In a test run of a career-opportunity software by the Austrian labor market service, women were rated as having fewer opportunities based on their gender alone; one reason was that they had more gaps in their résumés due to care periods. Other programs discriminated on racial grounds or failed to recognize the faces of Black people at all. Instead of contributing to more diversity, such programs reinforce existing patterns of discrimination.

It doesn't have to be this way.

What matters is understanding what AI can and cannot do, says lawyer Victoria Guijarro Santos of Netzforma, an association for feminist internet politics.

"AI discrimination doesn't come out of nowhere," she says.

Judgments made by algorithms are not objective; they are the result of decisions made by their developers.

AI also makes mistakes.

Applications are often deployed once they are 85 percent accurate.

This means that up to 15 percent of the decisions are wrong.

"Even artificial intelligence can only map limited knowledge," she says.

"In some situations the programs should call in human decision-makers"

Jessica Heesen is convinced that there is another way. She is a media ethicist at the University of Tübingen and researches ethical questions around artificial intelligence in the "Platform for Learning Systems" project. For AI not to discriminate, it has to be developed and used responsibly, and that involves a number of hurdles. The first, according to Heesen, lies in the data sets with which AI is trained: minorities are not always represented in them, and sometimes the labeling of the data is racist or sexist. In addition, it is important to be transparent about the basis on which an AI makes its decisions. "The algorithms make evaluations, and they should correspond to our values."
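To make that point about training data concrete, here is a minimal sketch in Python (the dataset and the column names "gender" and "hired" are invented for illustration) of the kind of audit that can reveal whether a training set under-represents a group or already carries skewed labels:

```python
# Minimal training-data audit; the data and column names are invented.
import pandas as pd

# Toy stand-in for historical hiring data.
data = pd.DataFrame({
    "gender": ["f", "f", "f", "m", "m", "m", "m", "m", "m", "m"],
    "hired":  [0,   0,   1,   1,   1,   0,   1,   1,   0,   1],
})

# 1. Representation: is each group present in meaningful numbers?
print(data["gender"].value_counts(normalize=True))

# 2. Label balance: do the historical labels already encode a skew?
#    Any model trained on these labels will learn and reproduce
#    a large gap in positive rates between the groups.
print(data.groupby("gender")["hired"].mean())
```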

One of these values should be gender equality. To reduce discrimination here, says Guijarro Santos, one has to distinguish between formal and material equality: formal equality is equality on paper, while material equality tries to break down actual barriers. Where gender is concerned, material equality is legally stipulated in Germany; the Basic Law "works towards the elimination of existing disadvantages". An application like the one tested by the Austrian labor market service, which rates women worse because, according to the data, they were less successful in the past, would contradict this. "Such applications fail to look at women as individuals rather than as part of a statistical mass," says Guijarro Santos. Instead of overcoming prejudice, the AI reproduces discrimination. "This problem will always exist as long as we try to use data from the past to predict the future," she says.
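The mechanism she describes can be shown with a toy example (all numbers synthetic): a model fitted to skewed historical outcomes simply learns the skew back and predicts lower chances for the disadvantaged group even at identical qualification.

```python
# Toy illustration: a model trained on biased historical labels
# reproduces the bias. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)          # identically distributed in both groups

# Historical decisions: equally skilled members of group B were
# systematically rated lower in the past.
hired = (skill - 0.8 * group + rng.normal(0, 0.3, n) > 0).astype(int)

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# At identical skill, the model now predicts a lower hiring
# probability for group B: the historical skew, learned back.
same_skill = np.array([[0, 0.0], [1, 0.0]])
print(model.predict_proba(same_skill)[:, 1])
```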

A push from the EU

If an AI breaks the law, those affected can at least take legal action. But that is not enough for Victoria Guijarro Santos: "Measures have to be taken beforehand," she says. "Technical solutions alone are not enough." Jessica Heesen believes that labeling should be mandatory when companies use AI, and that quality standards are also important. "In some situations, the programs should call in human decision-makers," she says. Those decision-makers, however, would then also have to be better trained in how algorithms work, so that they are in a position to push back against an AI's decisions. Heesen also suggests certification: applications would be checked against criteria such as the quality of the training data, data protection, robustness against attacks, transparency and freedom from discrimination. "That would give everyone more security," she says.
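One way the handover to humans that Heesen describes can look in practice is an escalation rule on the model's confidence. This is a minimal sketch with invented thresholds and labels, not a real system:

```python
# Human-in-the-loop sketch: low-confidence cases are escalated to a
# person instead of being decided by the model. Thresholds are invented.
def decide(probability_suitable: float,
           lower: float = 0.3, upper: float = 0.7) -> str:
    if probability_suitable >= upper:
        return "shortlist"
    if probability_suitable <= lower:
        return "reject"
    # The model is unsure: hand the case to a human reviewer.
    return "refer to human decision-maker"

for p in (0.9, 0.5, 0.1):
    print(p, "->", decide(p))
```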

For her, a first step towards better regulation of AI is a move by the EU, which classifies most labor-market applications as "high risk". Particularly strict requirements are to apply to these; among other things, they are to be listed in a central register. "That is also right, because under certain circumstances programs like this can decide entire careers," says Heesen. If quality standards are adhered to, AI can even reduce discrimination: it could be used, for example, to deliberately suggest more women or people with a migration background for certain positions. And software like that of the Austrian labor market service could, instead of rating gaps in a résumé negatively, reward care periods.
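What that last suggestion could look like in code, as a purely hypothetical sketch (all field names and weights invented, not taken from any real system): a scoring rule that carves documented care periods out of the "gap" instead of counting them against the candidate.

```python
# Hypothetical résumé-scoring rule; fields and weights are invented.
def score_candidate(years_experience: float,
                    gap_years: float,
                    care_years: float) -> float:
    # Only gaps that are not explained by care work weigh negatively.
    unexplained_gap = max(gap_years - care_years, 0.0)
    score = 1.0 * years_experience
    score -= 0.5 * unexplained_gap   # unexplained gaps count against
    score += 0.25 * care_years       # care work is credited, not penalized
    return score

# A candidate whose three gap years were all care periods is no
# longer rated worse; here she even scores slightly higher.
print(score_candidate(8, gap_years=3, care_years=3))  # 8.75
print(score_candidate(8, gap_years=0, care_years=0))  # 8.0
```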