AI may discriminate against you
December 16, 15:28

The job-information website “Rikunavi” drew attention to user privacy when it emerged that the site had sold companies predictions of how likely individual students were to decline job offers.

But what happens if AI decides, based on student data, whom companies should hire? In the United States, where AI is spreading into one field after another, a debate over “AI bias” has begun. What has emerged from it is a sense of crisis that AI may reproduce society’s discrimination.
(International Department reporter Taichi Soga)

What is “AI Bias”?

“AI must be fair to everyone”
(Microsoft executive)

In September, a symposium on AI was held in Silicon Valley, the birthplace of many of these technologies.

Executives of prominent companies took the stage one after another to raise the issue of “AI bias”. What does “AI bias” mean? In AI systems, computers analyze data automatically, based on rules and patterns found through machine learning on the large amounts of data we have accumulated.

However, if the data is biased, the judgments made by the AI’s “algorithm” (the computer program that processes the data) may turn out biased as well.

In other words, the “fairness” of the data that AI learns from is being called into question.
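
To make this concrete, here is a minimal sketch with invented data. The “model” below is nothing more than the hire rate observed for each group in past decisions, and the “pattern” it learns simply reproduces the historical skew:

    # A minimal sketch (hypothetical data): the "pattern" a system learns
    # from skewed records is just the skew itself.
    past_decisions = [
        # (group, hired) -- group "A" was hired far more often in the past
        ("A", True), ("A", True), ("A", True), ("A", False),
        ("B", False), ("B", False), ("B", True), ("B", False),
    ]

    def hire_rate(group):
        outcomes = [hired for g, hired in past_decisions if g == group]
        return sum(outcomes) / len(outcomes)

    print(hire_rate("A"))  # 0.75 -- the learned "pattern" favors group A
    print(hire_rate("B"))  # 0.25 -- and penalizes group B, fairly or not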

Corporate practices reflected in the data

Take Amazon.com, the online retail giant: last year it scrapped an AI-based recruiting system after finding that it “discriminated against women”.

Amazon had the AI learn from ten years of past data, such as resumes and hiring outcomes, to decide which applicants HR should interview. But because the staff hired over that period were overwhelmingly male, the AI is believed to have concluded that “women are not suited” to IT-related jobs.

What was wrong in this case?

The AI program itself produced exactly the results it was built to produce, based on the data it learned. Nor was the data itself wrong. But that data reflected a hiring culture that had long favored men, and the result was a lack of fairness.
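
As an illustration of this failure mode, consider the hypothetical sketch below. It is not Amazon’s actual system: a classifier is trained on synthetic past hiring decisions in which a gendered signal on a resume happens to correlate with rejection, and it learns to penalize that signal even when qualifications are identical.

    # Hypothetical sketch, not Amazon's system. Synthetic history in which
    # equally qualified applicants were rejected when a gendered signal
    # (e.g., the word "women's" on a resume) was present.
    from sklearn.linear_model import LogisticRegression

    # Features: [years_of_experience, resume_contains_gendered_signal]
    X = [
        [5, 0], [6, 0], [4, 0], [7, 0],   # hired in the past
        [5, 1], [6, 1], [4, 1], [7, 1],   # equally experienced, but rejected
    ]
    y = [1, 1, 1, 1, 0, 0, 0, 0]          # the past decisions the model learns

    model = LogisticRegression().fit(X, y)

    # Two candidates with identical experience; only the signal differs.
    print(model.predict_proba([[6, 0]])[0][1])  # high "hire" score
    print(model.predict_proba([[6, 1]])[0][1])  # low "hire" score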

“Diversity” in data

As AI is used more and more widely, demand for people who can design AI algorithms is rising rapidly.

In this industry, however, white men are heavily overrepresented, and a movement to correct that imbalance is under way.

In September, a panel discussion on racial balance in the IT industry was held in New Orleans, in the southern state of Louisiana. New Orleans is a city where black residents make up about 60% of the population, and technology industries such as AI are growing rapidly there.

Sabrina Short, one of the panelists, is based in New Orleans, where she runs seminars that help black IT engineers build their skills and share job information, and she campaigns to raise awareness of the racial imbalance in the tech industry. When I asked her about the current situation, she gave me an example.

Ms. Short: “There was a game that used facial-recognition technology to predict what you would look like when you got older. When I tried it, the face it showed me was that of a blonde, white woman. That means the AI hadn’t learned enough facial data.”

In other words, the AI used in this game had learned the face data of elderly white people but not enough face data of elderly black people. The original data itself was likely biased.
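
One way to surface this kind of problem is simply to audit how each group is represented in the training data, since a model rarely performs well for groups it has barely seen. The sketch below uses invented labels and counts:

    # Hypothetical audit of a face-image training set (invented counts).
    from collections import Counter

    training_faces = ["white"] * 9000 + ["black"] * 500 + ["asian"] * 500

    counts = Counter(training_faces)
    total = sum(counts.values())
    for group, n in counts.items():
        print(f"{group}: {n} images ({n / total:.0%} of training data)")
    # white: 9000 images (90% of training data) <- the skew the model inherits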

Ms. Short: “Technology like AI is being made today without much consideration for different skin colors, voices, and races. It needs to be built on a wider range of data and examples. Silicon Valley is now facing these issues and working to solve them, but we want to create a fair environment that takes everyone into account before it comes to that.”

Discrimination is reproduced by AI

The dilemma is that as long as data is drawn from our society, it cannot escape society’s “biases”.

To learn how to deal with this issue, we visited Assistant Professor Angèle Christin at Stanford University, who studies it. Dr. Christin pointed out the dangers of designing AI on biased data:

Dr. Christin: “Take crime, for example. Police officers may hold prejudices against black people and low-income people. If an AI is designed on the data those officers produce, the AI will judge that more officers should be deployed in the areas where black and low-income people live. And if more officers are deployed there and arrests increase, the AI learns from that data in turn. As long as the data is biased, the current discriminatory situation will simply be reproduced by AI.”
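
Dr. Christin’s feedback loop can be shown in a toy simulation. This is my own sketch with invented numbers, not her model: patrols are allocated in proportion to past recorded arrests, and recorded arrests rise with patrols, so an initial skew in the record never corrects itself even though the true crime rate is identical everywhere:

    # Toy simulation (invented numbers) of the feedback loop described above.
    arrest_record = {"district_A": 60.0, "district_B": 40.0}  # biased history
    TRUE_CRIME_RATE = 0.5   # assume crime is identical in both districts
    TOTAL_PATROLS = 100

    for year in range(1, 6):
        total = sum(arrest_record.values())
        arrest_record = {
            # patrols follow past arrests; new arrests follow patrols
            district: TOTAL_PATROLS * past / total * TRUE_CRIME_RATE
            for district, past in arrest_record.items()
        }
        print(year, {d: round(v) for d, v in arrest_record.items()})
    # The 60/40 split persists every single year: the record keeps
    # "confirming" the original bias, though true crime rates are equal.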

AI reproduces discrimination. How can we avoid that situation?

Dr. Christin: “We have to think seriously about the ‘unexpected results’ AI can produce. If the people who will actually use an AI system in the field are involved from the design stage, I think we can avoid building systems whose behavior no one understands.”

Data bias is social bias

In the Rikunavi case, students’ probability of declining a job offer was predicted on a five-level scale and sold to companies. According to Rikunavi, the data was not used in pass/fail decisions about students.

Even so, it would be no surprise if such data found its way into corporate recruiting. What happens then, if an AI judges that “students from University A have a high probability of declining offers, so fewer offers should go to University A students”?
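
The worry can be made concrete with a hypothetical sketch. This is not Rikunavi’s actual system, and the scores are invented: if applicants are filtered on a five-level predicted “decline score” attached to their university, every student from a high-scoring university is cut before any individual assessment takes place:

    # Hypothetical filter (invented scores), not Rikunavi's actual system.
    applicants = [
        {"name": "student1", "university": "A"},
        {"name": "student2", "university": "B"},
        {"name": "student3", "university": "A"},
    ]
    # Five-level predicted probability of declining an offer (1 = low, 5 = high)
    decline_score = {"A": 5, "B": 2}

    shortlist = [a for a in applicants
                 if decline_score[a["university"]] <= 3]
    print([a["name"] for a in shortlist])  # ['student2'] -- University A
                                           # students never got a look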

As labor shortages grow serious, improving operational efficiency and productivity has become urgent, and the use of AI is advancing without pause. Through this reporting, however, I came to feel that we must pay close attention to the “unexpected results” AI may bring.

If data is a mirror that reflects society, can we say that our society is fair to sexual minorities, people with disabilities, and foreigners? To keep AI from reproducing such a society, I came to feel that accurately grasping society’s “biases” will be indispensable as we make further use of AI.

International Department reporter Taichi Soga
Joined in 2012. After postings including the Asahikawa bureau, studied the future of media as a visiting researcher at Stanford University. Now with the International Department.