Scientists at Carnegie Mellon University (USA) have found that artificial intelligence sometimes misinterprets the conversations of chess fans as racist statements.

This was reported by the university's press service.

Ashiqur KhudaBukhsh, a researcher at the university's Language Technologies Institute, and his colleague, research engineer Rupak Sarkar, studied 680,000 user comments left on five popular chess YouTube channels.

They analyzed these comments using two modern speech classifiers.

This software, built on artificial intelligence (AI) techniques, is designed to identify hate speech: offensive, intolerant language that incites racial hatred.

As a result, the AI programs flagged many comments as "offensive," but when humans reviewed 1,000 randomly selected messages of this kind, it turned out that 82% of them were ordinary chess players' talk about white and black pieces, the rival sides' attacks on each other, and methods of defense.
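This failure mode is easy to illustrate without the study's actual models. Below is a minimal, purely hypothetical sketch in Python: the trigger words and weights are invented for this example, but they mimic how a naive toxicity scorer can flag innocuous chess commentary simply because words like "black," "white," and "attack" carry weight on their own.

```python
import re

# Purely illustrative: a toy keyword-based "hate speech" scorer, NOT the
# actual classifiers from the CMU study. The trigger words and weights
# below are invented for this sketch.
TRIGGER_WEIGHTS = {
    "black": 0.3, "white": 0.3, "attack": 0.25,
    "threat": 0.25, "kill": 0.35, "crush": 0.2,
}

def toxicity_score(comment: str) -> float:
    """Sum the weights of any trigger words in the comment, capped at 1.0."""
    words = re.findall(r"[a-z']+", comment.lower())
    return min(1.0, sum(TRIGGER_WEIGHTS.get(w, 0.0) for w in words))

chess_comments = [
    "White's attack on the black king is unstoppable",  # innocuous chess talk
    "Black must defend f7, the threat is deadly",       # innocuous chess talk
    "Great puzzle, thanks for the video!",              # neutral
]

for c in chess_comments:
    flagged = toxicity_score(c) >= 0.5
    print(f"{'FLAGGED' if flagged else 'ok':7}  {c}")
```

Real classifiers score words in learned context rather than from a fixed list, but if phrases like "black attacks white" rarely appear in their training data in a chess sense, the effect is the same: the out-of-domain comment reads as hostile.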

This is not the first time the game of chess has drawn the attention of the Western public in the context of racism.

In the middle of last year, at the height of the Black Lives Matter movement for African American rights in the United States, a scandal erupted when Australian radio journalists unsuccessfully tried to find signs of racism in chess.

Around the same time, the YouTube channel of the popular chess streamer Antonio Radic was blocked without explanation after an interview with the American grandmaster of Japanese descent Hikaru Nakamura, in which the opposition of the black and white pieces was discussed at length.

"We do not know what tools YouTube uses, but if they rely on artificial intelligence to detect racist speech, then this kind of incident is possible," said study author Ashiqur KhudaBukhsh, commenting on the episode.

In his opinion, if something like this could happen to a fairly well-known figure such as Antonio Radic and still attract publicity, then ordinary users are not immune to such situations, and in their case the "censorship" will most likely go unnoticed by the general public.

The scientists also described a similar case caused by the imperfections of artificial intelligence.

The researchers needed to teach a machine-learning program to distinguish dogs by temperament.

In the sample images used for training, active animals were often captured in motion against a background of green foliage and grass.

However, the artificial intelligence regularly made mistakes: having latched onto the surroundings as a criterion for judging character, it labeled as active even inert dogs that were simply lying on the lawn.
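This shortcut-learning effect can be reproduced with a toy model. The following sketch uses entirely invented data, assuming just two numeric features per photograph: it trains a plain logistic regression on a set where a green background happens to correlate perfectly with the "active" label, then shows the model confidently mislabeling a motionless dog on a lawn.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented training set: each dog is described by two features,
#   [motion_blur, green_background].
# Active dogs were almost always filmed moving on grass, so the
# "green background" feature is, by accident, a perfect predictor.
n = 200
active = np.column_stack([rng.random(n) < 0.8, np.ones(n)]).astype(float)
calm   = np.column_stack([rng.random(n) < 0.2, np.zeros(n)]).astype(float)
X = np.vstack([active, calm])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Plain logistic regression trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

print("learned weights [motion, green_background]:", w.round(2))

# An inert dog simply lying on the lawn: no motion, green background.
lazy_dog_on_lawn = np.array([0.0, 1.0])
p_active = 1 / (1 + np.exp(-(lazy_dog_on_lawn @ w + b)))
print(f"P(active) for a dog lying on grass: {p_active:.2f}")  # close to 1
```

The model weights the background far more heavily than motion, because in its training data the background never lied; the chess comments and the lazy dog fail in exactly the same way, through correlation mistaken for meaning.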