The leader in live video game streaming has struggled for months to stem a wave of racist and homophobic harassment known as "hate raids," which target certain creators, particularly people of color, women, and members of the LGBTQ community.

Harassers flood their victims' chat windows with insults or shocking images, such as swastikas when the streamer is Jewish.

If the creator bans them, some still come back by creating a new account.

The new tool, dubbed "Suspicious User Detection," "is there to help you identify those users based on certain signals (...) so you can take action," Twitch said in a statement.

The program, which relies on machine learning (a category of artificial intelligence in which software learns automatically from data), distinguishes between "probable" and "possible" ban evaders.

In the first case, their messages will not appear publicly; only the streamer and their moderators will see them.

It is then up to them to decide whether to monitor the user or ban them.

"No machine learning system is 100% reliable," Twitch cautions, however. "This is why (the tool) does not automatically ban all potential ban evaders."
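The workflow described above can be sketched in a few lines of code. Everything here is a hypothetical illustration: the function names, the toy scoring function, and the thresholds are assumptions, not Twitch's actual implementation. The point it demonstrates is the design choice Twitch describes: high-risk messages are held for human review rather than triggering an automatic ban.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    user: str
    text: str

@dataclass
class Channel:
    public_chat: list = field(default_factory=list)
    mod_queue: list = field(default_factory=list)  # visible only to streamer/mods

def evasion_risk(user: str) -> float:
    """Stand-in for the ML model's score. A real system would use
    signals such as account age or behavioral similarity to banned
    accounts; these toy scores are for demonstration only."""
    return {"fresh_account_42": 0.9, "longtime_viewer": 0.1}.get(user, 0.5)

def route_message(channel: Channel, msg: Message,
                  likely: float = 0.8, possible: float = 0.5) -> str:
    """Route a chat message by evasion risk. No branch bans anyone:
    the final decision always stays with the streamer and moderators."""
    score = evasion_risk(msg.user)
    if score >= likely:
        # "Probable" evader: hidden from public chat, queued for review.
        channel.mod_queue.append(msg)
        return "held_for_review"
    if score >= possible:
        # "Possible" evader: still published, but flagged to moderators.
        channel.public_chat.append(msg)
        return "flagged"
    channel.public_chat.append(msg)
    return "published"
```

Note that the sketch deliberately has no `ban()` call in any branch, mirroring Twitch's stated reason: because no classifier is fully reliable, the tool only surfaces suspects for humans to act on.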

The platform is owned by tech giant Amazon, which dominates the global cloud computing industry.

Twitch says it receives more than 30 million visitors per day.

Last August, streamers mobilized to call on the company to respond to the raids.

Twitch launched new tools and also filed a lawsuit against two users who, it says, operate multiple accounts on the platform from Europe under different identities and can "generate thousands of bots (automated computer programs, editor's note) within minutes" to harass their victims.

© 2021 AFP