The rewards could reach up to 3,500 dollars. The social network Twitter announced that it would pay users and researchers who uncover possible biases, sexist or racist for example, in the algorithms running on its platform.
This is, according to the company, the first competition on the subject.
The idea is modeled on the contests run by certain websites to detect security vulnerabilities, explained Rumman Chowdhury and Jutta Williams, two managers at the company, in a statement.
"Establish good practices to identify and manage vulnerabilities"
"It's hard to find biases in machine learning models, and sometimes companies discover unintended ethical harms only after the models are already deployed," the two managers said.
"We want that to change."
In the model developed to detect security vulnerabilities, researchers and hackers alike have helped IT security managers "establish best practices for identifying and managing vulnerabilities in order to protect the public," said Rumman Chowdhury and Jutta Williams.
"We want to develop a similar community" to detect the biases of algorithms.
Efforts to make algorithms more ethical
In April, Twitter presented its ongoing work to make the algorithms operating behind the scenes of the platform more ethical, a way of responding to criticism of the dangers associated with these technologies.
A few weeks later, the social network abandoned an algorithm that automatically cropped photos, after discovering that it was slightly biased in favor of white people.