
Google explains that it has worked for several years on the algorithm behind Google Translate to limit bias in the results the translation tool produces. The Californian giant has taken a particular interest in gender bias and had already made a first round of changes to its artificial intelligence. On its blog, the company announced a further improvement to the system on Wednesday.

The improvement rests on "a better approach that uses a completely different model to eliminate gender bias": when necessary, the machine learning system behind Google Translate will now "correct or rewrite the initial translation". The company notes that this way of proceeding has the advantage of being scalable. It is currently used for translations into English from Finnish, Hungarian, Persian and Turkish.
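According to the blog post, the system works in two steps: it produces a default translation first, then rewrites it into gendered variants when the source leaves gender unspecified. The Python sketch below illustrates that idea under heavy simplification; the lookup tables and the function names (translate, rewrite_gender, translate_handling_gender) are hypothetical stand-ins for Google's actual models, not its API.

```python
# Toy sketch of the translate-then-rewrite strategy described above.
# The fixed tables below stand in for real translation and rewriting models.

NEUTRAL_PRONOUNS = {"tr": {"o"}}  # Turkish third-person pronoun carries no gender

def translate(text: str, src: str) -> str:
    # Stand-in "translator": a fixed lookup instead of a real model.
    table = {
        "o bir doktor": "he is a doctor",
        "o bir hemşire": "she is a nurse",
    }
    return table.get(text, text)

def rewrite_gender(translation: str, gender: str) -> str:
    # Stand-in "rewriter": post-edit the initial translation by
    # swapping its leading pronoun for the requested gender.
    pronoun = {"feminine": "she", "masculine": "he"}[gender]
    words = translation.split()
    words[0] = pronoun
    return " ".join(words)

def translate_handling_gender(text: str, src: str) -> list[str]:
    initial = translate(text, src)
    first_word = text.split()[0]
    if first_word in NEUTRAL_PRONOUNS.get(src, set()):
        # Source gender is unspecified: rewrite the initial translation
        # into both variants instead of guessing one.
        return [rewrite_gender(initial, "feminine"),
                rewrite_gender(initial, "masculine")]
    return [initial]

print(translate_handling_gender("o bir doktor", "tr"))
# -> ['she is a doctor', 'he is a doctor']
```

The appeal of the rewrite step, per the company, is scalability: the same post-editing pass can be applied to an existing translation rather than building a separate gender-aware translator for each language pair.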

The issue of gender-neutral languages

The results the smart tool offers for these translations now take gender into account. The issue comes up frequently when translating between a gender-neutral language and one that distinguishes masculine and feminine. Because it learns from external data, Google's algorithm has in the past tended to reproduce prevailing gender stereotypes.

When the source language did not specify gender, Google Translate would, for example, render a Turkish sentence on the assumption that a person practicing medicine was a man, while choosing the feminine for the nursing profession. The engineers had, however, addressed this type of bias in a version of the tool released in December 2018, which offered results in both genders when the source content was neutral.
