
Meta's Oversight Board on Tuesday called on the company to end its blanket ban on the Arabic word "shaheed", or "martyr" in English, after a year-long review found that the approach taken by Facebook's owner was a serious misstep, that its harm is "widespread," and that it unnecessarily suppresses the speech of millions of users, according to a Reuters report.

The board, which is funded by Meta but operates independently, said the social media giant should remove posts containing the word “martyr” only when they are linked to clear signs of violence or if they separately violate other Meta rules.

The ruling comes after years of criticism of the company's handling of Middle East-related content, including a 2021 study commissioned by Meta itself that found its approach had a "negative human rights impact" on Palestinians and other Arabic-speaking users of its services.

These criticisms have escalated since the start of Operation Al-Aqsa Flood in October. Human rights groups have accused Meta of suppressing content supportive of Palestinians on Facebook and Instagram against the backdrop of a war that has claimed the lives of tens of thousands of people in Gaza.

Meta's Oversight Board reached similar conclusions in its report on Tuesday, finding that Meta's rules on the word "martyr" failed to take into account the word's diversity of meanings and led to the removal of content that was not intended to praise acts of violence.

"Meta has been operating on the assumption that censorship can and will improve safety, but the evidence suggests that censorship can marginalize entire populations while not improving safety at all," Helle Thorning-Schmidt, co-chair of the oversight board, said in a statement.

Meta currently removes any post that uses the word "martyr" in reference to people or organizations on its list of "dangerous" entities, which includes members of armed Islamist groups, drug gangs, and white supremacist organizations.

The company says the word amounts to praise of the entities it bans, according to the board's report.

Source: Reuters