San Francisco (AFP)

Artificial intelligence is playing a growing role in Facebook's hunt for terrorist videos and content glorifying suicide, but the tech giant also needs plenty of flesh-and-blood experts to keep the many risks of scandal from catching up with it.

"Our efforts are paying off," Mark Zuckerberg said at a press conference on Wednesday when a bi-annual Facebook report on transparency was published, boasting about same time to have deleted 5.4 billion fake user accounts since the beginning of the year.

The social network's algorithms "are now able to proactively identify about 80% of the content we remove," "compared with almost 0% two years ago," said the company's founder.

But he acknowledged that artificial intelligence (AI) technologies have a harder time detecting hate speech than videos containing nudity, because of the "many linguistic nuances" involved.

That is the whole problem of context: a video showing a racist attack may be shared in order to condemn it ... or to glorify it.

It is hard to know, for example, why users have tried, and are still trying, to repost the video of the Christchurch killings. But Facebook's algorithms blocked 95% of those attempts before the video could go back up.

In all, since the March 15 attack, the AI has detected 4.5 million excerpts of the video. The assailant, a white supremacist, had live-streamed himself slaughtering worshippers in a mosque in New Zealand. It took the network 17 minutes to cut off the broadcast.

- Terrorism and suicide -

For Facebook, the challenge is to anticipate where the next threat will come from, so that this kind of scenario does not happen again.

"We now have over 350 people in the company whose primary responsibility is to prevent members of terrorist groups from using our service," said Monika Bickert, vice president in charge of Facebook moderation.

"Some of them are terrorists, whom we hired for their expertise."

The strategy is the same on Instagram, which recently tightened its rules to fight the spread of content likely to encourage suicide or self-harm.

The network banned photos, then drawings and illustrations, on the subject after a father accused it of being responsible for the suicide of his 14-year-old daughter. According to her father, the teenager had viewed large amounts of such content.

At the same time, Facebook does not want to censor people who use its services to express their emotions or cry out for help.

"We have stepped up the pace of our meetings with self-destruct experts at once a month," says Monica Bickert.

- Advertisers -

In all, 35,000 people work on security and content moderation across Facebook's platforms, either in-house or at partner companies.

The dominant social network has weathered several scandals over harmful uses of its platforms, from dangerous content to disinformation campaigns that undermine democratic elections.

Since then, it has stepped up its efforts, particularly on transparency, to restore trust among its users, the authorities and advertisers.

The tech giant says it has deleted 5.4 billion fake user accounts since the beginning of the year, compared with 2.1 billion over the same period last year.

In its report, the company explains that it has "improved its ability to detect and block" the creation of "fake or abusive" accounts, to the point of preventing millions of attempts every day.

"The likelihood of users seeing content banned by our rules (...) is very low.When we do tests it happens that we find nothing at all," says Guy Rosen, vice president in charge of the network integrity.

According to Mark Zuckerberg, the "very low proportion of harmful content" on the platform proves that Facebook does not seek to profit from viral content, however lucrative, as some critics allege.

The group derives most of its revenue from advertising, but "advertisers do not want their brands to appear next to problematic content," he insisted. "So if our business influences us at all, it is rather to push us to tackle this content even more aggressively."

© 2019 AFP