Facebook has been widely criticized for hosting a live video of the Christchurch terrorist attack in New Zealand without stopping the broadcast or blocking the video quickly enough.

In a report published in France's Le Monde newspaper, writer Martin Untersinger said it was strange that Facebook rushes to take down a nude image yet is slow to block the video of a terrorist attack broadcast live. Untersinger published the comment on the official Facebook page of the French station Télé Monte Carlo.

He added that the perpetrator of the Christchurch terrorist attack published, on his own Facebook account, a video documenting the armed assault on mosques that killed 50 people, exposing the site to accusations at a time when far-right hate speech echoes across social networks.

But why was Facebook unable to detect the terrorist's broadcast in time? Why did it not delete the video immediately, given that it removes photographs of naked women every day?

The author said that Facebook employs some of the world's best artificial intelligence talent, which allows it to delete most of the content published by the Islamic State group almost before it appears. To answer these questions, we must look at the limits of artificial intelligence and the way Facebook handles terrorist content.

Images already seen, or never seen before?
Facebook actually faces two types of terrorist content: content it has seen before, such as copies of the Christchurch massacre video recirculated after the attack and propaganda videos published by the Islamic State group, and content that appears for the first time.

Facebook shares a database of videos of this kind with other social networks such as Twitter. Each time a user publishes a video, Facebook immediately compares it against the terrorist videos in that database, analyzing it pixel by pixel.

New Zealand's Prime Minister Jacinda Ardern (Getty Images)

The writer explained that Facebook does not define terrorism the way humans do; it merely checks whether the pixels of a new video match those of a video the site has seen before. This allows it to block publication of any video that copies terrorist content already recorded in the database. Facebook uses the same method for videos that incite extremism and for content depicting child sexual abuse.
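To make the principle concrete, here is a minimal Python sketch of matching an upload against such a shared database. The fingerprint store and function names are hypothetical, and a real system uses perceptual hashes that survive re-encoding, cropping, and filtering; the exact-match digest below only keeps the sketch self-contained.

```python
import hashlib

# Hypothetical store of fingerprints of videos already flagged as
# terrorist content (the shared industry database described above).
KNOWN_TERRORIST_FINGERPRINTS: set[str] = {
    "3f2a9c...",  # placeholder entry for illustration only
}

def video_fingerprint(frames: list[bytes]) -> str:
    """Reduce a video's raw frames to a single digest.

    A production system would use a perceptual hash robust to
    re-encoding; an exact SHA-256 digest is used here only to keep
    the example self-contained.
    """
    digest = hashlib.sha256()
    for frame in frames:
        digest.update(frame)
    return digest.hexdigest()

def should_block(frames: list[bytes]) -> bool:
    """Block the upload if its fingerprint matches a known video."""
    return video_fingerprint(frames) in KNOWN_TERRORIST_FINGERPRINTS
```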

However, Facebook admitted that roughly 300,000 copies of the live video of the attack still managed to be posted on its network.

But why was this method not used on the video published by the Christchurch terrorist in the hours and days after the attack? The incident raises many questions that Facebook must answer.

Sometimes, however, Facebook encounters terrorist content it has never seen before, appearing on its pages for the first time, like the footage of the Christchurch attack as it was broadcast live. The mechanism above could not catch the Christchurch video because it contained entirely new content. Moreover, Facebook relies heavily on reports from its users, and according to the company, none of the users who watched the video reported its terrorist content during the live broadcast.

Facebook needs to train artificial intelligence algorithms to recognize terrorist videos as they are broadcast live on its platform (Anadolu Agency)

The limits of automatic moderation
To counter this kind of video, could Facebook train its artificial intelligence algorithms to block a terrorist video during a live broadcast, just as it does with women who stream clips showing bare breasts? Facebook can in fact prevent the dissemination of this type of live video, though not always: it refrains when the video's content is artistic in nature or aims to raise awareness of certain diseases.
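As a rough sketch of what such live moderation involves, the Python fragment below scores sampled frames from an ongoing stream and cuts the broadcast once a classifier stays confident across several consecutive frames. The `score_frame` classifier, the thresholds, and the function names are all assumptions for illustration, not Facebook's actual system.

```python
from typing import Callable, Iterable

def moderate_stream(frames: Iterable[bytes],
                    score_frame: Callable[[bytes], float],
                    threshold: float = 0.95,
                    strikes_needed: int = 3) -> bool:
    """Cut the broadcast once several consecutive frames score above
    the threshold; returns True if the stream was stopped."""
    strikes = 0
    for frame in frames:
        # score_frame is an assumed pretrained classifier returning
        # the probability that a frame shows prohibited content.
        if score_frame(frame) >= threshold:
            strikes += 1
            if strikes >= strikes_needed:
                return True  # confident across consecutive frames: stop
        else:
            strikes = 0  # a clean frame resets the count
    return False
```

Requiring several consecutive high-confidence frames keeps a single ambiguous frame, such as an artwork or a medical image, from cutting a legitimate stream, at the cost of reaction time.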

The problem lies mainly with artificial intelligence itself, which still lacks real intelligence: content detection algorithms perform far better on bare breasts than on terrorist content. After all, even a newborn baby can recognize a breast, but knows nothing of a speech by Abu Bakr al-Baghdadi.

"It is technically possible to detect breast images by using the mechanism we talked about in advance, specifically through pixels and color. But how can a mechanism be established to identify terrorism since this phenomenon lies in the mind of the publisher or author of the video, in addition to the context in which it is published?

A video recording a murder, by contrast, can be identified automatically from the content itself, such as weapons and gunshots, but those elements alone are not enough to establish that the video promotes terrorism.

The author concluded that automatic moderation software cannot yet determine the intentions of a video's publisher, and that the only way to block such images proactively would be to prohibit, indiscriminately, every type of image or content that could match terrorist aims or accompany a terrorist act.