In the recent terrorist attack in New Zealand, in which at least 49 people were murdered, the attacker abused social networks as a propaganda platform.

He streamed his attack on a mosque in Christchurch live via Facebook. On his now-deleted Twitter profile, he also spread further images, such as photos of the murder weapon. In addition, he posted download links to an 87-page manifesto in which he justifies his act with far-right ideologies.

The New Zealand police are now working on "removing any material," as they announced on Twitter: "The police are aware that extremely distressing footage relating to the incident in Christchurch is circulating online." The investigators also urge Internet users to stop sharing links to the video.

New Zealand Prime Minister Jacinda Ardern backed the authorities' appeal, saying the "act of violence" should be given no platform. The New Zealand law professor Alexander Gillespie of the University of Waikato warned that the spread of the video could inspire copycat offenders.

New video copies keep appearing

Twitter has meanwhile deleted the alleged attacker's profile. Facebook, acting on a tip from the New Zealand police, has also removed his Facebook and Instagram profiles, as well as the almost 17-minute live stream, which could also be viewed after the fact.

"Our hearts are broken because of the terrible tragedy in New Zealand," reads a statement from YouTube. "We will work vigilantly to remove violent footage."

Although the large platforms have deleted numerous video copies over the last few hours, the offender's material can still be found on social networks and video platforms. Anyone searching for it will come across copies of the livestream video, video clips, photos and screenshots documenting the attack.

Algorithms and moderators make mistakes too

In principle, users can report material from the terrorist attack that they discover online directly to the platforms; in addition, algorithms and moderation teams scour the social networks for it.

But content filtering is error-prone, as a study by the Counter Extremism Project (CEP) published last year shows. During the three-month study period, IS supporters succeeded in uploading more than 1,300 terrorist videos to YouTube despite its filtering mechanisms. Twenty-four percent of the videos remained available for more than two hours; 76 percent were deleted within two hours.

Copies keep resurfacing

Ryan Mac, a tech reporter at BuzzFeed, wrote on Twitter that some videos of the Christchurch attack on YouTube are not removed entirely but merely labeled as problematic and restricted: users can still view them after clicking past a warning. "How can the broadcast of a mass murder not be a violation of the terms of use?" asked Mac.

Banishing terrorist propaganda from the Internet altogether is considered impossible, not least because users on social networks are constantly uploading new material or sharing new links to other platforms that delete content less rigorously.

In addition, as the case of the "0rbit" data leak in Germany at the beginning of the year showed, problematic content is often mirrored on platforms whose servers are located in countries beyond the reach of foreign investigators. The content then stays online on these servers for a long time. "Notice-and-takedown procedures, in which hosters are informed about violations so that they remove the content, often do not work," the lawyer Peter Hense told SPIEGEL about the German data leak.

Upload filter for the police

Last year, the EU Commission called for a technical solution to curb the spread of terrorist content. Upload filters are intended to detect criminal content such as terrorist videos automatically. Such a system compares uploaded content against a database of hashes, a kind of digital fingerprint, of files already classified as terrorist.
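At its core, this matching step is a database lookup. The following minimal sketch in Python illustrates the idea with a cryptographic hash; the hash set, file paths and function names are hypothetical stand-ins for the databases the platforms actually use, not a description of any specific filter:

```python
import hashlib

# Placeholder: in a real system this set would be populated from a shared
# database of fingerprints of files already classified as terrorist content.
KNOWN_BAD_HASHES: set[str] = set()


def file_fingerprint(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_known_terrorist_content(path: str) -> bool:
    """Check an uploaded file against the hash database before publishing it."""
    return file_fingerprint(path) in KNOWN_BAD_HASHES
```

An exact fingerprint of this kind only matches byte-identical files; a re-encoded, cropped or re-filmed copy produces a different hash. That is one reason new copies keep slipping through and why research also targets material not yet recorded in any database.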

In addition, researchers in EU projects such as "Tensor" are working on detecting terrorist videos that have not previously been recorded in such a database.

However, upload filters are controversial: they are considered error-prone, and there is a risk that their use restricts freedom of expression. Upload filters are also currently being heavily criticized in the context of the planned EU copyright reform.