Since the attacks on mosques in Christchurch on March 15, New Zealand's Prime Minister Jacinda Ardern has become the most prominent face in the fight against terrorist propaganda on the Internet. With the so-called Christchurch Call, Ardern now wants to launch a global initiative against terrorist content online.

On Wednesday, she announced that she wanted to work with France to banish terror and extremism from social networks. Together with French President Emmanuel Macron, she has convened a summit of politicians and the heads of tech companies in Paris on May 15.

"It is important that technology platforms such as Facebook are not perverted into a tool for terrorism, but instead become part of a global solution in the fight against extremism," Ardern said. "This meeting provides the opportunity for an act of unity between governments and technology companies."

Facebook as a multiplier

The meeting will take place on the sidelines of the informal "Tech for Humanity" meeting of G7 digital ministers, chaired by France. France's "Tech for Good Summit" will also be held in mid-May.

The Christchurch attacker had no right to livestream the murder of 50 people, Ardern said. "New, concrete measures" would have to be taken so that something like Christchurch could not happen again.

The attacker had broadcast his terrorist attack, including the drive to the attack sites, live on Facebook for 17 minutes; numerous copies and screenshots of the recording later spread across the Internet.

Ardern also said that she had held background discussions with various tech companies such as Facebook, Twitter, Microsoft and Google, and that she had spoken with Facebook CEO Mark Zuckerberg. A Facebook spokesperson said the company looked forward to working with governments, industry and security experts on a clear set of rules.

Stricter rules against digital terror content

Tech platforms around the world are under pressure to take stronger action against terrorist content and other depictions of violence. Shortly after the Christchurch attack, New Zealand not only enacted stricter gun laws but also passed legislation forcing tech companies to remove terrorist content from their platforms immediately; otherwise, the companies face severe penalties.

In Europe, a draft of a new EU regulation against the dissemination of terrorist content online was presented in autumn 2018. The draft, recently voted on by the EU Parliament, requires platforms to delete terrorist material within one hour of receiving a request from Europol or national law enforcement agencies. After the European elections at the end of May, trilogue negotiations between the European Parliament, the EU Commission and the member states in the Council will follow.

Sri Lanka, meanwhile, is taking drastic measures such as network blocking against the spread of digital depictions of violence: after the terrorist attacks over the Easter weekend, the country again blocked social networks such as Facebook, WhatsApp and Instagram.

Platforms such as YouTube and Facebook already use teams of moderators as well as automated filtering systems to detect terrorist content. After the Christchurch attack, the cross-company Global Internet Forum to Counter Terrorism (GIFCT), set up in 2017, shared "digital fingerprints of more than 800 visually distinct videos" of the attack as well as "URLs and information" on how to deal with the footage. The initiative includes Facebook, Twitter, Microsoft and YouTube.
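The hash-sharing idea behind these "digital fingerprints" can be sketched in a few lines. This is an illustrative simplification only: real GIFCT-style systems use perceptual hashes (such as Microsoft's PhotoDNA or Meta's PDQ), which also match visually similar re-encodes, whereas this sketch uses exact SHA-256 digests; the sample byte strings and function names are hypothetical.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Compute a fingerprint of an upload's raw bytes.

    Real systems use perceptual hashes that tolerate re-encoding;
    SHA-256 here only illustrates the shared-hash-list mechanism.
    """
    return hashlib.sha256(data).hexdigest()

# Hypothetical hash list distributed among member platforms.
shared_hash_list = {fingerprint(b"known extremist video bytes")}

def should_block(upload: bytes) -> bool:
    """Block an upload if its fingerprint appears on the shared list."""
    return fingerprint(upload) in shared_hash_list

# An exact copy of a listed video is caught; unrelated content passes.
print(should_block(b"known extremist video bytes"))   # True
print(should_block(b"harmless holiday video bytes"))  # False
```

Because each platform only shares hashes rather than the videos themselves, the material does not have to be redistributed in order to be blocked; the trade-off is that an exact-hash scheme like this sketch misses even slightly altered copies, which is why perceptual hashing is used in practice.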

Impossible to ban all terror content

Nevertheless, it is impossible to completely ban all recordings of terrorist attacks from the Internet. Even in the case of the Christchurch attack, the platforms deleted a great deal of content: Facebook removed about 1.5 million videos in the first 24 hours after the attack, more than 1.2 million of them already at the point of upload.

But users keep uploading new copies, in some cases to file-sharing platforms beyond the reach of the authorities. Livestreams in particular, which New Zealand's Prime Minister Ardern wants to focus on, are difficult to monitor. Facebook COO Sheryl Sandberg said last month that the company wanted to place restrictions on who is allowed to broadcast live on its platform.