French researcher Camille François is a leading figure on the international scene in the fight against fake news. - Mark Lennihan

  • Every Friday, 20 Minutes invites a personality to comment on a social phenomenon in its weekly feature "20 Minutes with...".
  • Based in the United States, French researcher Camille François tracks false information online, working notably for Silicon Valley platforms, university laboratories, NGOs and think tanks.
  • In recent weeks, she has uncovered numerous disinformation campaigns linked to the Covid-19 crisis. "There has been an unprecedented amount of false information around the coronavirus, illustrating the virality of disinformation in crisis situations," she explains.

She is regarded across the Atlantic as a "war heroine". Little known in France, researcher Camille François is a leading figure on the international scene in the fight against fake news and disinformation campaigns on social networks. An expert in data science, the 30-year-old French woman living in the United States joined MIT Technology Review's club of "35 best innovators under 35" last year and is included in Time magazine's ranking of the "100 rising stars that will change the world".

After stints at Mozilla and then at Google and its incubator Jigsaw, where she helped develop the Internet giant's strategy to fight violent speech on the Web, Camille François joined Graphika in 2018, a company specializing in the analysis of influence on social networks. Trolls, bots, false information and conspiracy theories: in recent weeks, the French researcher and her team have uncovered numerous disinformation campaigns related to the coronavirus crisis. 20 Minutes questioned the researcher, who discussed the main issues in the fight against false information and how to combat this scourge that threatens our democracies.

We call you the "troll hunter" because you spend your days studying fake Facebook or Twitter accounts. What does your job actually consist of?

My work is fascinating. To summarize it briefly, I analyze, investigate, detect and carry out research on cyber warfare, online harassment and disinformation campaigns conducted on social networks. Within Graphika, I supervise a team of researchers. We analyze how conversations happen on platforms and how information flows between them thanks to machine learning [an artificial intelligence technology]. We then do investigative work to detect information manipulation campaigns, using digital forensics methods in particular. To do this, we are creating new tools to make analyses and detection faster and more efficient thanks to data science. And we work in research and development with researchers and scientists, applying the principles of network science.
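To give a concrete, if simplified, sense of what network-science-based detection of coordinated behavior can look like, here is a minimal sketch. It is not Graphika's tooling: the toy data, the 60-second window and the cluster threshold are invented for illustration, and real investigations combine many more signals with manual digital forensics.

```python
# Minimal illustrative sketch: flag accounts that share the same URL almost simultaneously,
# one weak signal of possible coordination. Thresholds and data are invented for this example.
from collections import defaultdict
from itertools import combinations

import networkx as nx

# Toy data: (account, shared URL, timestamp in seconds since some reference point).
posts = [
    ("acct_a", "http://example.com/rumor", 100),
    ("acct_b", "http://example.com/rumor", 104),
    ("acct_c", "http://example.com/rumor", 107),
    ("acct_d", "http://example.com/other", 5000),
    ("acct_e", "http://example.com/rumor", 9000),  # same URL, but hours later
]

WINDOW = 60           # seconds: near-simultaneous sharing of the same link
MIN_CLUSTER_SIZE = 3  # illustrative threshold before a cluster is worth a human look

# Group shares by URL, then connect accounts that posted the same URL within the window.
by_url = defaultdict(list)
for account, url, ts in posts:
    by_url[url].append((account, ts))

G = nx.Graph()
for url, shares in by_url.items():
    for (a1, t1), (a2, t2) in combinations(shares, 2):
        if a1 != a2 and abs(t1 - t2) <= WINDOW:
            # Accumulate a weight: the more near-simultaneous co-shares, the stronger the tie.
            weight = G.get_edge_data(a1, a2, default={"weight": 0})["weight"] + 1
            G.add_edge(a1, a2, weight=weight)

# Densely connected clusters are candidates for manual review, not verdicts of inauthenticity.
for cluster in nx.connected_components(G):
    if len(cluster) >= MIN_CLUSTER_SIZE:
        print("Possible coordinated cluster:", sorted(cluster))
```

Run on the toy data above, this prints the three accounts that pushed the same link within seconds of each other; the account that shared it hours later is left out, which is exactly the kind of distinction between organic and coordinated spread the interview describes.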

✒️We wrote about viral SMS-based disinformation campaigns in the context of #COVID19. While we've seen cases of foreign actors doing this (ex. IRA 2014, #ColumbianChemicals), the last instance we've investigated (rumored to be Iranian IO) was actually from US-based pranksters. https://t.co/Xh3jY2noMK

- Camille François (@camillefrancois) March 18, 2020

It was you who exposed the Russian interference campaigns in the 2016 American presidential election...

We were indeed fortunate to work with the United States Senate to analyze data from social networks as part of the investigation into Russian interference in the 2016 presidential election. It was a unique project, somewhat special, downright crazy! Senators came knocking on our door because they had obtained a lot of data from the major platforms [Facebook, Twitter, YouTube…] but did not know how to analyze it. They needed independent, expert research to find out what had happened in 2016 and what Russia's role had been.

We went to Washington to recover an encrypted hard drive holding a mountain of data. We worked on it for eight months, revealing the presence of a gigantic "troll factory". And we were able to establish that this disinformation campaign was much more global, and that it had not only targeted the United States but also France, Germany… It was also the first time we realized that these trolls were playing the platforms against one another, like a game of cat and mouse, to increase their influence. Looking at all of this data together gave us a much better sense of how much stronger the defenses between platforms needed to be.

What are the different types of fake news disseminated on social networks?

When we analyze disinformation campaigns, we use a framework that I call the "ABC framework". Three vectors can define a disinformation campaign. A piece of information can be fake news because of the actor (A) behind the campaign. It can also be fake news because of the way the information is disseminated, the behavior (B): all the amplification or coordination techniques that manufacture false information, such as using bots to simulate a spontaneous, organic conversation.

And finally, the content (C) itself can be fake news. Before the coronavirus crisis, this was the vector the social networks acted on least: they preferred to intervene when conversations were manipulated, but refused to be the arbiter of content. In recent weeks, with the proliferation of false information about Covid-19, the platforms have become much more aggressive and are deleting more and more content. It is a very clear strategic change on their part, and it shows that the crisis we have just been through has really changed how fake news is fought.
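As a purely illustrative aside, the ABC framing can be read as three independent dimensions along which a suspected campaign is assessed. The sketch below captures that idea as a small data structure; the field names and the toy example are invented for this article, not an official taxonomy or Graphika code.

```python
# Illustrative sketch of the "ABC" framing: a campaign can be problematic because of its
# Actor (A), its Behavior (B), or its Content (C). Names and values are hypothetical.
from dataclasses import dataclass, field


@dataclass
class CampaignAssessment:
    """One assessment of a suspected campaign along the A/B/C vectors."""
    actor_flags: list = field(default_factory=list)     # A: who is behind it (e.g. "state-linked operator")
    behavior_flags: list = field(default_factory=list)  # B: how it spreads (e.g. "bot amplification")
    content_flags: list = field(default_factory=list)   # C: what it says (e.g. "false remedy claim")

    def vectors(self) -> set:
        """Return which of the three vectors make this campaign problematic."""
        present = set()
        if self.actor_flags:
            present.add("A")
        if self.behavior_flags:
            present.add("B")
        if self.content_flags:
            present.add("C")
        return present


# Toy example: an organic-looking actor that relies on bot amplification to push a
# false cure claim is flagged on behavior (B) and content (C), but not on actor (A).
campaign = CampaignAssessment(
    behavior_flags=["bot amplification", "coordinated posting"],
    content_flags=["false remedy claim"],
)
print(sorted(campaign.vectors()))  # ['B', 'C']
```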

The coronavirus crisis has shown the vulnerability of our democracies to the danger of false information. What have been the main disinformation campaigns in recent weeks?

There has been an unprecedented volume of disinformation around the coronavirus, illustrating the virality of disinformation in crisis situations. And there were four major subtypes of really problematic content. First, an explosion of racist and anti-immigrant content related to the coronavirus. We see this very clearly, on a global scale, in our analyses of online discussions related to Covid-19. This is the case, for example, in the United States, Italy and France, but also in India. There has also been a sharp increase in conspiratorial content. It is very interesting, because the classic conspiracy theories have ultimately gained a lot of ground with the health crisis.

We also noted a lot of medical misinformation around the figures, including accounts that circulated their own calculations, graphs and Excel spreadsheets, which for us is a real marker of the lack of confidence in scientific authorities. And then, of course, there was all the fake news built on misinformation about supposed remedies for the virus, such as colloidal silver, artemisia, fennel…

And as @Forbes highlighted, the report shows right-wing accounts dominate the COVID-19 conversations both in numbers and volume in the US, Italy and France for the January-March period. https://t.co/xP8cri8j0R

- Camille François (@camillefrancois) April 23, 2020

Did it affect all media, and all countries?

Fake news has spread across the whole range of tools people use to communicate. No platform is immune to disinformation. What we have observed, however, is that false information manifests itself differently on each medium. Some platforms have been very vulnerable to disinformation campaigns, such as Facebook or Twitter. Others have been more robust, such as Wikipedia, which has done very well because the platform relies on established practices and a network of volunteers seasoned in dealing with conspiratorial content… This shows that the fight against fake news is not inevitably a financial question; it is also, and above all, a cultural and structural one.

Globally, no country has been spared from fake news; it appeared in all local conversations. Italy, for example, very quickly had a very strong and very active online conversation on the coronavirus. Even countries where public and health authorities have a good deal of influence on the conversation, like Canada, have not been spared. Unfortunately, there are no borders in the spread of disinformation.

Bill Gates, “voodoo doll” of conspirators on the Internet https://t.co/huk5C3b22F

- 20 Minutes (@20Minutes) May 17, 2020

Many accounts have recently been created in France on Twitter and Facebook to defend the sometimes controversial positions of Professor Didier Raoult. How do you know if this is a spontaneous reaction, or fake accounts created to influence public opinion?

Much has been said about Professor Raoult here in the United States as well. As a French woman, originally from Marseille, I was even asked for my opinion! The issue of fake accounts and how to quantify them is one on which there is very little transparency. The data we have is very anecdotal, because when the platforms delete fake accounts, there is not really a transparency report on the nature and volume of those accounts.

Only coordinated influence operations involving several accounts are beginning to be disclosed to the general public. Facebook calls this "coordinated inauthentic behavior", and these operations are the subject of a monthly report. From time to time, we find quite surprising things in these reports, such as the deletion last March of 60 accounts linked to the municipal authorities in Sète (Hérault)! That is, in a way, the tip of the iceberg... For everything else, we don't really have any data. This is a field of research in which there is a lot to do, because we know that many fake accounts today have a real societal impact.

During the lockdown, the large platforms took numerous measures to try to fight fake news. Should we go further and force them to act more effectively?

The platforms are running a lot of experiments right now. But it is particularly complicated, since the moderation centers had to be reorganized and were often closed due to the lockdown. Forcing them by law is something France has explored a lot [via the law against fake news and the Avia law]. There are then several approaches to regulating moderation: constraints on results, such as "You have to remove this content in less than three hours", and constraints on mechanisms, along the lines of "You must be more transparent". I think that regulation absolutely has a role to play in the moderation of content on the Internet. It's a very French perspective that I share, even in the United States.

On this question, I will soon be publishing, with a transatlantic working group, a report that offers some insight into what states have tried to do and what we can recommend today. We studied several regulatory proposals, in France, the United Kingdom and Germany among others, and were able to draw from them some recommendations on what should and should not be done. Our concern in writing this report was to find effective regulatory mechanisms that remain compatible with freedom of expression.

President Macron has declared that he is "at war" with the coronavirus. But haven't we also entered another conflict, that of information?

I am very allergic to the rhetoric of war, which suggests military approaches to these issues. What we need instead is a deeply transparent, collaborative approach, anchored in human rights values. I hear the concern and the gravity of the situation, but I am always wary of this bellicose vocabulary. Fake news is a real threat to our democracies. We live today in a particular context, one that forces large institutions to try to understand how information is structured and evolves online.

And in this context, the vocabulary used is very important. For example, I avoid using the term "fake news", which is misused every day by Trump when he addresses the media. I try to be more specific, because there is a big difference between conspiracy communities that share things they believe in and armies of bots that automatically share content! It is important today to separate all these concepts and define them precisely in order to be able to combat them.
