Translator's introduction

The negative repercussions of false news in health domains do not stop at psychological effects; they extend to direct physical harm, up to and including death.

Hence we must ask why this type of misinformation spreads. Is it simply human nature, drawn to excitement and conspiratorial thinking, or do the platforms on which such news is published share responsibility for this disaster?

Why is Facebook reluctant to delete the accounts of celebrities who mislead people?

Why does the platform allow misinformation to spread in the groups and pages where political extremists gather?

More importantly, why do YouTube, Twitter, and Facebook prevent researchers in this field from accessing the information needed to analyze the phenomenon?

Translated article

Facebook has recently come under fire for allowing medical misinformation to spread on the platform. Leaked internal documents suggest that the situation may be worse than previously thought, and that Facebook is deliberately turning a blind eye to this content.

This misinformation is a major concern: one study found that participants who got their information from Facebook were more likely to resist vaccination than those who got their information from mainstream media.

As a researcher of social and civic media, I believe that understanding how misinformation spreads online is crucial, but that is easier said than done.

Simply counting misinformation on social media platforms leaves two key questions unanswered. First, how likely are users to encounter this misinformation?

Second, is this information likely to affect some users more than others?

Facebook's footprint

In August 2020, Avaaz (a nonprofit platform concerned with human rights issues and with exposing misinformation) published a study arguing that Facebook's algorithms for detecting misinformation pose a major threat to public health. The study found that sources that frequently shared misleading health information, about 82 websites and 42 Facebook pages, attracted roughly 3.8 billion views per year.

At first glance the number may be alarming, but to understand it we need to approach the problem mathematically. If 3.8 billion is the numerator, we need a denominator to make sense of that viewership. Taking Facebook's 2.9 billion monthly active users as the denominator, we find that, on average, each user encounters content from these misleading sources at least once a year.
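To make this back-of-the-envelope arithmetic concrete, here is a minimal sketch in Python of the calculation just described; the figures are the ones quoted above, and the variable names are ours:

```python
# Figures quoted in the article (approximate).
misleading_views_per_year = 3.8e9   # yearly views of the sources flagged by Avaaz
monthly_active_users = 2.9e9        # Facebook monthly active users

# Naive first pass: views divided by users gives the average number of
# encounters with these sources per user per year.
views_per_user = misleading_views_per_year / monthly_active_users
print(f"~{views_per_user:.2f} encounters per user per year")  # ~1.31
```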

But it is not that simple.

Email marketing researchers estimate that Facebook users spend between 19 and 38 minutes per day on the platform.

In that case, if we take Facebook's 1.93 billion daily active users and assume each one sees at least 10 posts per day, we can estimate the total volume of posts viewed on Facebook by multiplying those two numbers by the number of days in a year (365). The result is about 7.044 trillion posts viewed per year, which we will use as the denominator for the numerator of 3.8 billion views mentioned earlier.

In simpler terms, dividing the numerator by the denominator gives about 0.05 percent, which is the share of total views accounted for by these misleading sources.
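The same estimate, reconstructed step by step as a sketch using the article's assumed figures:

```python
# Denominator: total posts viewed on Facebook in a year, under the
# article's assumptions.
daily_active_users = 1.93e9   # Facebook daily active users
posts_seen_per_day = 10       # assumed minimum posts seen per user per day
days_per_year = 365

total_views_per_year = daily_active_users * posts_seen_per_day * days_per_year
# ~7.044e12, i.e. about 7.044 trillion posts viewed per year

# Numerator: yearly views of the misleading sources (Avaaz's figure).
misleading_views_per_year = 3.8e9

prevalence = misleading_views_per_year / total_views_per_year
print(f"prevalence: {prevalence:.2%}")  # ~0.05% of all views
```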

And when we say that content on these pages drew 3.8 billion views, that figure covers all of their content, including the harmless posts. Given our earlier conclusion that misleading posts make up no more than one-twentieth of one percent (0.05%) of views, we might ask: should we really be concerned about misinformation that the average user encounters at least once a year, or is it reassuring that 99.95 percent of views on Facebook are not of the sources Avaaz warned about?

What is worth worrying about

In fact, neither figure by itself is what should worry us. What is really worth worrying about is how this information is distributed. Is every Facebook user equally likely to stumble on misleading health information at random, or is exposure selective, with people who already reject vaccines or who search for "alternative health" information seeing the most of this content?

To clarify this, another study, focused on extremist content on YouTube, offered a way to understand how misinformation is distributed by examining the algorithms through which extremist content, including white supremacist content, is surfaced. The authors concluded that views of racist content are concentrated among Americans who already harbor racist views and ethnic resentment, and that YouTube's algorithms may reinforce this pattern.

Another study, published by the Center for Countering Digital Hate under the title "Pandemic Profiteers", examined 30 anti-vaccine Facebook groups and showed that just twelve public figures exploited those groups to persuade people not to get vaccinated, and that they were responsible for about 70 percent of the misleading content circulating on social media platforms; the three most popular of them accounted for nearly half of that share.

But, again, it is important to ask: how many anti-vaccine groups does Facebook allow to remain active?

And what percentage of users encounter the kind of information these groups publish?

Protesters marching towards the Facebook office in Seattle to register their protest and express displeasure over company's role in platforming Islamophobia and anti-Muslim hate in India.

#FacebookEnablesMuslimGenocide #FacebookStopTheHate pic.twitter.com/2YsrU5QGIR

— Indian American Muslim Council (@IAMCouncil) November 14, 2021

Beyond distribution mechanics and exact proportions, these studies raise an important question: if researchers can find this content, why can't the social media platforms identify and remove it themselves, when Facebook alone could eliminate some 70 percent of this misinformation problem by deleting just a dozen accounts?

The problem persisted until late August, when Facebook finally began deleting 10 of the 12 accounts of anti-vaccine activists and celebrities.

Take, for example, the American television producer Del Bigtree, one of the most prominent anti-vaccine activists on Facebook.

Bigtree's problem is not that he recruits new followers to fight vaccines on the platform; it is that Facebook users follow his posts on other sites and bring that content back to share on Facebook.

As we can see, the issue is not limited to 12 individuals promoting misleading health information; it extends to the thousands of users who share that information with others.

This is where it gets complicated, because banning the accounts of thousands of Facebook users is much harder than banning a dozen anti-vaccine activists.

This is the main reason denominator and distribution problems are essential to understanding the prevalence of misinformation online: they let researchers ask how common or rare such behaviors are, and who engages in them.

If millions of users only occasionally encounter a piece of health misinformation, warning labels attached to such posts may be effective; but those labels lose their effect if the information instead comes from a smaller group that actively seeks out and shares this content.

Trying to understand misinformation by counting it, without looking at denominators and distribution, is what happens when good intentions meet bad tools. No social media site lets researchers accurately calculate the prevalence of particular content on its platform.

For example, Facebook restricts most researchers to tools such as CrowdTangle (a tool Facebook offers for monitoring public content and understanding how information spreads), but that tool only counts views of content; it reveals nothing about proportions or how the content is distributed.

Most other social media platforms follow Facebook's path: Twitter explicitly prohibits researchers from counting its users or the number of tweets they share in a day.

YouTube is no less strict, making it so difficult for researchers to determine how many videos are uploaded to the site that Google regularly asks job candidates to estimate that number as a test of their quantitative skills.

Recently, the leaders of these platforms have claimed that their tools, despite their problems, are good for society. Their argument would be more convincing if they gave researchers the chance to verify that claim for themselves.

As the effects of social media become more pervasive and dominant, pressure will likely grow on big tech companies to disclose more data about their users and the content they share.

Assuming these companies did respond and gave researchers access to more information, would they allow the precise calculations needed to understand how content is distributed?

And if they refuse, is it because they fear what researchers might find?

________________________________

Translation: Somaya Zaher

This report has been translated from The Atlantic and does not necessarily reflect the views of Meydan.