Avaaz Report

Facebook: From Election to Insurrection

How Facebook Failed Voters and Nearly Set Democracy Aflame

March 18, 2021


Executive Summary

Facebook could have prevented an estimated 10.1 billion views of content from top-performing pages that repeatedly shared misinformation.1

  • An analysis of the steps Facebook took throughout 2020 shows that if the platform had acted earlier, adopting civil society advice and proactively detoxing its algorithm, it could have stopped 10.1 billion estimated views of content from top-performing pages that repeatedly shared misinformation over the eight months before the US elections.

  • Failure to downgrade the reach of these pages and to limit their ability to advertise in the year before the election meant Facebook allowed them to almost triple their monthly interactions, from 97 million interactions in October 2019 to 277.9 million interactions in October 2020 - catching up with the top 100 US media pages2 (e.g. CNN, MSNBC, Fox News) on Facebook.

  • Facebook has now rolled back many of the emergency policies it instituted during the elections, returning to the algorithmic status quo that allowed conspiracy movements like QAnon and Stop the Steal to flourish.

Facebook failed to prevent a flood of followers for violence-glorifying pages and groups, many of which are still active

  • Avaaz identified 267 pages and groups - in addition to “Stop the Steal” groups - with a combined following of 32 million, spreading violence-glorifying content in the heat of the 2020 election.

  • Out of the 267 identified pages and groups, 68.7 percent had Boogaloo, QAnon or militia-aligned names and shared content promoting imagery and conspiracy theories related to these movements.

  • Despite clear violations of Facebook’s policies, 118 of those 267 pages and groups are still active on the platform and have a following of just under 27 million - of which 59 are Boogaloo, QAnon or militia-aligned. Among these we found at least three instances of content which verged on incitement to violence, which we reported to Facebook to accelerate action.

Nearly 100 million voters saw voter fraud content on Facebook

  • A poll conducted in October 2020 found that 44% of registered voters reported seeing misinformation about mail-in voter fraud on Facebook (that equates to approximately 91 million registered voters). The polling suggests that 35% of registered voters (approximately 72 million people) believed this false claim.

Top fake news posts were more viral in 2020 than in 2019

  • The top 100 most popular false or misleading stories on Facebook, related to the 2020 elections, received an estimated 162 million views. The millions of users who saw these misinformation stories before they were labeled never received retroactive corrections to inform them that what they had seen wasn’t true.

  • Although each of the 100 stories had a fact-check publicly available from an organisation working in partnership with Facebook, 24% of the stories (24) had no warning labels to inform users of falsehood.

RECOMMENDATIONS: AVAAZ’s 10-POINT PLAN TO PROTECT DEMOCRACY:

The Biden Administration and Congress must urgently work together to regulate tech platforms:

  1. Transparency: Regulation that ensures transparency, requiring large social media platforms to provide comprehensive reports on disinformation and misinformation, measures taken against it, and the design and operation of their curation algorithms. Platforms’ algorithms must also be continually and independently audited based on clearly aligned KPIs to measure impact, prevent public harm, and improve design and outcomes.

  2. Detox the Algorithm: Regulation that ensures Facebook and other social media platforms consistently and transparently implement Detox the Algorithm policies to protect citizens from the amplification of online harms. Such regulation can change the way their algorithms incentivize and amplify content, downranking hateful, misleading, and toxic content from the top of people’s feeds. This can cut the spread of misinformation content and its sharers by 80%.

  3. Correct the Record: Regulation that ensures Facebook implements Correct the Record - requiring the platform to show a retroactive correction for independently fact-checked disinformation and misinformation to each and every user who viewed, interacted with, or shared it. This can cut belief in false and misleading information by nearly half.

  4. Reform 230: Reform Section 230 to eliminate any barriers to regulation requiring platforms to address disinformation and misinformation.

The Biden administration must also take immediate steps to:

Build an Effective Anti-Disinformation Infrastructure:

  5. Adopt a National Disinformation Strategy to launch a whole-of-government approach.

  6. Appoint a senior-level White House official to the National Security Council to mobilize a whole-of-government response to disinformation.

  7. Prioritize the appointment of the Global Engagement Center’s Special Envoy, who is responsible for coordinating efforts of the Federal Government to recognize, understand, expose, and counter foreign state and non-state propaganda and disinformation efforts.

  8. Immediately begin working with the EU to bolster transatlantic coordination on approaches to key tech policy issues, including disinformation.

  9. Establish a multiagency digital democracy task force to investigate and study the harms of disinformation across major social media platforms and present formal recommendations within six months.

Congress must:

  10. Investigate Facebook’s role in the insurrection: Ensure current congressional investigations and the proposed Jan. 6 Commission go beyond the actors involved in the insurrection, and investigate the tools they used, including Facebook’s role in undermining the 2020 elections, and whether the platform's executives were aware of how it was being used as a tool to radicalize Americans and/or facilitate the mobilization of radicalized individuals to commit violence.

Introduction

The violence on Capitol Hill showed us that what happens on Facebook does not stay on Facebook. Viral conspiratorial narratives cost American lives and almost set American democracy aflame.

Since the election, Facebook’s leadership has tried to paint a picture that the platform performed well, suggesting that it made a lot of progress in its efforts to protect American voters and American democracy. Furthermore, it has downplayed the platform’s role in fueling the Jan. 6 violence, claiming that other smaller platforms were mainly to blame.

Yet evidence collected by Avaaz’s anti-disinformation team over the 15 months between October 2019 and Inauguration Day on January 20, 2021, tells a different story. Throughout our team’s efforts to defend democracy against disinformation and misinformation, we found that false and misleading content on the platform was surpassing the levels reported ahead of the 2016 election as early as November 2019, and the problem only got worse.

By August 2020, our team had detected hundreds of pages and groups that were repeatedly sharing misinformation, much of which was designed to polarize American society and break down trust in America’s democratic institutions. Some pages and groups were posting violent imagery or countering official health advice related to COVID-19. Centrally, our research also showed dangerous gaps in Facebook’s moderation policies that allowed false and misleading content to go undetected for days, amassing tens of millions of views.

Seeing this threat unfold in real time, Avaaz and many other civil society organizations recommended urgent policy solutions to Facebook, yet the platform refused to adopt many of these steps with the urgency required - often only taking action after significant harm was done, such as the violence in Kenosha or the viral growth of the “Stop the Steal” movement.

This report provides an overarching view of the entire election season, including key statistics that highlight the extent to which Facebook’s refusal to act in a timely and effective manner harmed America’s information ecosystem. It shows that Facebook largely failed America in its most crucial election cycle in decades, leaving voters exposed to a surge of toxic lies and conspiracy theories.

As a result, Facebook was a significant catalyst in creating the conditions that swept America down the dark path from election to insurrection. The problem was not just the explicit calls to storm the Capitol that circulated on the platform before Jan. 6; it was also the larger ecosystem of lies and conspiracy theories that Facebook’s recommendation algorithms helped cultivate for the entire year before the election, radicalizing many Americans.

As a recent study of those charged with taking part in the Jan. 6 siege shows, over half (55%) were not connected to extremist groups, but rather were “inspired by a range of extremist narratives, conspiracy theories, and personal motivations."

As lawmakers call for a 9/11-style commission on the insurrection, they would be remiss to focus the investigation on just Trump and his allies on the Hill without taking a deeper look at the tools that empowered them. Even now, although Facebook has reactively taken action against some of the entities that encouraged the insurrection, the platform continues to refuse to adopt the structural changes and algorithmic transparency policies that would prevent it from being a major vector for misinformation and disinformation campaigns in the future. Moreover, the platform has rolled back many of the emergency policies it adopted around the election.

As a consequence, Jan. 6 may be just the beginning. Without action, misinformation and disinformation actors, and new narratives, are likely to expand again on the platform. After years of exposure to a polluted information ecosystem, America’s political immunity is already weakened. Consequently, tragic events like the one we saw in DC at the beginning of the year could become a mainstay of President Biden’s term. 

If the administration and Congress do not prioritize fixing this issue, all of their other policy priorities, from the COVID-19 response to racial justice, could be sabotaged.

American democracy remains fragile. Given there is so much at stake, President Biden and members of Congress have no time to waste. They can no longer wait for Zuckerberg to take his platform’s failure seriously. They need to push forward immediately on investigation and regulation.

Moreover, it is not only American democracy that is at risk. With important elections in Mexico, Ethiopia, Hong Kong, Germany, Iraq and elsewhere scheduled for this year, and increased economic instability due to COVID-19, the international community at large needs these democratic protections. The United States has a responsibility to the world to ensure that Facebook and other American-based tech platforms do not cause further harm.

The findings in this report will provide the White House, Congress, and policy-makers around the world with a window into what the platform could have done better.

Section One - From Election to Insurrection: How Facebook Failed American Voters

Just days before America went to the polls, Mark Zuckerberg said that the US election was going to be a test for the platform. After years of revelations about Facebook’s role in the 2016 vote, the election would showcase four years of work by the platform to root out foreign interference, voter suppression, calls for violence and more. According to Zuckerberg, Facebook had a “responsibility to stop abuse and election interference on [the] platform” which is why it had “made significant investments since 2016 to better identify new threats, close vulnerabilities and reduce the spread of viral misinformation and fake accounts.”
Mark Zuckerberg: "Election integrity is and will be an ongoing challenge. And I’m proud of the work that we have done here.”
But how well did Facebook do?

In a Senate hearing in November 2020, Zuckerberg highlighted some of the platform’s key steps:

...we introduced new policies to combat voter suppression and misinformation [...] We worked with the local election officials to remove false claims about polling conditions that might lead to voter suppression. We partnered with Reuters and the National Election Pool to provide reliable information about results. We attached voting information to posts by candidates on both sides and additional contexts to posts trying to delegitimize the outcome.

In January of this year, in an interview for Reuters after the election, Sheryl Sandberg said that the platform’s efforts “paid off in the 2020 election when you think about what we know today about the difference between 2016 and 2020.”

In very specific areas, such as Russian disinformation campaigns, Ms. Sandberg may be right, but when one looks at the bigger disinformation and misinformation picture, the data tells a very different story. In fact, these comments by Mr. Zuckerberg and Ms. Sandberg point to a huge transparency problem at the heart of the world’s largest social media platform - Facebook only discloses the data it wants to disclose and therefore evaluates its own performance, based on its own metrics. This is problematic. Put simply, Facebook should not score its own exam.

This report aims to provide a more objective assessment. Based on our investigations during the election and post-election period, and using hard evidence, this report shows clearly how Facebook did not live up to its promises to protect the US elections.

The report seeks to calculate the scale of Facebook’s failure - from its seeming unwillingness to prevent billions of views on misinformation-sharing pages, to its ineffectiveness at preventing millions of people from joining pages that spread violence-glorifying content. Consequently, voters were left dangerously exposed to a surge of toxic lies and conspiracy theories.

But more than this, its failures helped sweep America down the path from election to insurrection. The numbers in this section show Facebook’s role in providing fertile ground for, and incentivizing, a larger ecosystem of misinformation and toxicity that we argue contributed to radicalizing millions and helped create the conditions in which the storming of the Capitol building became a reality.

Facebook could have stopped nearly 10.1 billion estimated views of content from top-performing pages that repeatedly shared misinformation

Polls show that around seven-in-ten (69%) American adults use Facebook, and over half (52%) say they get news from the social media platform. This means Facebook’s algorithm has significant control over the type of information ecosystem saturating American voters. Ensuring that the algorithm was not polluting this ecosystem with a flood of misinformation should have been a key priority for Facebook.

The platform could have more consistently followed advice to limit the reach of misinformation by ensuring that such content and repeat sharers of “fake news” are downgraded in users’ News Feeds. By tweaking its algorithm transparently and engaging with civil society to find effective solutions, Facebook could have ensured it didn’t amplify falsehoods and conspiracy theories, and those who repeatedly peddled them, and instead detoxed its algorithm to ensure authoritative sources had a better chance at showing up in users’ News Feeds.

Yet after tracking top-performing pages that repeatedly shared misinformation on Facebook,3 alongside the top 100 media pages,4 our research reveals that Facebook failed to act for months before the election. This was in spite of clear warnings from organisations, including Avaaz, that there was a real danger this could become a misinformation election. Such a divisive and politically-charged election was almost guaranteed to be the perfect environment for misinformation-sharing pages because their provocative style of content is popular and Facebook’s algorithm learns to privilege and boost such content in users’ News Feeds.

In August, our investigations team began reporting to the platform, and sharing with fact-checkers, key misinformation content and inauthentic behavior we had identified that had avoided detection by the platform - highlighting to Facebook how their systems were failing.

It had become clear that Facebook would not act with the urgency required to protect users against misinformation by proactively detoxing its algorithm.5 Our strategic approach was therefore to fill the gaps in Facebook’s moderation policies and its misinformation detection methods: by assisting fact-checkers in finding and flagging more misinformation content from prominent misinformers, we aimed to help ensure that Facebook’s “downranking” policy for repeat misinformers would kick in early enough to protect voters.

According to our findings, it wasn’t until October 2020, after mounting evidence that the election was being pummeled with misinformation, and with an expanded effort across civil society for finding and reporting on misinformation, that Facebook’s systems began to kick in and the platform took action to reduce the reach of repeat sharers of misinformation, seemingly adding friction on content from the pages we had identified.

Why we consider total estimated views from the top-performing pages that repeatedly shared misinformation

The Facebook pages and groups identified are not solely spreading verifiable misinformation, but also other types of content, including general non-misinformation content.

There are two main reasons that we’re focusing on the total estimated views:

  1. Misinformation is spread by an ecosystem of actors.

    Factually inaccurate or misleading content doesn’t spread in isolation: it’s often shared by actors who are spreading other types of content, in a bid to build followers and help make misinformation go viral. A recent study published in Nature showed how clusters of anti-vaccination pages managed to become highly entangled with undecided users and share a broader set of narratives to draw users into their information ecosystem. Similarly, research by Avaaz conducted ahead of the European Union elections in 2019 observed that many Facebook pages will share generic content to build followers, while sharing misinformation content sporadically to those users. Hence, understanding the estimated views garnered by these top-performing pages that repeatedly share misinformation during a specific period can provide a preview of their general reach across the platform.


  2. One click away from misinformation content. 

    When people interact with such websites and pages, those who might have been drawn in by non-misinformation content can end up being exposed to misinformation, either with a piece of false content being pushed into their News Feed or through misinformation content that’s highlighted on the page they may click on. Our methodology of measuring views is highly dependent on interactions (likes, comments, shares), but many users who visit these pages and groups may see the misinformation content but not engage with it. Furthermore, many of these pages and groups share false and misleading content at a scale that cannot be quickly detected and fact-checked. Until the fact-check of an article with false information is posted, a lot of the damage is already done and unless the fact-check is retroactive, the correction may go unseen by millions of affected viewers.

Consequently, understanding the impact these pages and groups can have on America’s information ecosystem is more accurately reflected by the overall amount of views they are capable of generating. Moreover, this measurement allows us to better compare the overall views of these pages to the views of authoritative news sources.
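To make the arithmetic behind these view estimates concrete, here is a minimal sketch of how interaction counts can be converted into estimated views. The multiplier and the monthly totals below are illustrative placeholders, not the actual coefficients or data used in Avaaz’s methodology.

```python
# Minimal sketch: converting public interaction counts into estimated views.
# The views-per-interaction multiplier and sample totals are illustrative
# assumptions, not the coefficients used in the Avaaz methodology.

VIEWS_PER_INTERACTION = 30.0  # hypothetical ratio from a calibration sample

def estimate_views(interactions: int, multiplier: float = VIEWS_PER_INTERACTION) -> int:
    """Estimate views from an interaction count (likes + comments + shares)."""
    return round(interactions * multiplier)

# Hypothetical monthly interaction totals for a set of monitored pages.
monthly_interactions = {"2020-08": 210_000_000, "2020-09": 245_000_000, "2020-10": 278_000_000}

total_estimated_views = sum(estimate_views(i) for i in monthly_interactions.values())
print(f"Estimated views over the period: {total_estimated_views:,}")
```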

Specifically, our analysis shows how, from October 10, 2020, there was a sudden decline in interactions on some of the most prominent pages that had repeatedly shared misinformation. This analysis was supported by posts made by many of the pages we were monitoring and reporting, which started to announce that they were being ‘suppressed’ or ‘throttled’ (see below for examples). In contrast, the top 100 media outlet pages on Facebook remained at a steady pace.6


Figure 1: June 2020 to November 2020 (decline in monthly interactions of the most prominent pages that had shared misinformation). Graph generated using CrowdTangle.7 The graph shows the comparative interactions of the top-performing pages that repeatedly shared misinformation and the Top 100 Media pages in our study over a period of five and a half months. The clear decline in engagement takes effect in the week of October 10. The high peak in the week of November 1 to November 8 is driven by an outlier: a few posts confirming the winner of the election on November 7.



Post by Hard Truth, one of the pages in our study, in which they report a one million decrease in post reach. 
(Screenshot taken on Nov 3, 2020, stats provided by CrowdTangle8 at the bottom of the post are applicable to this time period)



Post by EnVolve, one of the pages in our study, in which they report a decrease in post reach. In addition, underneath the post the statistics provided by CrowdTangle show that it received 88.1% fewer interactions (likes, comments and shares) than the page’s average interaction rate.(Screenshot taken on Nov 3, 2020, stats provided by CrowdTangle9 at the bottom of the post are applicable to this time period)

What the experts say about downranking untrusted sources

A number of studies and analyses have shown that platforms can utilize different tactics to fix their algorithms by downranking untrusted sources instead of amplifying them, decreasing the amount of users that see content from repeat misinforming pages and groups in their newsfeeds.

Facebook has also spoken about “Break-Glass tools” which the platform can deploy to “effectively throw a blanket over a lot of content”. However, the platform is not transparent about how these tools work, their impact, and why, if they can limit the spread of misinformation, they have not been adopted as consistent policy.

As a recent commentary from Harvard’s Misinformation Review highlights, more transparency and data from social media platforms is needed to better understand and thus design the most optimal algorithmic downranking policies. However, here are some recommended solutions:

  • Avaaz recommends downgrading based on content fact-checked by independent fact-checkers, where pages, groups and websites flagged frequently for misinformation are down-ranked in the algorithm. This is the process Facebook partially adopts, but with minimal transparency.
  • A recent study at MIT shows that downgrading based on users’ collective wisdom, where trustworthiness is crowdsourced from users, would also be effective. Twitter has announced it will pilot a product to test this.
  • Downgrading based on a set of trustworthiness standards, such as those identified by Newsguard. Google, for example, now uses thousands of human raters, clear guidelines, and a rating system that helps inform its algorithm’s search results.

Based on our observations of the impact of downranking of pages we monitored, and Facebook’s own claims about the impact of downranking content, action to detox Facebook’s algorithm can reduce the reach of misinformation sharing pages by 80%.
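As a rough illustration of how such a policy translates into reduced reach, the sketch below applies a fixed demotion multiplier to a post’s ranking score once its page passes a strike threshold. The threshold and the 80% figure are assumptions drawn from the reduction described in this report, not Facebook’s actual parameters.

```python
# Minimal sketch of a "downrank repeat misinformers" policy.
# The strike threshold and the 80% demotion are illustrative assumptions,
# not Facebook's real ranking parameters.

REPEAT_MISINFORMER_STRIKES = 2  # fact-checked falsehoods before demotion applies
DEMOTION_FACTOR = 0.2           # keep 20% of the original score, i.e. an 80% cut

def ranked_score(base_engagement_score: float, fact_check_strikes: int) -> float:
    """Return the feed-ranking score after applying the demotion policy."""
    if fact_check_strikes >= REPEAT_MISINFORMER_STRIKES:
        return base_engagement_score * DEMOTION_FACTOR
    return base_engagement_score

print(ranked_score(1000.0, fact_check_strikes=0))  # 1000.0 - no demotion
print(ranked_score(1000.0, fact_check_strikes=3))  # 200.0 - reach cut by 80%
```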

It is also important to note the further steep decline for both the top-performing pages in our study that repeatedly shared misinformation and authoritative media outlets following November 7, 2020, after the media networks made the call that President Biden had won the election.

This steep decline likely reflects the introduction of what Facebook itself termed “Break Glass” measures, which further tweaked the algorithm to increase “News Ecosystem Quality”, thus reducing the reach of repeat misinformers. Moreover, a number of misinformation-sharing pages, which Avaaz discovered in a separate investigation were connected to Steve Bannon, were removed from the platform mid-November after we reported to Facebook that they were using inauthentic behavior to amplify “Stop the Steal” content. This removal likely also played a role in the steep decrease after Election Day, highlighted in figure 2 below.


Figure 2: March 15, 2020 - March 16, 2021  Graph generated using CrowdTangle10 (Avaaz edits in red)

The data we present above and the potential consequences for the quality of Americans’ information ecosystem in terms of the reach and saturation of content from these sources, highlight how the platform always had the power to tweak its algorithm and policies to ensure that misinformation is not amplified. Only Facebook can confirm the exact details of the changes it made to its algorithm and the impact those actions had beyond the list of actors Avaaz monitored.

Importantly, these findings also suggest that Facebook made a choice not to proactively and effectively detox its algorithm at key moments in the election cycle, allowing repeat misinformer pages the opportunity to rack up views on their content for months, saturating American voters with misinformation up until the very final weeks of the election.

According to our analysis, had Facebook tackled misinformation more aggressively when the pandemic first hit in March 2020 (rather than waiting until October), the platform could have stopped 10.1 billion estimated views of content on the top-performing pages that repeatedly shared misinformation ahead of Election Day. Our analysis also showed that this would have ensured American voters saw more authoritative and high-quality content.

Compare this number to the actions the company said it took to protect the election (below). In this light, the platform’s positive reporting on its impact appears biased: it does not highlight what it could have done, how many people saw the misinformation before it acted on it, or any measure of the harms caused by the lack of early action. This lack of transparency, and the findings in this report, suggest that the platform did not do enough to protect American voters.





Infographic from Facebook to highlight the measures they were taking to prepare for election day.

It is important to note that the pages we identified spanned the political spectrum, ranging from far-right pages to libertarian and far-left pages. Of the 100 pages, 60 leaned right, 32 leaned left, and eight had no clear political affiliation. Digging further into the data, we were able to analyse the breakdown of the misinformation posts that were shared by these pages and we found that 61% were from right-leaning pages. This misinformation content from right-leaning pages also secured the majority of views on the misinformation we analysed - securing 62% of the total interactions on that type of content. What is important to emphasize is that the sharing of misinformation content targets all sides of the political spectrum, influencing polarization on both sides of the aisle, and that Facebook is failing to protect Americans from prominent actors that repeatedly share this type of content.

How the top-performing pages that repeatedly shared misinformation rivaled authoritative sources in the amount of interactions they received

Facebook’s refusal to adopt stronger ‘Detox the Algorithm’ tools and processes did not just allow these repeat misinformers to spread their content; it actually allowed their popularity to skyrocket. According to our research, the top performers collectively almost tripled their monthly interactions - from 97 million interactions in October 2019 to 277.9 million interactions in July 2020 - catching up with the top 100 US media pages on Facebook. At the exact moment when American voters needed authoritative information, Facebook’s algorithm was boosting pages repeatedly sharing misinformation.



Figure 3: Oct 4, 2019 - Oct 4, 2020 (monthly interactions on the top-performing pages that repeatedly shared misinformation in our study)  Graph generated using CrowdTangle11. The graph shows total interactions for the 100 pages, each color represents a different type of reaction (see color code). Interactions peaked in July 2020 at almost 278 million interactions.


Figure 4: Feb 16 2016 - Dec 1 2020 (monthly interactions) Graph generated using CrowdTangle12. This graph shows total interactions on content posted by the top-performing pages that repeatedly shared misinformation in our study over roughly a four-year period. Each color represents a different type of post and is sized by the amount of interactions it received. Here we can see that photos received the most amount of interactions. We can also see clear peaks ahead of November 2016 and again before November 2020.



Figure 5: June, 2020 to November, 2020 (Monthly interactions)
Graph generated using CrowdTangle13 (Avaaz edits in red). The graph shows the comparative interaction rate of the top-performing pages that repeatedly shared misinformation and the Top 100 Media pages in our study over a period of just over five months. The period between the end of May 2020, and August 1, 2020 saw significant anti-racism protests after the murder of George Floyd on May 25, 2020, as well as a surge in COVID-19 cases.  


In fact, during the anti-racism protests in July and August 2020, the top-performing pages that repeatedly shared misinformation actually surpassed the top US media pages in engagement (reactions, comments and shares on posts). This is despite the top 100 US media pages having about four times as many followers.

Our analysis of these pages shows that Facebook allowed them to pump out misinformation on a wide number of issues, with misinformation stories often receiving more interactions than some stories from legitimate news sources. This finding brings to the forefront the question of whether or not Facebook’s algorithm amplifies misinformation content and the pages spreading misinformation, something only an independent audit of Facebook’s algorithm and data can fully confirm. The scale at which these pages appear to have gained ground on authoritative news pages on the platform, despite the platform’s declared efforts to moderate and downgrade misinformation, suggests that Facebook’s moderation tactics are not keeping up with the amplification the platform provided to misinformation content and those spreading it.

On the role of Facebook’s algorithm in amplifying election misinformation

Facebook’s algorithm decides the content users see in their News Feeds based on a broad range of variables and calculations, such as the amount of reactions and comments a post receives, whether the user has interacted with the content of the group and page posting, and a set of other signals.

This placement in the News Feed, which Facebook refers to as “ranking”, is determined by the algorithm, which in turn learns to prioritize content based on a number of variables such as user engagement, and can provide significant amplification to a piece of content that is provocative. As Mark Zuckerberg has highlighted:

“One of the biggest issues social networks face is that, when left unchecked, people will engage disproportionately with more sensationalist and provocative content (...) At scale it can undermine the quality of public discourse and lead to polarization.”

A heated election in a politically-divided country was always going to produce sensationalist and provocative content. That type of content receives significant engagement. This engagement will, in turn, be interpreted by the algorithm as a reason to further boost the content in the News Feed, creating a vicious cycle in which the algorithm consistently and artificially gives political misinformation that has not yet been flagged an upper hand over authoritative election content within the information ecosystem it presents to Facebook users.
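The feedback loop described above can be sketched in a few lines: engagement earned in one round buys additional distribution in the next. The starting values and rates are illustrative assumptions, chosen only to show how a more provocative post pulls ahead of an authoritative one.

```python
# Minimal sketch of the engagement-ranking feedback loop described above.
# All starting values and rates are illustrative assumptions.

def simulate_feedback_loop(initial_impressions: int, engagement_rate: float,
                           boost_per_engagement: float, rounds: int) -> list[int]:
    """Each round, engagement from the previous round earns extra impressions."""
    impressions = initial_impressions
    history = []
    for _ in range(rounds):
        engagements = impressions * engagement_rate
        impressions = int(impressions + engagements * boost_per_engagement)
        history.append(impressions)
    return history

# A provocative post (8% engagement rate) versus an authoritative one (2%),
# both starting from the same initial distribution.
print(simulate_feedback_loop(10_000, engagement_rate=0.08, boost_per_engagement=5, rounds=5))
print(simulate_feedback_loop(10_000, engagement_rate=0.02, boost_per_engagement=5, rounds=5))
```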

Facebook initially designed its algorithms largely based on what serves the platform’s business model, not on what is optimal for civic engagement. Moreover, the staggering number of about 125 million fake accounts that Facebook admits are still active on the platform is continually skewing the algorithm in ways that are not representative of real users.

Furthermore, the platform’s design creates a vicious cycle where the increased engagement with the political misinformation content generated by the algorithm will further increase the visibility and amount of users who follow the pages and websites sharing the misinformation.

In 2018, after the Cambridge Analytica scandal and the disinformation crisis it faced in 2016, Facebook publicly declared that it had re-designed its ranking measurements for what it includes in the News Feed “so people have more opportunities to interact with the people they care about”. Mark Zuckerberg also revealed that the platform tried to moderate the algorithm by ensuring that “posts that are rated as false are demoted and lose on average 80% of their future views”.

These were welcome announcements and revelations, but our findings in a previous report on health misinformation, published in August 2020, indicated that Facebook was still failing to prevent the amplification of misinformation and the actors spreading it. The report strongly suggested that Facebook’s algorithmic ranking process was being weaponized by health misinformation actors coordinating at scale to reach millions of users, and/or that the algorithm remains biased towards the amplification of misinformation, as it was in 2018. The findings also suggest that Facebook’s moderation policies to counter this problem are still not being applied proactively and effectively enough. In that report’s recommendation section, we highlight how Facebook can begin detoxing its algorithm to prevent a disproportionate amplification of health misinformation. This report suggests Facebook did not take sufficient action.

Examples of misinformation spread by some of the top-performing pages

Claims of election fraud, including this video that contains the false claim that votes were changed for Joe Biden using the HAMMER program and a software called Scorecard [00.36]. The post received 41,300 interactions and 741,000 views, despite the voter fraud claim being debunked by Lead Stories.



Claims that the Democrats were planning to steal the election, including this video purportedly showing Biden announcing that he had created the largest voter fraud organization in history. This video was debunked by Lead Stories, but despite this, this post alone racked up almost 54,500 interactions. Furthermore, Avaaz analysis using CrowdTangle,14 conducted in addition to the research used to identify the top-performing pages, showed that the video was shared by a wide range of actors outside of the pages in this study. Across Facebook, it garnered 197,000 interactions and over one million views in the week before Election Day. The video was reported to Facebook and fact-checks were applied, but only after hundreds of thousands had watched it.




Candidate-focused misinformation, including the claim that Joe Biden stumbled while  answering a question about court-packing and had his staff escort the interviewers out (included in the following post, which plucks one moment of a real interview out of context, receiving around 43,800 interactions and 634,000 video views).

Politifact rated the claims as false, clarifying that: “President Donald Trump’s campaign is using a snippet of video to show a moment when Joe Biden ostensibly stumbled when asked about court packing. The message is that Biden’s stumble was his staff’s cue to hustle reporters away. In reality, the full video shows that Biden gave a lengthy answer, and then Biden’s staff began moving members of the press to attend the next campaign event.”


The post published on October 27 remains unlabelled despite this precise post being fact-checked by one of Facebook’s fact-checking partners.




Claims that President Trump is the only President in US history to simultaneously hold records for the biggest stock market drop, the highest national debt, the most members of his team convicted of crimes, and the most pandemic-related infections globally. This claim, for example in this post, was fact-checked by Lead Stories, a Facebook fact-checking partner, which highlighted that it is not true: “The first two claims in this post are not false, but they are missing context. The second two claims are false.”




Candidate-focused misinformation, including the claim that Joe Biden is a pedophile (included in the following post, which received 1,157 total interactions and an estimated 22,839 views, despite the claim being debunked by PolitiFact).

In addition, our research also identified multiple variations of this post outside of our pages study without a fact-checking label.





Misinformation about political violence, for example this post, originating on Twitter, claims that Black Lives Matter (BLM) protesters blocked the entrance of a hospital emergency room to prevent police officers from being treated (the post received over 56,000 interactions despite the narrative being debunked by Lead Stories).



How Facebook’s last-minute action missed prominent misinformation sharing actors

Avaaz analysis shows that, even though Facebook eventually took action that decreased the interactions received by the top-performing pages that had repeatedly shared misinformation, this action did not affect all of them. According to our investigation, Facebook completely ignored or missed some of the most high-profile offenders.

As an example of what Facebook was able to achieve when it tweaked its algorithm, consider a repeat misinformer Avaaz had detected in early September: James T. Harris. His page saw a significant downgrade in its reach from a peak of 626,860 interactions a week, to under 50,000 interactions a week after Facebook took action:


Figure 6: Aug 9, 2020 - Nov 11, 2020
Graph generated using CrowdTangle.15 This graph shows the total interactions on content posted by the Facebook page of James T. Harris over roughly three months. The different colors show which types of post received the most engagement. Here you can see that photos received the most engagement.


This is one example of a fact-checked misinformation post shared on James T. Harris’ page. The post, which contains the false claim that the House Democrats created a provision in the HEROES Act to keep the House in recess, garnered 3,121 interactions and an estimated 61,608 views, despite being debunked by PolitiFact.

However, another page we found to have repeatedly shared misinformation, that of Dan Bongino, did not see a steep decline. In fact, it saw a rise in engagement:


Figure 7: Jun 6, 2020 - July 11, 2020. Graph generated using CrowdTangle.16 This graph shows the total interactions on content posted by the Facebook page of Dan Bongino over the period shown. The different colors show which types of post received the most engagement. Here you can see that links received the most engagement.

In fact, just 10 days before the storming of the Capitol, this page shared a speech by Senator Ted Cruz calling to defend the nation and prosecute and jail all those involved in voter fraud. This post alone garnered 306,000 interactions and an estimated 6,040,440 views.
It is unclear to us how Facebook made the decision to downgrade James T. Harris, but not to downgrade Dan Bongino, despite both being repeat sharers of misinformation.


Facebook failed to prevent violence-glorifying pages and groups - including those aligned with QAnon and militias - from gaining a following of nearly 32 million17

Facebook’s platform was reportedly used to incite the assault on the Capitol, and was used to promote and drive turnout to the events that preceded the riots.

More worryingly, however, for many months prior to the insurrection, Facebook was home to a set of specific pages and groups that mobilized around the glorification of violence and dangerous conspiracy theories that we believe helped legitimize the type of chaos that unfolded during the insurrection. Many of these pages and groups may not have repeatedly shared disinformation content or built up millions of followers, and so would not have appeared in our most prominent pages analysis above, but they can have a devastating effect on the followers they do attract.

Facebook’s action to moderate and/or remove many of these groups came too late, meaning they had already gained significant traction using the platform. Moreover, Facebook again prioritized piecemeal, whack-a-mole approaches - on individual content, for example - over structural changes in its recommendation algorithm and organizational priorities, thus not applying the more powerful tools available to truly protect its users and democracy. For example, Facebook only stopped its recommendation system from sending users to political groups in October of 2020, even though Avaaz and other experts had highlighted that these recommendations were helping dangerous groups build followers.

The violence that put lawmakers, law enforcement and others’ lives in harm’s way on Jan 6 was primed not just by viral, unchecked voter fraud misinformation (as described above), but by an echo chamber of extremism, dangerous political conspiracy theories, and rhetoric whose violent overtones Facebook allowed to grow and thrive for years.

In the heat of a critical and divisive election cycle, it was imperative that the company take aggressive action on all activity that could stoke offline political violence. The company promised to prohibit the glorification of violence and took steps between July and September 2020 to expand its “Dangerous Individuals and Organizations” policy to address “militarized social movements” and “violence-inducing conspiracy networks”, including those with ties to the Boogaloo and QAnon movements. However, a significant universe of groups and pages that Avaaz has analyzed since the summer of 2020, and that regularly violated these Facebook policies, was allowed to remain on the platform for far too long.

In addition to the “Stop the Steal” groups noted in the previous section, since June 2020, Avaaz identified 267 pages and groups with a combined following of 32 million that spread violence-glorifying content in the heat of the 2020 election.

Out of the 267 identified pages and groups, as of December 23, 2020, 183 (68.7%) had Boogaloo, QAnon, or militia-aligned names and shared content promoting movement imagery and conspiracy theories. The remaining 31.3 percent were pages and groups with no identifiable alignment with violent, conspiracy-theory-driven movements, but were promoters of memes and other content that glorified political violence ahead of, during, and in the aftermath of high profile instances of such violence, including in Kenosha, Wisconsin, and throughout the country during anti-racism protests. 18

As of February 24, 2021, despite clear violations of Facebook’s policies, 118 of the 267 identified pages and groups are still active, of which 58 are Boogaloo, QAnon, or militia-aligned, and 60 are non-movement-aligned pages and groups that spread violence-glorifying content. And although Facebook removed 56% of these pages and groups, its action shielded only a small fraction (16%) of their total following from the violence-glorifying content they were spreading.

By February 24, 2021, Facebook had also removed 125 out of the 183 Boogaloo, QAnon, and militia-aligned pages and groups identified by Avaaz. However, similarly, this accounted for a mere 11% (914,194) of their total followers. The 58 pages and groups that remain active account for 89% (7.75m) of the combined following of these Boogaloo, QAnon, and militia-aligned pages and groups.
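A quick recomputation of the figures above shows why removing most of these pages still left most of their followers exposed: the surviving pages and groups are the largest ones. The sketch below simply re-derives the percentages from the page and follower counts reported in this section.

```python
# Recomputing the follower-weighted removal figures reported above for the
# Boogaloo, QAnon, and militia-aligned set (counts taken from this section).
removed_pages, remaining_pages = 125, 58
removed_followers, remaining_followers = 914_194, 7_750_000

total_pages = removed_pages + remaining_pages            # 183
total_followers = removed_followers + remaining_followers

print(f"Share of pages and groups removed: {removed_pages / total_pages:.0%}")               # ~68%
print(f"Share of followers covered by removals: {removed_followers / total_followers:.0%}")  # ~11%
```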

How Facebook missed pages and groups that simply changed their names 

Although Facebook promised to take more aggressive action against violent, conspiracy-fueled movements in September, our research shows it failed to enforce its expanded policies against movement pages and groups that had changed their names to avoid detection and accountability, including those associated with the Three Percenters movement - leaders and/or followers of which have been charged in relation to the Jan. 6 insurrection. For example, on September 3, 2020, the Facebook page “III% Nation” (which identifies with the militia Three Percenters movement) changed its name to “Molon Labe Nation”. Molon Labe is reportedly an expression which translates to “come and take [them]” and is used by those associated with the Three Percenters movement. Under its previous name (“III% Nation”), the page had a link to an online store selling t-shirts with “III% Nation” imprints. Upon changing its name, the owners of the page still have a Teespring account where they continue to sell t-shirts and mugs, but the imprints have changed from “III% Nation” to “Molon Labe Nation”. To date, this page is still active.

Among the content shared by these pages and groups were some particularly frightening posts, including calls for taking up arms and civil war, and content that glorified armed violence, expressed support or praise for a mass shooter, and/or made light of the suffering or deaths of victims of such violence - all of which violates Facebook’s Community Standards.

Here are some examples:


Shared to “Swamp Drainer Disciples: The Counter-Resistance”



Post calling for an armed revolt posted on a Boogaloo-aligned page19.


Post threatening violence against local national guards posted on a Boogaloo-aligned page.


Shared by “American’s Defending Freedom”


Shared by “The Libertarian Republic”

Had Facebook acted sooner, it could have prevented these pages and groups from exposing millions of users to dangerous conspiracy theories and content celebrating violence. Instead it allowed these groups to grow.

It is not as if Facebook was unaware of the problem. In 2020, internal documents revealed that executives were warned by their own data scientists that blatant misinformation and calls for violence plagued groups. In one presentation in August 2020, researchers said roughly “70% of the top 100 most active US Civic Groups are considered non-recommendable for issues such as hate, misinfo, bullying and harassment.” In 2016, Facebook researchers found that “64% of all extremist group joins are due to our recommendation tools”.

Facebook has allowed its platform to be used by malicious actors to spread and amplify such content, and that has created a rabbit hole for violent radicalization. Academic research shows that “ideological narrative provides the moral justifications rendering violence acceptable and even desirable against outgroup members”, demonstrating just how dangerous it is for content glorifying violence or praising mass shooters to spread without consequence on the world’s largest social media platform.

Facebook is a content accelerator, and the company’s slow reaction to this problem allows violent movements to grow and gain mainstream attention - by which stage any action is too little too late. For example, by the time Facebook announced and started to implement its ban on activity representing QAnon, the movement’s following on the platform had grown too large to be contained. As pages, groups, and content were removed under this expanded policy, users migrated some of their activity over to smaller, ideologically-aligned platforms to continue recruitment and mobilization, but many remained on the platform sharing content from their personal accounts or in private groups, which are harder to monitor.

This is significant given that, according to recent reports from the court records of those involved in the insurrection, “an abiding sense of loyalty to the fringe online conspiracy movement known as QAnon is emerging as a common thread among scores of the men and women from around the country arrested for their participation in the deadly U.S. Capitol insurrection”.


Facebook failed to prevent 162 million estimated views on the top 100 most popular election misinformation posts/stories

Back in November 2019 - a year before the 2020 elections - Avaaz raised the alarm on the US 2020 vote, warning that it was at risk of becoming another “disinformation election”. Our investigation found that Facebook's political fake news problem had surged to over 158 million estimated views on the top 100 fact-checked, viral political fake-news stories on Facebook. This was over a period of 10 months in 2019, before the election year had even started. We called on Facebook to urgently redouble its efforts to curb misinformation.

However, our investigation shows that, when comparing interactions to the year before, Facebook’s measures failed to reduce the spread of viral misinformation on the platform.

Avaaz identified the top 100 false or misleading stories related to the 2020 elections, which we estimated were the most popular on Facebook.20 All of these stories were debunked by fact-checkers working in partnership with Facebook. According to our analysis, these top 100 stories alone were viewed an estimated 162 million times in a period of just three months, showing a much higher rate of engagement on top misinformation stories than the year before, despite all of Facebook’s promises to act effectively against misinformation that is fact-checked by independent fact-checkers. Moreover, although Facebook claims to slow down the dissemination of fake news once it is fact-checked and labeled, this finding clearly shows that its current policies are not sufficient to prevent false and misleading content from going viral and racking up millions of views.

In fact, to understand the scale and significance of the reach of those top 100 misleading stories, compare the estimated 162 million views of disinformation we found with the 159 million people who voted in the election. Of course, individuals on Facebook can “view” a piece of content more than once, but what this number does suggest is that a significant subset of American voters who used Facebook were likely to have seen misinformation content.

It is also worth pointing out that the 162 million figure is just the tip of the iceberg, given that it relates only to the top 100 most popular stories. Moreover, we only analyzed content that was flagged by fact-checkers; we did not calculate numbers for content that slipped under the radar but was clearly false. In short, unless Facebook is fully transparent with researchers, we will never know the full scale of misinformation on the platform and how many users viewed it. Facebook does not report on how much misinformation pollutes the platform, and does not allow transparent and independent audits to be conducted.

Part of the problem, identified previously by Avaaz research published almost a year ago, is that Facebook can be slow to apply a fact-checking label after a claim has been fact-checked by one of its partners. However, we have also found that Facebook can even fail to add a label to fact-checked content at all. Of the top 100 stories we analyzed for this report, Avaaz found that 24% of the stories (24) had no warning labels to inform users of falsehoods, even though they had a fact-check provided by an organization working in partnership with Facebook. So while independent fact-checkers moved quickly to respond to the deluge of misinformation around the election, Facebook did not match these efforts by ensuring that fact-checks reach those who are exposed to the identified misinformation.

Unlabeled posts among those 100 stories included debunked claims surrounding voter fraud. The post below, which remains live on Facebook and unlabeled, includes the following claim: “System ‘Glitch’ Also Uncovered In Wisconsin – Reversal of Swapped Votes Removes Lead from Joe Biden”. The post links to a Gateway Pundit article, which alleges that it “identified approximately 10,000 votes that were moved from President Trump to Biden in just one Wisconsin County”. Despite the article and claim being fact-checked by USA Today (which found that “There was no glitch in the election system. And there’s no “gotcha” moment or evidence that will move votes over to Trump’s column. The presidential totals were transposed for several minutes because of a data entry error by The Associated Press, without any involvement by election officials.”), the post garnered 271 interactions, while the article has accumulated 82,611 total interactions on the platform.




An additional problem is that Facebook’s AI is not fit for purpose. As Avaaz revealed in October 2020, variations of misinformation already marked false or misleading by Facebook can easily slip through its detection system and rack up millions of views. Flaws in Facebook's fact-checking system mean that even simple tweaks to misinformation content are enough to help it escape being labeled by the platform. For example, if the original meme is marked as misinformation, it's often sufficient to choose a different background image, slightly change the post, crop it or write it out, to circumvent Facebook's detection systems.




However, according to our research, sometimes it does not even require a minor change; whilst one post might receive a fact-checking label, an identical post might not be detected, thus avoiding being labeled.



Further Avaaz research has also indicated that the fact-check labelling system is not being applied consistently. On December 3, 2020, Avaaz reported 204 misinformation posts to Facebook about Georgia and the 2020 elections, of which 60% (122 posts) contained no fact-check label despite Facebook partners flagging them.

Moreover, this research showed other nearly-identical misinformation posts continue to receive different treatments from Facebook. Below is an example of a story debunked by Lead Stories, which simultaneously appears on the platform in a post without any kind of label, in a post with an election information label, and in a post with a fact-check label.






These findings seriously undermine Facebook’s recent claim from May 2020 that its AI is already “able to recognize near-duplicate matches” and “apply warning labels”, regarding Covid-19 misinformation and exploitative content, noting that “for each piece of misinformation [a] fact-checker identifies, there may be thousands or millions of copies.” Facebook’s detection systems need to be made much more sophisticated to prevent misinformation going viral.
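To illustrate why trivially edited copies escape exact matching, the sketch below contrasts an exact hash comparison with a normalized fuzzy match. It is a deliberately simplified stand-in for the image- and text-matching systems Facebook describes; the claim text, threshold, and matching method are assumptions for illustration only.

```python
# Minimal sketch contrasting exact matching (defeated by small tweaks) with a
# normalized fuzzy match. This is an illustrative stand-in, not Facebook's
# actual near-duplicate detection system; the threshold is an assumption.
import hashlib
import re
from difflib import SequenceMatcher

DEBUNKED_CLAIM = ("System glitch uncovered in Wisconsin - reversal of swapped "
                  "votes removes lead from Joe Biden")

def exact_fingerprint(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace before comparing."""
    cleaned = re.sub(r"[^a-z0-9 ]+", " ", text.lower())
    return re.sub(r"\s+", " ", cleaned).strip()

def is_near_duplicate(candidate: str, known_claim: str, threshold: float = 0.85) -> bool:
    ratio = SequenceMatcher(None, normalize(candidate), normalize(known_claim)).ratio()
    return ratio >= threshold

tweaked = "SYSTEM 'GLITCH' Also Uncovered In Wisconsin -- Reversal of Swapped Votes Removes Lead from Biden!"

print(exact_fingerprint(tweaked) == exact_fingerprint(DEBUNKED_CLAIM))  # False: exact match misses the copy
print(is_near_duplicate(tweaked, DEBUNKED_CLAIM))                       # True: fuzzy match still flags it
```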

Another worrying failing from Facebook on the issue of viral misinformation in this election concerns retroactive corrections. Even if fake stories slipped through cracks in its policies, the platform could have taken action to ensure that everyone who saw these fake stories was shown a retroactive correction offering them factually correct information on the issue concerned. This solution is called ‘Correct the Record’, and Facebook could have started implementing it years ago. It would have prevented voters from being repeatedly primed with viral fake news that eroded trust in the democratic process.

Again, Facebook very belatedly, and temporarily, applied a light version of corrections by including fact-checks about the elections in users’ news feeds (see screenshots below), but the platform only took this step in the days after the election, after months during which millions of Americans were flooded with election-fraud and other forms of misinformation.

Moreover, Facebook rolled back this policy a few weeks after it instituted it. The question is, again, why doesn’t Facebook commit to including fact-checks from independent fact-checkers that it partners with in the news feeds of users on the platform who have seen misinformation?





An independent study conducted by researchers at George Washington University and the Ohio State University, with support from Avaaz, showed that retroactive corrections can decrease the number of people who believe misinformation by up to 50%. 

As a result of Facebook’s unwillingness to adopt this proactive solution, the vast majority of the millions of US citizens exposed to misinformation will never know that they have been misled - and misinformers continue to be incentivized to make their content go viral as quickly as possible.

How Facebook can start getting serious about misinformation by expanding its COVID-19 misinformation policy 

Back in April 2020, an investigation by Avaaz uncovered that millions of Facebook’s users were being exposed to coronavirus misinformation without any warning from the platform. On the heels of reporting these findings to Facebook, the company decided to take action by issuing retroactive alerts: "We’re going to start showing messages in News Feed to people who have liked, reacted or commented on harmful misinformation about COVID-19 that we have since removed. These messages will connect people to COVID-19 myths debunked by the WHO including ones we’ve removed from our platform for leading to imminent physical harm".

In December 2020 this was updated so that people: “Receive a notification that says we’ve removed a post they’ve interacted with for violating our policy against misinformation about COVID-19 that leads to imminent physical harm. Once they click on the notification, they will see a thumbnail of the post, and more information about where they saw it and how they engaged with it. They will also see why it was false and why we removed it (e.g. the post included the false claim that COVID-19 doesn’t exist)”.

The New York Times reported that Facebook was considering applying "Correct the Record" during the US elections. Facebook officials informed Avaaz that although they considered this solution, other projects such as Facebook’s Election Information Center took priority and the platform had to make “hard choices” about what solutions to focus on based on their assessed impact. Facebook employees who spoke to the New York Times, however, claimed that “Correct the Record” was: “vetoed by policy executives who feared it would disproportionately show notifications to people who shared false news from right-wing websites”.

Regardless of the reasons behind why Facebook did not implement “Correct the Record” during the US elections, the platform has an opportunity to implement this solution moving forward, and to do so in an effective way that curbs the spread of misinformation. To not take this action would be negligence. Misinformation about the vaccines, the protests in Burma, the upcoming elections in Ethiopia, and elsewhere remains rampant on the platform.

Nearly 100 million voters saw voter fraud content on Facebook, designed to discredit the election

Those who stormed the Capitol building on Jan. 6 had at least one belief in common: the false idea that the election was fraudulent and stolen from then-President Trump. While Trump was one of the highest-profile and most powerful purveyors of this falsehood (on and offline), the rioters’ belief in this alternate reality was shaped and weaponized by more than just the former president and his allies. As in 2016, according to our research, American voters were pummeled online at every step of the 2020 election cycle with viral false and misleading information about voter fraud and election rigging, designed to seed distrust in the election process and suppress voter turnout. Facebook and its algorithm were among the lead culprits.

Despite the company’s promise to curb the spread of misinformation ahead of the election, misinformation flooded millions of voters’ News Feeds daily, often unchecked.

According to polling commissioned by Avaaz in the run-up to Election Day, 44% of registered voters (approximately 91 million people in the US) reported seeing misinformation about mail-in voter fraud on Facebook, and 35% (approximately 72 million registered voters) believed it. It is important to note that polls relying on self-reporting often exhibit bias, and these numbers therefore come with significant uncertainty.
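As a rough sanity check of the extrapolation above (a sketch: the registered-voter base below is inferred from the report’s own figures rather than stated by Avaaz):

```python
# Back-of-the-envelope check of the polling extrapolation above.
saw_share, believed_share = 0.44, 0.35
saw_estimate, believed_estimate = 91_000_000, 72_000_000

implied_base_from_saw = saw_estimate / saw_share                  # ~206.8 million
implied_base_from_believed = believed_estimate / believed_share   # ~205.7 million

print(round(implied_base_from_saw / 1e6, 1), round(implied_base_from_believed / 1e6, 1))
# Both figures imply a registered-voter base of roughly 206-207 million,
# so the two headline estimates are internally consistent.
```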

However, even with this uncertainty, the magnitude of these findings strongly suggests that Facebook played a role in connecting millions of Americans to misinformation about voter fraud. The best way to assess this would be to conduct an independent audit of Facebook, investigating the platform’s role by requesting access to key pieces of data not available to the public. This can be done while also respecting users’ privacy rights.

In addition, our research provided further evidence of Facebook’s role in accelerating voter-fraud misinformation. As reported in the New York Times, we found that over a four-week period starting in mid-October 2020, Donald Trump and 25 other repeat misinformers whom the New York Times termed “superspreaders”, including Eric Trump, Diamond & Silk, and Brandon Straka, accounted for 28.6 percent of the interactions people had with such content overall on Facebook. The influencers on that “superspreader” list posted various false claims: some said that dead people had voted, others that voting machines had technical glitches, or that mail-in ballots were not correctly counted.

This created fertile ground for conspiracy movements such as “Stop the Steal” to grow, both on and offline. On November 4, 2020, the day after Election Day, membership in public and private “Stop the Steal” Facebook groups proliferated by the hour. The largest group “quickly became one of the fastest-growing in the platform's history,” adding more than 300,000 members by the morning of November 5. Shortly thereafter, after members issued calls to violence, Facebook removed the group.

Despite the group’s removal and the company’s promise to Avaaz to investigate “Stop the Steal” activity further, we found approximately 1.5 million interactions on posts using the hashtag #StoptheSteal to spread election fraud claims, which we estimate received 30 million views during the week after the 2020 election (November 3 to November 10).

And on November 12, just days after the presidential election was called for Biden, we found that 98 public and private “Stop the Steal” groups still remained on the platform with a combined membership of over 519,000. Forty-six of the public groups had garnered over 20.7 million estimated views (over 828,000 interactions) between November 5 and November 12.


This group called Stop the Steal 3.0: The Ghost of Sherman had over 1,300 members before being closed down. Dozens of similar groups had popped up across the country. 


It took the deadly Capitol riots on Jan. 6 for Facebook to announce a ban on “Stop the Steal” content across the platform once and for all. As our research, reported on by CNN, highlighted, this ban did not address the wider universe of groups and content that sought to undermine the legitimacy of the election, and it did not account for groups that had been renamed (from “Stop the Steal”) to avoid detection and accountability. This included one group called “Own Your Vote”, which we identified was administered by Steven Bannon’s Facebook page 21.

Facebook’s piecemeal actions came too late to prevent millions of Americans from believing that voter fraud derailed the legitimacy of this election, a misinformation narrative that had been seeded on the platform for months. According to a poll from YouGov and The Economist, conducted among registered voters between November 8 and 10, 82 percent of Republicans said they did not believe that Biden had legitimately won the election, even though he had. And even after the election results were certified by Congress, polls continued to show voter fraud misinformation’s firm grip on many American voters. One poll from Monmouth University, released on January 25, 2021, showed that one in three Americans believed that Biden won because of voter fraud.

It is too late for Facebook to repair the damage it has done in helping to spread the misinformation that has led to these sorts of polling numbers, and the widespread belief in claims of voter fraud that have polarized the country and cut into the legitimacy of President Biden in the eyes of millions of Americans.

However, the platform can and must now proactively adopt policies such as “Detox the Algorithm” and “Correct the Record” that will prevent future dangerous misinformation narratives from taking hold in a similar way. Instead of moving in this direction, the platform has largely rolled back many of its policies, again putting Americans at risk.

Section Two - Avoiding a Year of Insurrection: Bandaids Are Not Enough, Biden and Congress Must Move To Regulate

The threat of misinformation targeting the United States and democracy has only increased since the election ended. Facebook’s steps of banning President Trump and the “Stop The Steal” movement will not impact the rise and reach of other misinformation superspreaders and campaigns that seek to polarize the nation, spread fear, and potentially encourage violence. In fact, efforts to hamper President Biden’s policy goals are already under way.

Recently, the Avaaz investigative team and independent fact-checkers have uncovered posts spreading false or misleading information about President Biden’s climate-related actions, including claims that he gave China control of the U.S. power grid and that his recent executive orders have caused job losses in the tens of thousands (example post). On racial justice, our team has detected disinformation targeting the victims of police brutality, much of it unlabeled on the platform at the time, as well as content targeting Vice President Kamala Harris with claims such as that she had bailed out “violent rioters” during the anti-police brutality protests in the summer of 2020 (example post). Crucially, these misinformation themes can quickly create broader narratives about individuals or policies that poison public debate, increase polarization and potentially lead to more violence.

Many of the tools Facebook deployed during the election, such as changing its News Feed algorithm to prioritize authoritative news sources, which significantly impacted the reach of serial misinformers, were rolled back in the days before the insurrection. Facebook re-instituted some of these restrictions after the violence on Jan. 6 and leading up to inauguration, but the platform has made clear they are temporary and it will roll them back again.

President Biden has highlighted that he wants to heal America, and that he wants to “defend the truth and defeat the lies.” We admire these aspirations, but believe they will be impossible to achieve unless Facebook and other social media platforms are pressured to apply the systematic solutions outlined in this report. President Biden has a historic opportunity to heal the architecture of our digital ecosystem, and to protect facts from the tidal wave of lies that has defined key political narratives over the last decade.

RECOMMENDATIONS: AVAAZ’s 10-POINT PLAN TO PROTECT DEMOCRACY:

Avaaz recommends that the new administration and Congress take immediate steps to regulate tech platforms

Support Smart, Systemic Solutions:

1. Detox the Algorithm

Social media companies are content accelerators, not neutral actors. Their “curation algorithms” decide what we see, and in what order. An effective policy for fighting the infodemic must address the way those curation algorithms push hateful, misleading and toxic content to the top of people’s feeds. There are key steps large platforms can take starting now, such as downranking repeat misinformers, and legislation will be key to set transparent, fair, and rights-based standards that platforms must abide by.

2.  Transparency and Audits

The government, researchers and the public must have the tools to understand how social media platforms work and their cumulative impact. The platforms must be required to provide comprehensive reports on misinformation, measures taken against it, and the design and operation of their curation algorithms (while respecting trade secrets). Platforms’ algorithms must also be continually, independently audited to measure impact and to improve design and outcomes.

3.  Correct the Record

When independent fact-checkers determine that a piece of content is misinformation, the platforms should show a retroactive correction to each and every user who viewed, interacted with, or shared it. If this process is optimized, it can cut belief in false and misleading information by nearly half.

4.  Reform Section 230

Amendments to Section 230 of the Communications Decency Act should be carefully tailored to eliminate any barriers to regulation requiring platforms to address misinformation. However, the Administration should not pursue the wholesale repeal of Section 230, which could have unintended consequences.

President Biden must also take immediate steps to build anti-disinformation infrastructure:

5. Adopt a National Disinformation Strategy to launch a whole-of-government approach.

6. Appoint a senior-level White House official to the National Security Council to mobilize a whole-of-government response to disinformation.

7. Prioritize the appointment of the Global Engagement Center’s Special Envoy, who is responsible for coordinating efforts of the Federal Government to recognize, understand, expose, and counter foreign state and non-state propaganda and disinformation efforts.

8. Immediately begin working with the EU to bolster transatlantic coordination on approaches to key tech policy issues, including disinformation.

9. Establish a multiagency digital democracy task force to investigate and study the harms of disinformation across major social media platforms and present formal recommendations within six months.

Lawmakers must:

10. Ensure an additional focus on Facebook’s role in undermining the 2020 elections in current congressional investigations and in the proposed Jan. 6 Commission, specifically looking at whether it was used as a tool to radicalize many Americans and/or facilitate the mobilization of radicalized individuals paving the path from election to insurrection.

Avaaz also recommends that Facebook urgently move to adopt stricter “Detox the Algorithm” and “Correct the Record” policies globally. With elections coming up across the world, from Mexico to Germany to Hong Kong, increasing tensions in Ethiopia and the Middle East and North Africa, and the distribution of the COVID-19 vaccine, hundreds of millions of Facebook users continue to be harmed due to the platform’s unwillingness to adopt these policies.

The violence of the insurrection, the Rohingya genocide in Myanmar, Russian disinformation campaigns in the 2016 elections, and the Cambridge Analytica scandal are all examples of what happens when Facebook refuses to listen to experts and does not invest in aggressive, proactive solutions that mitigate the harms caused by its algorithms and platform.

Facebook is now a key highway at the center of humanity’s information ecosystem, and with that comes a significant responsibility towards our societies and the global community - a responsibility that matters more than the company’s profits and stock performance.

Lessons for Facebook:

  • Aggressively and systemically downrank repeat misinformers. Facebook must expand its work with fact-checkers, its moderation capacity, and its expertise to apply its downranking measures more aggressively ahead of elections, instead of waiting until the week of the election, when many voters were already primed by thousands of misinformation posts. The platform’s targeted downranking policies kicked in just weeks before the vote - largely as a result of the significant investment from civil society organizations, Avaaz, and other actors in this space that focused on uncovering misinformers and getting their content fact-checked. This type of capacity may not exist ahead of future elections, and it is Facebook’s responsibility to make its systems robust. Moreover, Facebook should better and more transparently define who constitutes a “repeat offender” in its terms and conditions. Any user that is downgraded, and all of that user’s followers, should be told the reasons for the downgrade, and users should be provided with a route to appeal. These policies should be applied equally to all actors who repeatedly share misinformation. Currently, Facebook’s policies appear ad hoc, and conservative/right-leaning pages appear to receive more privileges.

  • “Correct the Record” proactively and retroactively. Facebook should follow expert advice on issuing corrections to everyone exposed to misinformation. Had it started implementing this solution years ago, it would have prevented voters from being primed with viral fake news that eroded trust in the democratic process. Avaaz estimates that, had Facebook applied expert advice on “Correct the Record” earlier in the election cycle, based on designs recommended by experts in the field, it could have significantly decreased the number of Americans who believed misinformation narratives, including misinformation about the harms of COVID-19 and about the moral integrity of presidential candidates. Even if corrections decreased the number of those who believed misinformation content by only 10%, that would still amount to millions of Americans who had seen misinformation on the platform.

  • Increase transparency. Facebook must provide more transparency to researchers and academics on the gaps in its implementation, working more openly with experts and civil society earlier in the election season to fill these gaps.

  • Improve its detection systems. Facebook should invest more in training its AI and content moderators to detect near-identical matches of false and misleading claims that its fact-checking partners debunked and flagged - and then ensure all fact-checked misinformation is equally labeled.

  • Achieve full labeling parity. Facebook should apply its fact-checking labels to all users equally, instead of providing certain privileges to politicians and key influencers.

  • Expand and optimize ad policies. Facebook did not heed its employees’ advice to treat ads from politicians the same way it treats other content. Facebook should ensure that every political ad is fact-checked, including those run by politicians, and ensure that it has a sophisticated detection system in place so that it can effectively spot and reject ads that contain debunked false and misleading claims. Additionally, it should remove disinformation-spreading ads immediately, work with ad tech partners to vet and remove repeat offenders from access to advertising, and provide fact-checks to everyone exposed to misinformation. Facebook should also transparently disclose the threshold that must be met for an account to have its advertising rights restricted and ensure all accounts are treated impartially, irrespective of political leaning or clout.

Overall, Facebook took a reactive stance on violence-glorifying and potentially inciting activity. It didn’t invest enough resources early on in creating an early warning system and instead only took aggressive action once problematic activity was reported by civil society or offline violence materialized. This is reminiscent of how Facebook responded to its platform being used to fuel genocide against the Rohingya, and to stoke violence in many countries, including India, Sri Lanka, and Ethiopia. Being reactive in this context puts lives in danger. Indeed, significant harm has already been done.

Here’s what Avaaz believes Facebook could have done not just in 2020, but in the years prior to prevent Jan. 6 and other instances of political violence:

  • Be proactive and take a human-led “zero tolerance” approach to hate and violence: Facebook should have engaged more thoroughly and much earlier with experts on militias, extremism and violence to design policies and circuit-breakers that would have prevented such movements from mobilizing and building themselves up on the platform in the first place.

  • Act early:  Facebook should have established early warning systems and taken earlier action against pages (including event pages), groups (both public and private), and accounts tied to these violent, militia groups. It took Facebook until August 19, 2020 to confirm and expand its policies to the scale of the threat - this delay gave these groups years to build their followings on the platform using misinformation narratives. Facebook only introduced stricter policies after months of civil society organizations and journalists tirelessly flagging the scale of the problem.

  • Prioritize addressing violence-glorification content: Facebook should have also detected, downgraded, and demonetized actors systematically sharing violence glorification content and misinformation, and issued corrections to those exposed.

Section Three - Further Findings from Avaaz - Snapshots of Other Facebook Missteps Throughout the 2020 Election Cycle

Throughout 2020, Avaaz’s investigative team closely monitored Facebook’s handling of some of the most troubling misinformation trends as the U.S. was embroiled in the pandemic and in the heat of the election cycle.

The following is a summary of Facebook’s promises in this critical period and snapshots of trends Avaaz produced to measure how well the company met its very own standards throughout the election cycle on:

  • Reducing the spread and impact of misinformation;
  • Stopping violence glorification and inciting activity.

Facebook’s Promises

Facebook promised to combat the spread of misinformation ahead of the US 2020 elections and in the heat of the pandemic by aggressively enforcing the following measures among others:

Issue clear fact-checking labels on misinformation content. Facebook’s fact-checking policy dictates that when its fact-checking partners identify misinformation, it is flagged and labeled. If misinformation content could “contribute to the risk of imminent violence or physical harm” Facebook states that it will remove such content.

Reduce the reach of repeat misinformers. Facebook policy states that “pages and websites that repeatedly share misinformation rated False or Altered will have some restrictions” including having their distribution reduced and/or have their ability to monetize and advertise removed.

Prohibit ads that push debunked claims. Facebook promised to prohibit ads that include claims that have been debunked by their third-party fact-checking partners. However, posts and ads from politicians are exempted from the fact-checking program.


In the heat of a critical and divisive election cycle, it was also imperative that Facebook take aggressive action on all activity that could stoke offline political violence. The company made the following promises, among others, to do just that:

Prohibit the glorification of violence (including political, ideologically-motivated violence). Facebook’s Community Standards state that the glorification of violence or celebration of others’ suffering and humiliation is prohibited.

Expand its “Dangerous Individuals and Organizations” policy to “address militarized social movements” and violence-inducing conspiracy networks. This expanded policy included Boogaloo- and QAnon-targeted measures, such as removing movement pages and groups, downranking content in the pages and groups that were restricted but not removed, banning ads that praise, support or represent militarized social movements and QAnon, and redirecting users searching for QAnon-related terms to the Global Network on Extremism and Technology.

Temporarily halt political group recommendations ahead of Election Day. Facebook rolled out a temporary measure to end the recommendation of political groups to users ahead of Election Day.


Snapshots of Facebook’s Missteps and Failures

Misinformation about voter fraud in key swing states garnered over 24 million estimated views
In October 2020, Avaaz’s investigative team found 10 false and misleading claims related to voter fraud in four swing states: Michigan, Pennsylvania, Wisconsin, and Florida. Within these 10 claims, Avaaz documented 50 posts that garnered a combined 1,551,613 interactions (and 24,793,047 estimated views) by October 7, 2020.

Copycats of misinformation went undetected by Facebook’s AI, racking up over 142 million estimated views
Just weeks before Election Day, we uncovered that copycats of misinformation content that Facebook’s very own fact-checking partners had debunked - including election and candidate misinformation - were slipping through the cracks of Facebook’s AI, racking up over 142 million views without fact-checking labels. These included posts claiming that Joe Biden is a pedophile and that Trump’s family had stolen money from a charity for children. One week after Avaaz flagged near-identical clones of misinformation to Facebook, 93% were left on the site untouched.

Facebook overrun with Spanish-language election misinformation after Election Day - not labeled at the same rate as English content
In the 48-hour period after Election Day, Avaaz found false and misleading claims in Spanish spread widely on Facebook, including debunked claims that then-President Trump had been robbed of the election and President-elect Biden had ties to socialism. We also found troubling disparities in the platform’s enforcement of its anti-misinformation policies in response. For example, we analyzed 10 of the highest-performing posts on Facebook promoting false claims from Trump’s legal team about the integrity of the election in both English and Spanish and found that while five out of 10 of the posts in English had been labeled by Facebook, just one out of 10 of the posts in Spanish had been labeled.

Over 60% of election misinformation about Georgia detected by our team went unlabeled by Facebook
In the lead-up to the Georgia Senate run-offs, we caught Facebook failing to issue fact-checking labels on 60 percent of misinformation content we analyzed. It also failed to reduce the reach of repeat misinformers who were repeatedly sharing debunked election and voter fraud claims. The top 20 Facebook and Instagram accounts we found spreading false claims that were aimed at swaying voters in Georgia accounted for more interactions than mainstream media outlets. These misinformers averaged 5,500 interactions on their posts, while the 20 largest news outlets averaged 4,100 interactions per post.

In the final days of the run-offs, we found over 100 political ads that Facebook allowed to flout its anti-misinformation policies to target Georgia voters. These ads garnered over 5.2 million impressions and promoted claims debunked by Facebook’s very own fact-checkers. Nearly half of the ads came from Senate candidates and so were exempt from fact-checking interventions and removal.

Facebook allowed top SuperPACs to target swing state voters with false and misleading ads, earning more than 10 million impressions
In September 2020, Avaaz found that Facebook allowed two Super PACs - America First Action and Stop Republicans - to spend $287,500 and $45,000 respectively to collectively run hundreds of political ads containing fact-checked misinformation, which earned more than 10 million estimated impressions, largely in swing states. Only 162 of the 481 ads flagged to Facebook were removed immediately after our report was released - all with videos that took Joe Biden’s statements on taxes out of context. After these removals, Facebook subsequently approved America First Action to run more than a hundred new false and misleading ads with different creatives, earning almost five million additional impressions. Facebook removed a number of these ads after Avaaz flagged them, but many stayed on the platform.

Health misinformation spreaders reaped an estimated 3.8 billion views on Facebook
Concerns about COVID-19 became a defining issue in the 2020 election cycle. Despite the importance of accurate health information and narratives, Avaaz found that over 40% of fact-checked COVID-19 misinformation we analyzed was not appropriately labeled as false or misleading across multiple languages, including English and Spanish. These findings flew in the face of the company’s media blitz to publicize its expanded efforts to stop the spread of COVID-19 misinformation. Facebook announced that these efforts had been “quick,” “aggressive,” and “executed...quite well.” In response to our report and its recommendations, Facebook announced it would issue retroactive notifications to all users who engaged with harmful COVID-19 misinformation, directing them to the World Health Organization’s (WHO) myth-busting site.

In a separate, broader investigation on health misinformation’s spread on Facebook, we uncovered health misinformation-spreading networks with an estimated 3.8 billion views in a one-year period (late 2019 to late 2020). We analyzed a sampling of posts shared by these networks and found that only 16%, including COVID-19 conspiracies, had a warning label. One misleading narrative we detected in this investigation, fact-checked by Lead Stories - that doctors were encouraged to overcount coronavirus deaths - had racked up over 160 million views alone. After our report, Facebook acted more aggressively on some of these pages and groups, and removed some of the content we flagged.

Methodologies

Methodology to define top-performing pages that repeatedly shared misinformation

Step 1: Identifying false and misleading posts

We analyzed fact-checked misinformation content published between October 1, 2019 - September 30, 2020 that had been labeled by the independent fact-checking organizations partnering with Facebook (listed below). We put this content into one database, regardless of topic, and reviewed it, retaining only content that was clearly rated “false” or “misleading”, or any rating falling within these categories, according to the tags used by the fact-checking organizations in their fact-check metadata.

The variation of rating description is quite broad. Here are some examples (full list available on request): “False - Mostly False - Unsupported - Misleading - Pants on Fire - No Evidence - Partly False - Distorts the Facts  - Totally Fake - Not His Quote - Bogus - Old Photo - Fake Tweet - Fake Quotes!”

Facebook’s Third Party Fact-Checkers present in the initial analysis (all US-based):

  • Lead Stories LLC
  • Politifact
  • Check Your Fact
  • Factcheck.org
  • ScienceFeedback
  • HealthFeedback
  • USA Today
  • Reuters
  • AP News
  • AFP

Step 2: Identifying pages that shared misinformation posts multiple times

To identify pages that shared misinformation posts multiple times, we used CrowdTangle 22 . This allowed us to identify which pages or verified profiles on Facebook had publicly shared the fact-checked content that we identified in Step 1.

For content that was mainly image/status/text based (e.g. memes, photos or statuses), it was necessary to search for text from the original post in order to identify public shares of the same content - or variations of it - on Facebook. We selected content that had either over 5,000 interactions or 10 individual public shares on Facebook pages, in Facebook groups or by verified profiles. We searched all external content (links to articles or videos) referred to by fact-checkers, regardless of reach (interactions or public shares).

However, if a page clearly referenced the post as false information (“false positives”) or the page retroactively corrected their post linking to the fact-check for the content (“self corrections”), we removed these pages from the study.

We also excluded posts rated “Satire” by fact-checkers and only included posts rated as false or misleading or any rating falling within these categories.

This gave us 343 different claims in content shared by Facebook pages, verified profiles, groups and users, that had been identified as false or misleading or any label falling within these categories, by Facebook’s third party fact-checking partners, and that fell within our definition of misinformation and/or disinformation.

We included a page in our data-selection of pages that repeatedly shared misinformation if we found it to have shared at least three misinformation claims from our misinformation dataset, which had been fact-checked between Oct 1, 2019 and Oct 5, 2020 AND had shared at least two misinformation posts within 90 days from each other.

We chose the threshold of at least three misinformation claims from our dataset, with two misinformation shares being at least 90 days apart, for the following reasons:

  • Fact-checkers have limited resources, and can only fact-check a subset of misinformation on Facebook. Even then, they often only have capacity to report specific iterations of those claims into Facebook’s fact-checking database to ensure the content is labeled. A page appearing in this limited dataset three times, with two misinformation posts fewer than 90 days apart, therefore signals a high likelihood that the page is not seeking to consistently share trustworthy content.
  • If the page had shared at least three separate claims, which had been later labeled by Facebook, but the page did not correct any of the content - that was a further signal that highlighted how prone the page was to sharing misinformation.
  • The top 100 most prominent pages that repeatedly shared misinformation in our list had shared false and misleading claims that were detected and fact-checked by independent fact-checkers, eight times.
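For illustration, here is a minimal sketch of the selection rule described above, assuming a hypothetical per-page list of (claim, share date) pairs; this is not Avaaz’s actual tooling.

```python
# Minimal sketch of the page-selection rule described above.
# The data structure and helper function are hypothetical illustrations.
from datetime import date

def repeatedly_shared_misinformation(posts, min_claims=3, window_days=90):
    """posts: list of (claim_id, share_date) pairs for one page, where each
    claim_id refers to a distinct fact-checked claim in the misinformation dataset."""
    distinct_claims = {claim_id for claim_id, _ in posts}
    if len(distinct_claims) < min_claims:
        return False
    # Require at least two misinformation shares within 90 days of each other;
    # checking consecutive sorted dates is enough to find the closest pair.
    dates = sorted(share_date for _, share_date in posts)
    return any((later - earlier).days <= window_days
               for earlier, later in zip(dates, dates[1:]))

example_page = [
    ("claim_A", date(2020, 2, 1)),
    ("claim_B", date(2020, 3, 15)),   # 43 days after claim_A
    ("claim_C", date(2020, 9, 30)),
]
print(repeatedly_shared_misinformation(example_page))  # True: 3 claims, 2 within 90 days
```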

We ranked the pages that shared those claims by total interactions (reactions, comments and shares) on their posts between October 13, 2019 and October 13, 2020.

We excluded Facebook pages that did not have most of their Facebook admins based in the United States.

For the purposes of this report, we have elected to not publicly share the list of the most prominent Facebook pages that repeatedly shared misinformation to ensure that the focus remains strictly on the social media platform’s role in not acting earlier to structurally fix its algorithms.

Step 3: Identifying the top 100 Facebook pages from US media

In order to identify the top 100 Facebook pages from US media, we used the AllSides Media Bias database to identify active US media outlets, and then found their corresponding Facebook pages. We ranked these by total engagement (reactions, comments and shares)  on their posts between October 13, 2019 and October 13, 2020.

We excluded Facebook pages that did not have most of their Facebook admins based in the United States.

Calculation for the 10.1 billion estimated views that Facebook could have prevented

Step 1: Calculating the impact of Facebook’s demotions

Based on CrowdTangle data 23, the most prominent pages sharing misinformation saw a 28 percent decline in interactions between the week of October 3, 2020 (66.54 million interactions) and the week of October 18 (47.78 million interactions). During the weeks between September 28 and October 25, the top-performing pages that repeatedly shared misinformation went from receiving, on average, engagement roughly equal to that of the top media outlets to receiving approximately 18.76 million fewer engagements per week, as shown in the CrowdTangle Intelligence data in Figure 8 below. Moreover, pages our team had monitored and reported to Facebook and fact-checkers were publicly claiming that they were being throttled and downranked. These signals indicated to us that Facebook had moved to demote at least a subset of the pages we had identified.


Figure 8: Date range: March 7, 2020 - December 12, 2020 / Graph generated using CrowdTangle 24

During the eight month period coinciding with the start of the pandemic, and ranging from March 2020 to October 2020, the prominent pages that repeatedly shared misinformation garnered approximately 1.84 billion interactions based on data our team aggregated using CrowdTangle.

We calculate that, had Facebook acted in March to apply the steps that reduced interactions on these pages by 28%, this downranking would have prevented 515 million interactions with content from these pages.

Step 2: Calculating the amount of views that would have been prevented

Facebook discloses the number of views for videos, but for posts containing text and image content the platform displays only the number of shares, reactions and comments. Therefore, in order to estimate viewership for text and image content, we designed a ratio based on the publicly available statistics of the Facebook pages creating or sharing the misinformation pieces in this analysis. For each page, we took into account the total number of owned and shared video views between March 1, 2020 and October 1, 2020, and then divided it by the total number of owned and shared video interactions.

Ratio for the top-performing pages that repeatedly shared misinformation

4,583,460,000 / 232,250,000 = 19.74

Applying a ratio of 19.74 to 515 million interactions gives us a total of 10.1 billion estimated views.
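For transparency, the arithmetic behind the 10.1 billion figure can be reproduced from the rounded numbers stated above (a sketch of the calculation as described, not a re-derivation from the underlying CrowdTangle data):

```python
# Reproducing the estimate from the figures stated above (using rounded inputs).
total_interactions_mar_to_oct = 1.84e9   # interactions on these pages, March-October 2020
observed_demotion_effect = 0.28          # decline observed after Facebook's October demotions

prevented_interactions = total_interactions_mar_to_oct * observed_demotion_effect
print(f"{prevented_interactions:,.0f}")  # ~515,200,000 interactions prevented

views_per_interaction = 4_583_460_000 / 232_250_000   # owned/shared video views / interactions
print(round(views_per_interaction, 2))   # 19.74

prevented_views = prevented_interactions * views_per_interaction
print(f"{prevented_views / 1e9:.2f} billion")
# ~10.17 billion with these rounded inputs; the report states this as 10.1 billion.
```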

Methodology to identify 267 violence-glorifying pages and groups with almost 32 million followers

Between June 1, 2020 and February 24, 2021, pages and groups were included in our dataset if they adopted terms or expressions in their names and/or “About” sections commonly associated with violent extremist movements in the USA (e.g. QAnon, Three Percenters, Oathkeepers, Boogaloo and variants such as "Big Igloo," etc).

Next, our team conducted a wider search to identify additional pages and groups posting relevant content, including posts that glorify violence or that call for, praise, or make light of the death or maiming of individuals for their political beliefs, ethnicity, or sexual orientation, as well as posts containing tropes and imagery commonly associated with extremist actors (e.g. hashtags such as #WWG1WGA, white supremacist imagery, militia emblems, Hawaiian print and "Big Igloo" graphics, etc.). While the final dataset based on the described methodology is by no means exhaustive, it does provide a preliminary assessment of the activity, reach and impact of these pages and groups.

Methodology to calculate 162m views on the top 100 most popular election misinformation posts/stories

Step 1: Identifying false and misleading election-related content

We analyzed election-related misinformation content published between August 11, 2020 - November 11, 2020 that was fact-checked by the same fact-checking organisations used in the analysis for the top-performing pages that repeatedly shared misinformation in our study. These are major US based fact-checkers, IFCN signatories and all part of Facebook’s Third Party Fact-Checking Program (see above for list).

We defined election-related misinformation content as the following: "Verifiably false or misleading content referring to a candidate standing for office (either presidential or other) in the November US election or referring to the electoral process."

We identified content rated “false” or “misleading”, or any rating falling within these categories, according to the tags used by the fact-checking organizations in their fact-check metadata or published on their website.

As in the analysis for the top-performing pages in our study, the variation of rating description is quite broad. Here are some examples (please contact us for details of the full list we used): “False - Mostly False - Misleading - Pants on Fire - Distorts the Facts - No Evidence - Partly False - Totally Fake - Old Photo - Fake Tweet - Wrong Year”.

Step 2: Identifying the top performing post of each piece of content

Using CrowdTangle 25, we identified the top-performing post, meaning the post that received the highest number of interactions (reactions, comments and shares). However, if a page clearly presented the post as false information (“false positives”) or the page retroactively corrected its post by linking to the fact-check for the content, we excluded that post from our analysis and included the post with the next-highest interactions instead.

Step 3: Ratio calculation

We used the same methodology to estimate views on the 100 top-performing election content stories as we did to calculate the 10.1 billion estimated views figure (please see above for the full description). For this analysis, we took into account the total number of owned and shared video views for the public pages which were top sharers of the content between August 11, 2020 and November 11, 2020 and then divided it by the total number of owned and shared video interactions.

Ratio for the Top 100:

2,936,110,000 / 229,510,000 = 12.79

Applying a ratio of 12.79 to 12,658,859 interactions garnered by this content gives us a total of 161,944,153 estimated views.
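The same arithmetic for the Top 100, using the unrounded ratio (again a sketch based only on the figures stated above):

```python
# Reproducing the Top 100 estimate from the stated figures.
video_views = 2_936_110_000
video_interactions = 229_510_000
content_interactions = 12_658_859

ratio = video_views / video_interactions       # ~12.79
estimated_views = content_interactions * ratio
print(round(ratio, 2), round(estimated_views))
# Prints ~12.79 and ~161,944,000 estimated views, consistent with the report's
# figure of 161,944,153 to within rounding of the published totals.
```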


Identifying variations on image-based content with and without fact-checking labeling

Using the CrowdTangle search app, we searched text to identify additional variations of image-based content (for example, a meme or photograph) on Facebook.

We identified multiple examples where one variation appeared on the platform where a Facebook false information label had been applied, whereas another variation of the same content appeared with a slight difference in format. Also, in at least one case we identified that the exact same content appeared on the platform with and without a false information label.

Acknowledgment

Avaaz has deployed anti-misinformation teams in the United States, Europe, Brazil, India (the Assam region), and Canada with the goal of detecting and raising awareness about misinformation on social networks and the internet.

Our work on anti-misinformation is rooted in the firm belief that fake news proliferating on social media is a threat to democracies, the health and well-being of communities, and the security of vulnerable people. Avaaz reports openly on what it finds so it can alert and educate social media platforms, regulators, and the public, and so it can more effectively advocate for smart solutions to defend the integrity of elections and our democracies from misinformation.

One of the key objectives of this report is to allow for fact-based deliberation, discussion and debate to flourish in an information ecosystem that is healthy and fair and that allows citizens, voters, politicians and policymakers to make decisions based on the best available information and data. This is also why our solutions do not call for the removal of misinformation content, but rather for corrections that provide users with facts, and for measures to downrank content and actors that have been found to systematically spread misinformation.

We see a clear boundary between freedom of speech and freedom of reach. The curation and recommendation model currently adopted by most social media platforms is designed to maximise human attention, not the fair and equal debate which is essential for humanity to rise to the great challenges of our time.

Finally, if we made any errors in this report — please let us know immediately via media@avaaz.org and share your feedback. We are committed to the highest standards of accuracy in our work. For more details, read the Commitment to Accuracy section on our website here .
Endnotes

  1. Avaaz uses the following definition of misinformation - ‘verifiably false or misleading information with the potential to cause public harm, for example by undermining democracy or public health or encouraging discrimination or hate speech’
  2. A full methodology is above, but this refers to the Facebook pages of the most popular media outlets and ranked the Facebook pages by how many interactions they received on content published between Oct 13, 2019 and Oct 13, 2020.
  3. A full methodology is above, but this refers to the 100 pages on Facebook that received the most interactions on content published between Oct 13, 2019 and Oct 13, 2020, that shared a minimum of three misinformation posts including claims or narratives debunked by fact-checkers between Oct 1, 2019 and Sept 30, 2020, and at least two misinformation posts that had been shared within 90 days from each other. All content was fact-checked by partners of Facebook’s Third Party Fact-Checking Program.
  4. A full methodology is above, but this refers to the Facebook pages of the most popular media outlets and ranked the Facebook pages by how many interactions they received on content published between Oct 13, 2019 and Oct 13, 2020.
  5. It is important to note that during this period and leading up to the elections, fact-checkers also heroically expanded their efforts, as did many other civil society organizations.
  6. Interactions for both lists shot up on November 7, 2020, which was the day the Presidential race was called by news outlets - the increase in engagement was for posts sharing that news.
  7. Data gathered from CrowdTangle Intelligence, a public insights tool owned and operated by Facebook. CrowdTangle is a Facebook-owned tool that tracks interactions on public content from Facebook pages and groups, verified profiles, Instagram accounts, and subreddits. It does not include paid ads unless those ads began as organic, non-paid posts that were subsequently “boosted” using Facebook’s advertising tools. It also does not include activity on private accounts, or posts made visible only to specific groups of followers.
  8. Data from CrowdTangle, a public insights tool owned and operated by Facebook.
  9. Data from CrowdTangle, a public insights tool owned and operated by Facebook.
  10. Data from CrowdTangle, a public insights tool owned and operated by Facebook. Data gathered via CrowdTangle Intelligence and adapted for the needs of this research.
  11. Data from CrowdTangle, a public insights tool owned and operated by Facebook. Data gathered via CrowdTangle Intelligence and adapted for the needs of this research.
  12. Data from CrowdTangle, a public insights tool owned and operated by Facebook. Data gathered via CrowdTangle Intelligence and adapted for the needs of this research.
  13. Data from CrowdTangle, a public insights tool owned and operated by Facebook. Data gathered via CrowdTangle Intelligence and adapted for the needs of this research.
  14. Data from CrowdTangle, a public insights tool owned and operated by Facebook.
  15. Data from CrowdTangle, a public insights tool owned and operated by Facebook. Data gathered via CrowdTangle Intelligence and adapted for the needs of this research.
  16. Please note this section is based on separate research to our investigation into the top-performing pages in the previous section.
  17. Please note this section is based on separate research to our investigation into the top-performing pages in the previous section.
  18. See “Methodology” section for how these pages and groups were identified.
  19. https://www.nbc29.com/2020/05/13/virus-restrictions-fuel-anti-government-boogaloo-movement/
  20. According to our methodology as explained in the section at the end of this document.
  21. See https://edition.cnn.com/2020/11/13/business/stop-the-steal-disinformation-campaign-invs/index.html The article states - on November 5, Bannon started his own "Stop the Steal" Facebook group; the name was changed to "Own Your Vote" later. It was not removed by Facebook, but the social media company did later remove several other pages affiliated with Bannon.
  22. Data from CrowdTangle, a Facebook-owned tool that tracks interactions on public content from Facebook pages and groups, verified profiles, Instagram accounts, and subreddits. It does not include paid ads unless those ads began as organic, non-paid posts that were subsequently “boosted” using Facebook’s advertising tools. It also does not include activity on private accounts, or posts made visible only to specific groups of followers.
  23. Data from CrowdTangle, a public insights tool owned and operated by Facebook.
  24. Data from CrowdTangle, a public insights tool owned and operated by Facebook. Data gathered via CrowdTangle Intelligence and adapted for the needs of this research.
  25. Data from CrowdTangle, a public insights tool owned and operated by Facebook.