Facebook isn’t biased against you

Your friends are

Much has been made lately of social media “bias.” Senator Ted Cruz is holding hearings about it. The White House launched an online tool to report it. Senator Josh Hawley has made it the centerpiece of his time so far in Washington. And, as Megan Hansen and I pointed out last year, the cries of social media bias seem to run both ways across the political spectrum. Companies like Facebook, for example, are being blamed both for “catering to conservatives” and for acting as a network of “incubators for far-left liberal ideologies.”

When it comes down to it, however, the data doesn’t seem to support these claims. As it turns out, it isn’t Facebook hunting down particular posts but other users flagging posts as violations of the platform’s community standards relating to hate speech, bullying, and harassment. Unpacking the numbers in Facebook’s latest transparency report from the first quarter of 2019 suggests that the anecdotes of censorship aren’t lining up with reality.

Facebook’s regularly released transparency report includes a specific section called the “Community Standards Enforcement Report.” As the name suggests, this is Facebook’s rundown of how it’s enforcing the company’s community standards, from rules on nudity to fake accounts to spam to terrorist propaganda. It also covers hate speech, bullying, and harassment.

These latter categories (hate speech, bullying, and harassment) are the ones most important to understanding how the company identifies and acts on posts it believes violate the community standards. They are also the ones that deal most specifically with the kinds of speech that many claim the company is inappropriately censoring. It also turns out that they’re the hardest to find and flag.

In the first quarter of 2019, the company acted on approximately 6.6 million posts that fell within the categories of hate speech, bullying, and harassment. That includes around 4 million posts for hate speech (chart on the left) and 2.6 million for bullying and harassment (chart on the right).

More important than the number is how these posts came to Facebook’s attention. While Facebook’s content moderation tools are really good at proactively identifying nudity (96.8% of the 19.4 million posts the company acted on were found and flagged before users reported them), graphic content (98.9% of 33.6 million posts), and terrorist content (99.3% of 6.4 million posts), the company relies heavily on users to report hate speech, bullying, and harassment.

In fact, as the charts below show, a majority (about 55%) of the 6.6 million posts acted on in these categories were user reported. Specifically, looking at the chart on the left, around 1.4 million (34.6%) of the 4 million posts flagged for hate speech were user reported. From the chart on the right, about 2.2 million (85.9%) of the 2.6 million posts flagged for bullying and harassment were user reported.
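For readers who want to check the math, here is a minimal back-of-the-envelope calculation in Python using the rounded figures cited above. These are approximations drawn from this article, not the exact counts in Facebook’s report, so the results are only illustrative:

```python
# Back-of-the-envelope check of the user-reported shares, using the
# rounded figures cited in this article (approximate values, not the
# exact counts from Facebook's Q1 2019 report).

hate_total = 4_000_000          # posts acted on for hate speech
hate_user_share = 0.346         # share of those that were user reported

bully_total = 2_600_000         # posts acted on for bullying and harassment
bully_user_share = 0.859        # share of those that were user reported

hate_user = hate_total * hate_user_share       # ~1.4 million
bully_user = bully_total * bully_user_share    # ~2.2 million

combined_total = hate_total + bully_total                        # ~6.6 million
combined_user_share = (hate_user + bully_user) / combined_total

print(f"User-reported hate speech posts:          ~{hate_user:,.0f}")
print(f"User-reported bullying/harassment posts:  ~{bully_user:,.0f}")
print(f"Combined user-reported share:              {combined_user_share:.0%}")  # ~55%
```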

For those who claim that it is Facebook actively regulating speech and censoring posts, these stats should come as a wake-up call. While it’s easy to claim Facebook is out to limit certain speech, the data suggests that a majority of the takedowns in the areas of hate speech, bullying, and harassment are user-reported.

In this latest transparency report, Facebook also provides new data on its process for appealing and restoring content to correct mistakes in enforcement decisions. If these stats show one thing, it is that the company’s enforcement actions are surprisingly accurate. Overall, of the posts that Facebook acted on for hate speech, bullying, and harassment, it restored around 3.5%. That amounts to restoring approximately 235,000 of the 6.6 million posts. In other words, Facebook stood by its initial determination around 96.5% of the time.

In addition, we have some insight into the appeals users made in response to Facebook’s actions. Of the 6.6 million posts acted on, nearly 1.6 million (23%) of those actions were appealed. Posts acted on for hate speech were appealed at a slightly higher rate (28%) than those for bullying and harassment (19%).
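A rough sketch of the arithmetic behind those rates, again using the rounded, approximate figures above (which is why the results land slightly off the report’s exact percentages):

```python
# Rough check of the restoration and appeal rates, using the rounded
# figures above (approximate values; small gaps versus the cited
# percentages come from rounding, e.g. "nearly 1.6 million").

total_acted_on = 6_600_000   # hate speech + bullying and harassment actions
restored = 235_000           # posts later restored
appealed = 1_600_000         # actions appealed by users

restore_rate = restored / total_acted_on   # ~3.6% with these inputs (cited as ~3.5%)
stood_by_rate = 1 - restore_rate           # ~96.4% with these inputs (cited as ~96.5%)
appeal_rate = appealed / total_acted_on    # ~24% with these inputs (cited as 23%)

print(f"Restoration rate:        {restore_rate:.1%}")
print(f"Initial decision upheld: {stood_by_rate:.1%}")
print(f"Appeal rate:             {appeal_rate:.1%}")
```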

There are likely a couple of explanations for why Facebook sees higher rates of appeals for hate speech than for bullying and harassment. First, nearly two-thirds of the posts acted on for hate speech were found and flagged by Facebook itself. As a result, the higher appeal rate might reflect just how hard it is for the company’s artificial intelligence to understand the context of a particular post. According to the community standards, the same language could be deemed objectionable or not depending on how and why it is used. Here’s how Facebook explains it:

Sometimes people share content containing someone else’s hate speech for the purpose of raising awareness or educating others. In some cases, words or terms that might otherwise violate our standards are used self-referentially or in an empowering way. People sometimes express contempt in the context of a romantic break-up. Other times, they use gender-exclusive language to control membership in a health or positive support group, such as a breastfeeding group for women only. In all of these cases, we allow the content but expect people to clearly indicate their intent, which helps us better understand why they shared it. Where the intention is unclear, we may remove the content.

Second, there is little disagreement over what constitutes nudity. People may disagree with the company’s stance on when nudity is appropriate for artistic reasons, but few would debate whether a person is actually nude. With hate speech, however, the lines are much less clear. Vanity Fair recently profiled the company’s nearly impossible struggle over the issue: several dozen Facebook deputies, including engineers, lawyers, and PR people, trying to create a single set of standards to govern every conversation among the company’s 2 billion users worldwide.

Even given this difficult position, the company seems quite accurate in its decision making, at least if measured by how often it affirms its initial decision. Looking at the nearly 1.6 million appeals, for example, Facebook affirmed about 1.4 million of its decisions. That gives the company an accuracy rate of roughly 87% by this measure, meaning its system reversed itself only about 13% of the time.
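The same kind of quick calculation, with the rounded figures above, shows where that accuracy figure comes from:

```python
# Rough check of the affirmation rate on appeal, using the rounded
# figures above (approximate values, not the exact report counts).

appeals = 1_600_000    # actions appealed
affirmed = 1_400_000   # initial decisions Facebook stood by after review

affirmation_rate = affirmed / appeals   # ~87.5% with these rounded inputs
reversal_rate = 1 - affirmation_rate    # ~12.5% (the article rounds to 87% / 13%)

print(f"Affirmed on appeal:  {affirmation_rate:.1%}")
print(f"Reversed on appeal:  {reversal_rate:.1%}")
```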

There are a few takeaways for those concerned about bias in social media and the need for some external regulation of speech on these platforms. But perhaps the most important is that users are the largest source of flagged material on the platform when it comes to bullying and harassment. This should be a strong indication that it is the users, rather than the company’s employees, who are the key driver of enforcement in this area. That’s not to say that every report made by users is worthwhile (many certainly aren’t), but a majority of what’s being acted on is brought to Facebook’s attention by users rather than the company seeking out particular posts.

Moreover, while companies like Facebook struggle over the details of what is acceptable on their platforms, and while they do hold real power over the speech of more than 2 billion people around the world, critics should withhold accusations of bias or censorship for now. As Vanity Fair demonstrated so clearly, the struggle over whether a phrase like “men are scum” is appropriate is not the result of some underlying bias or confusion. Speech codes are simply impossible to design at the scale at which Facebook operates.

But that should not distract from the underlying truth here: If the data from Facebook’s latest transparency report is any indication, it isn’t Mark Zuckerberg who doesn’t like what you’re posting. It’s your Facebook friends.

CGO scholars and fellows frequently comment on a variety of topics for the popular press. The views expressed therein are those of the authors and do not necessarily reflect the views of the Center for Growth and Opportunity or the views of Utah State University.