Facebook is still failing to enforce its own rules against election disinformation, conspiracy theories, hate speech, and other policies, another new analysis finds

Mark Zuckerberg testifies on Capitol Hill in Washington, DC, October 23, 2019. REUTERS/Erin Scott
  • Facebook has been unable, or in some cases unwilling, to enforce its own policies prohibiting certain types of content, according to an analysis by The Wall Street Journal on Thursday.
  • The Journal reported hundreds of posts to Facebook that the company later confirmed violated its policies, but the platform still allowed many to remain up.
  • In other cases, Facebook's AI systems were slower than promised or inconsistent when reviewing content, which a spokesperson told The Wall Street Journal was due to the company prioritizing content that might go viral.
  • Facebook's size already makes content moderation a massive challenge for the platform, but the coronavirus pandemic, a heated political climate, and upcoming elections have all exacerbated it.

Under pressure both internally and externally to crack down harder on toxic content, Facebook has rolled out a series of new policies in the last year on everything from hate speech to coronavirus misinformation to foreign election interference.

But a new analysis from The Wall Street Journal on Thursday revealed that Facebook is still failing to enforce those policies consistently, quickly, or sometimes even at all.

Users have the ability to report content on Facebook they believe violates its policies. But of the 150 pieces of such content that The Journal reported this way — and that Facebook later confirmed violated its rules — the company's review process didn't take down the content in roughly 25% of cases.


In September, The Wall Street Journal also reported 276 pieces of content for apparently violating Facebook's rules against promoting violence or spreading dangerous misinformation, but its content moderation system removed the posts in just 32 cases. After contacting Facebook directly about the remaining 244 posts, The Wall Street Journal reported that the company determined more than 50% of those should also have been removed, and eventually took down many of them.

Facebook spokesperson Sarah Pollack told The Wall Street Journal that its analysis wasn't reflective of the overall accuracy of its review systems, and that the company's "priority is removing content based on severity and the potential for it going viral."


Facebook did not respond to a request for comment on this story.

While around 50% of reported content was reviewed by Facebook's systems within 24 hours as it initially promised, some posts — including some inciting violence or promoting terrorism — still hadn't been reviewed as much as two weeks later, according to The Journal.

Pollack told the newspaper that Facebook has become more reliant on AI systems due to the coronavirus pandemic and improving AI technology.

Facebook, Google's YouTube, and other social media platforms have been eager to bring moderators back into the office after the pandemic forced them to rely more heavily on automated systems, since many companies restrict moderators from working from home due to security and privacy concerns.

This is not the first time Facebook has faced scrutiny over the degree to which it actually enforces new policies. Facebook came under fire just a week after rolling out its anti-militia policy when it refused to take action against accounts and groups promoting violence against anti-police-brutality protesters in Kenosha, Wisconsin — where a teenager eventually fatally shot two protesters — despite multiple reports from users.


The Wall Street Journal also reported in August that Facebook refused to enforce its hate speech policies in India after a top executive there overruled moderators.

A Facebook whistleblower also penned a memo documenting the company's pattern of neglecting election disinformation and coordinated attempts to undermine democratic processes around the world, particularly outside of the US and Western Europe, BuzzFeed News reported in September.

"It's an open secret within the civic integrity space that Facebook's short-term decisions are largely motivated by PR and the potential for negative attention," she wrote, according to BuzzFeed News.

Facebook has similarly acted on many controversial pieces of content only after media outlets or other outside observers reported it — and sometimes not even then, as a report from Muslim Advocates concluded last month.

Most recently, the company came under fire this week for attempting to limit the reach of controversial New York Post stories that relied on dubious reporting, both from conservatives who believed the platform was engaging in unfair censorship and from others who called the attempt ineffective: the story was shared 300,000 times even after Facebook intervened.


Facebook's massive content moderation challenge has taken on increased urgency amid a contentious political environment in which mis- and disinformation have circulated widely, from sources ranging from President Donald Trump to Russia, China, and Iran.
