Facebook says it will now ban 'implicit hate speech' like blackface and anti-Semitic stereotypes

Facebook Chairman and CEO Mark Zuckerberg testifies at a House Financial Services Committee hearing in Washington, U.S., October 23, 2019. REUTERS/Erin Scott
  • Facebook will now ban content that includes "implicit hate speech" like blackface or anti-Semitic stereotypes, the company announced Tuesday.
  • Facebook has faced mounting pressure from civil rights groups to ramp up its enforcement of anti-hate speech policies, culminating in an advertiser boycott last month over hate speech on the platform.
  • The company said Tuesday that it has been ramping up its artificial intelligence that detects hate speech — 95% of the hate speech it removed between April and June was detected by its AI.
  • Facebook will also undergo a quarterly third-party audit of its hate speech moderation starting in 2021.

Facebook is tweaking its community standards to ban "implicit hate speech" on its platforms, and will soon take down content that violates the policy, such as blackface and anti-Semitic stereotypes.

Facebook has faced mounting scrutiny from civil rights groups in recent months over concerns about the spread of hate speech and misinformation on its platform. Under pressure from activists, dozens of advertisers joined a boycott of Facebook's ad platform in the past two months, likely costing Facebook millions in revenue.

The new policy, developed in consultation with outside experts, is meant to remove offensive content that previously skirted Facebook's ban on hate speech, vice president of content policy Monika Bickert told reporters Tuesday.


"This type of content has always gone against the spirit of our hate speech policy, but it can be really difficult to take concepts especially those that are commonly expressed in imagery and define them in a way that allows our content reviewers based around the world to consistently and fairly identify violations," Bickert said.

She added that the policy would not affect certain content with news value, like posts displaying a politician's use of blackface.


The company said on Tuesday that it has ramped up its artificial intelligence that detects hate speech — 95% of the hate speech it removed between April and June was detected by its AI, up from 89% in the first quarter of 2020, according to its latest transparency report.

Facebook also removed more posts overall in the past quarter: it took down 22.5 million posts for violating its community standards between April and June, up from 9.6 million in the first quarter of 2020.

But some of Facebook's content moderation has faltered due to working conditions under the COVID-19 pandemic. Because fewer human content moderators have been able to return to offices — and Facebook doesn't allow content moderators to work from home due to the nature of their work — the company took fewer actions on some categories in the past quarter. Those include self-harm and child exploitative content, vice president of integrity Guy Rosen told reporters Tuesday.

Rosen added that, to address concerns about its content moderation, Facebook will now undergo a quarterly third-party audit of its hate speech moderation starting in 2021.

You can read Facebook's full transparency report for the second quarter of 2020 here.
