Twitter says it's been making progress in ridding its site of spam and abuse. The platform — which has 300 million monthly users, according to Recode — said in an April blog post that rather than relying on users to flag abuse, it has implemented technology that automatically flags offensive content.
Twitter said that "by using technology, 38% of abusive content that's enforced is surfaced proactively for human review instead of relying on reports from people using Twitter."
Yet Twitter just can't seem to figure out how to block white supremacists. Vice reported that a Twitter employee said "Twitter hasn't taken the same aggressive approach to white supremacist content because the collateral accounts that are impacted can, in some instances, be Republican politicians."
"The employee argued that, on a technical level, content from Republican politicians could get swept up by algorithms aggressively removing white supremacist material," Vice reported.
Source: Vice, Twitter, Recode
Got a tip on content moderation or product regulation at major tech companies? Reach this article's writer, Rebecca Aydin, via email at raydin@businessinsider.com.