TikTok is automatically removing videos showing nudity, sexual activity, violence, and other content that violates its safety policy for minors

Human staff will have more time to focus on nuanced content like hate speech and misinformation. Rafael Henrique/SOPA Images/LightRocket/Getty Images
  • New technology from TikTok will automatically remove videos that violate its safety policy for minors.
  • This will enable human staff to focus on more nuanced content like hate speech and misinformation.
  • Accounts that commit zero-tolerance violations, such as posting child sexual abuse material, will be removed.

TikTok is rolling out technology that will automatically remove videos showing nudity, sexual activity, violence, and other content that violates its safety policy for minors.

The company said Friday that it will partly automate the review system that blocks these sorts of videos, along with graphic content, illegal activity, and other material that violates its minor-safety policy, in the US and Canada.

TikTok is making the move in part to reduce the number of distressing videos that its human moderators have to review, said Eric Han, TikTok's head of US safety. This will allow them to spend more time on nuanced videos involving hate speech, bullying, and misinformation, he said.


Prior to this move, TikTok's human moderators reviewed all videos before making decisions on removal.

TikTok acknowledged that no technology can be entirely accurate, so creators will be immediately notified and given a reason if their video is removed. They can then appeal the decision.


In the past, staff at social media giants like Facebook have developed post-traumatic stress disorder from reviewing horrific content as part of their jobs. One former Facebook moderator, who had to review about 1,000 pieces of content per night, sued the company over the toll of filtering out disturbing material.

TikTok said its safety team would continue to review community reports and appeals and remove content that violates its policies. Repeated violations could result in a 24- to 48-hour suspension of an account's ability to upload videos, comment, or edit its profile, the company said.

An account that commits a zero-tolerance violation, such as posting child sexual abuse material, will be automatically removed from the platform.

TikTok said it had initially tested the automated technology in other countries, including Brazil and Pakistan.

TikTok identified and removed more than 8.5 million videos in the US in the first quarter of 2021. At that volume, even a small error rate under automated review could mean thousands of videos removed by mistake.


The automation is expected to roll out "over the next few weeks," TikTok said.
