TikTok's struggles to delete a video of an apparent suicide show its powerful algorithm can spread pernicious content fast

  • TikTok has struggled to delete footage of an apparent suicide that has been circulating since Sunday night.
  • The footage was first livestreamed to Facebook, but spread to other platforms including TikTok.
  • The features that make TikTok so compelling also help pernicious content spread fast, and this could prove to be the app's Christchurch moment.
  • "It was only a matter of time before a video like that would surface mainstream on TikTok," said Eva Fog, an expert on children and technology.

TikTok users have been warning fellow app users to quickly scroll past a video that appears to show a 33-year-old man, Ronnie McNutt, dying by suicide.

The video was first livestreamed to Facebook, which did not immediately remove the footage despite being alerted to its distressing nature.

It has been reposted on a number of sites and platforms in recent days, including TikTok. Versions of the footage were still available on TikTok as of Wednesday morning, when Business Insider checked.

Even though the video originated on another platform, it could well prove to be TikTok's "Christchurch moment", echoing Facebook's struggle in 2019 to stop reposts of a livestreamed video of the mass shooting in Christchurch, New Zealand, from circulating on its platform.

TikTok's compelling features make it easy to spread distressing content

The challenge for TikTok is that the features that make its app so addictive and enthralling, namely full-screen, autoplaying video that drops users straight into the action, are also what make the spread of this footage so pernicious.

That TikTok's algorithm tests out vast numbers of videos on users in quick succession also makes it more difficult to stop the spread of the video.

Rather than opting in to seeing the video, as is the case on sites like 4chan, LiveLeak and Reddit, TikTok users are having the video foisted upon them.

Some malicious users are inserting the clip of McNutt's death into otherwise innocuous videos in an attempt to trick users into watching the footage. Others are using GIFs of the footage as their profile pictures.

It's particularly troubling given TikTok's young userbase: 43% of the app's users in the UK are under the age of 14, according to internal data leaked to The New York Times.

"It is an important challenge to win this cat-and-mouse game, as tech firms try to take down this horrific content while malicious actors continue to spread it," said Andy Burrows, head of child safety at the NSPCC, a UK children's charity.

"It was only a matter of time before a video like that would surface mainstream on TikTok," added Eva Fog, an expert and digital educator focused on children, gender and technology.

"The question is: How did a suicide pass through the otherwise rigorous screening, and how was it allowed to go viral? The cleverness of the TikTok algorithm is as much of a weakness as it is a strength, as we've seen many times with videos being taken down for no obvious reason, while others stay up."

Bigger platforms adopt moderation techniques used for terrorist or child abuse content

One of the ways social media platforms are trying to stop the spread of such videos is to adopt techniques originally developed to slow the reach of terrorist propaganda and images of child abuse.

In 2016, Facebook, Microsoft, Twitter and YouTube announced an initiative to curb the spread of terrorist content.

The companies agreed to develop a database of hashes, digital fingerprints that identify specific videos, which they would share within the industry to better tackle reposts of questionable material.
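To illustrate the idea, here is a minimal sketch in Python of how an upload might be checked against a shared hash database. The database contents, file paths, and function names are placeholders invented for this example, and the plain SHA-256 digest below is a simplification: real systems typically rely on perceptual hashes that still match a clip after it has been cropped or re-encoded.

```python
import hashlib
from pathlib import Path

# Hypothetical shared database of fingerprints for known harmful videos.
# In practice, platforms exchange perceptual hashes that survive re-encoding;
# a plain SHA-256 only matches byte-identical files.
KNOWN_BAD_HASHES = {
    "<hex digest of a known harmful video>",  # placeholder entry
}


def fingerprint(video_path: Path) -> str:
    """Compute a digital fingerprint (here, a SHA-256 hash) of a video file."""
    digest = hashlib.sha256()
    with video_path.open("rb") as f:
        # Read the file in 1 MB chunks so large videos don't load into memory at once.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def should_block(video_path: Path) -> bool:
    """Return True if the uploaded file matches a known-bad fingerprint."""
    return fingerprint(video_path) in KNOWN_BAD_HASHES
```

In production, this check would run server-side at upload time, and the value of sharing the database across companies is that a clip blocked on one platform can be recognised and blocked on another before it spreads.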

That database, overseen by the Global Internet Forum to Counter Terrorism (GIFCT), today contains more than 200,000 hashes. A separate initiative to assess content circulating online from terror attacks was put into action 70 times between March 2019 and March 2020. TikTok is not a member of GIFCT, though it has developed a Content Advisory Council.

TikTok has its own automated system that detects and flags clips that violate its policies against displaying, praising, glorifying or promoting suicide.

Its moderation policies appear to be more interventionist than YouTube's: it took down nearly 50 million videos in the second half of 2019, 98.2% of which were removed proactively, before being reported by users. Nine in 10 of those videos were never seen by users.

By comparison, 50% of YouTube's removed videos had been seen by someone before action was taken.

Around a quarter of the videos removed from TikTok in December 2019 violated the app's suicide, self-harm or dangerous acts policy, or contained violence or graphic images.

A TikTok spokesperson said in a statement provided to reporters: "We are banning accounts that repeatedly try to upload clips, and we appreciate our community members who've reported content and warned others against watching, engaging, or sharing such videos on any platform out of respect for the person and their family.

"If anyone in our community is struggling with thoughts of suicide or concerned about someone who is, we encourage them to seek support, and we provide access to hotlines directly from our app and in our Safety Center."
