Facebook is dialling up punishments for users who abuse live video after the Christchurch massacre
- Facebook announced Tuesday that it is tightening its rules around live streaming following the Christchurch mosque shootings.
- Facebook will now act on a "one strike" basis for live videos, blocking users immediately from broadcasting live if they seriously violate Facebook's policies.
- The company is also investing $7.5 million in research to develop new ways to detect videos like the Christchurch footage when uploaders use editing tricks to slip past its automated systems.
- It means Facebook is eschewing more radical suggestions like slapping a time delay on live videos.
Facebook is taking steps to prevent atrocities like the Christchurch massacre from being broadcast live on its platform.

When the Christchurch shooter opened fire on Muslim worshippers in March, he streamed the attack live on Facebook. Facebook said that in the first 24 hours it removed 1.5 million copies of the video, with 1.2 million blocked at upload. Other platforms such as YouTube similarly struggled to stem the tide of videos.
Read more: YouTube's human moderators couldn't stem the deluge of Christchurch massacre videos, so YouTube benched them
Long after the attack, however, copies of the video were still to be found on Facebook and Instagram, and officials in New Zealand and beyond were sharply critical of Facebook's response.

VP of Integrity Guy Rosen published a blog post on Tuesday outlining the new rules governing violations on Facebook Live, saying Facebook will now operate on a "one strike" basis.
"From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time - for example 30 days - starting on their first offense," Rosen wrote. He gave an example of a serious violation as sharing a link to a terrorist group's statement with no context.
Facebook has previously said that more radical measures, such as a time delay on live broadcasts, would be impractical. In a blog post last month, the social network said a time delay would be difficult because there are millions of live streams a day, and it would "further slow down" the reporting and review of harmful videos, delaying first responders.
The problem isn't isolated to bad actors deliberately posting abusive content. "One of the challenges we faced in the days after the Christchurch attack was a proliferation of many different variants of the video of the attack. People - not always intentionally - shared edited versions of the video, which made it hard for our systems to detect," said Rosen.

To combat this, Rosen announced that the company is investing $7.5 million in research with The University of Maryland, Cornell, and Berkeley, with a view to developing new and better techniques for automatically spotting manipulated footage.
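Rosen doesn't describe Facebook's matching systems, but the underlying difficulty can be illustrated with hashing. A cryptographic hash changes completely under any edit, so it can only catch exact re-uploads; perceptual hashes tolerate small changes like re-encoding or brightening, which is why attackers turn to more aggressive edits to evade them. The sketch below is purely illustrative (a minimal difference-hash over a toy grayscale frame, not Facebook's actual technique):

```python
import hashlib

def dhash_bits(pixels):
    """Difference hash: one bit per horizontally adjacent pixel pair.

    pixels is a 2D list of grayscale values, assumed already
    downscaled (e.g. 8 rows x 9 columns -> 64 bits).
    """
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left < right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# Toy "frame" and a uniformly brightened copy, standing in for a re-encode.
original = [[(r * 9 + c) * 3 % 256 for c in range(9)] for r in range(8)]
brightened = [[min(p + 10, 255) for p in row] for row in original]

# Exact (cryptographic) hashes differ completely after the edit...
exact_orig = hashlib.sha256(bytes(p for row in original for p in row)).hexdigest()
exact_copy = hashlib.sha256(bytes(p for row in brightened for p in row)).hexdigest()

# ...while the perceptual hash is unchanged, because brightening
# preserves the relative ordering of adjacent pixels.
dist = hamming(dhash_bits(original), dhash_bits(brightened))
```

Here `dist` comes out to 0 while the SHA-256 digests differ entirely. Cropping, overlays, or recoloring can flip many of those comparison bits at once, which is the kind of "editing tricks" evasion the funded research is meant to address.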
"This work will be critical for our broader efforts against manipulated media, including deepfakes (videos intentionally manipulated to depict events that never occurred). We hope it will also help us to more effectively fight organized bad actors who try to outwit our systems as we saw happen after the Christchurch attack," he said.