Facebook used experimental audio tech to shut down videos that AI missed after the New Zealand mosque massacres

Mark Zuckerberg. Photo by Chip Somodevilla/Getty Images

Facebook's handling of the New Zealand massacre footage has come under fire from politicians across the world.

  • Facebook has said it deployed experimental audio tech to support its AI in detecting and blocking copies of the New Zealand mosque shootings video.
  • In a blog post published on Wednesday, the social media company admitted its AI for detecting harmful content is "not perfect."
  • Facebook has faced widespread criticism for its handling of the footage, with New Zealand's prime minister calling for change.

Facebook has said it used experimental audio technology to help catch copies of the New Zealand mosque shootings video that its AI failed to detect.

In a blog post published on Wednesday, the social media giant said it "employed audio matching technology to detect videos [of the shooting] which had visually changed beyond our systems' ability to recognize automatically but which had the same soundtrack."

Although the blog post did not go into detail about how the audio technology works, it appears to have helped catch altered versions of the video that Facebook's visual detection systems struggled to recognize, such as footage filmed off a computer screen and then uploaded to Facebook.
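
Facebook has not said how its audio matching works, but the underlying idea, audio fingerprinting, is well established: reduce a soundtrack to a compact fingerprint that survives visual changes such as filming a screen, then compare new uploads against fingerprints of known videos. The sketch below is a simplified, hypothetical illustration of that idea; the spectrogram-peak scheme and the function names are assumptions for the example, not details of Facebook's system.

```python
# Illustrative only: a toy audio fingerprint built from spectrogram peaks.
# Facebook's actual matching system is not public; this sketch just shows why
# a re-filmed copy with the same soundtrack can still be matched.
import numpy as np
from scipy.signal import spectrogram


def audio_fingerprint(samples: np.ndarray, sample_rate: int = 16_000) -> np.ndarray:
    """Reduce a mono audio track to a compact, comparison-friendly fingerprint."""
    # Short-time spectrogram: rows are frequency bands, columns are time frames.
    _, _, spec = spectrogram(samples, fs=sample_rate, nperseg=1024)
    # For each time frame, note which frequency band is loudest.
    peak_bands = spec.argmax(axis=0)
    # Keep only the up/down movement of those peaks, which is fairly robust to
    # the volume changes and mild distortion introduced by re-recording.
    return (np.diff(peak_bands) > 0).astype(np.uint8)


def same_soundtrack(fp_a: np.ndarray, fp_b: np.ndarray, threshold: float = 0.9) -> bool:
    """Treat two fingerprints as a match if most of their bits agree."""
    n = min(len(fp_a), len(fp_b))
    return bool(np.mean(fp_a[:n] == fp_b[:n]) >= threshold)


# Hypothetical usage: compare a new upload's audio against the fingerprint of
# the known attack video and block it on a match, even when the visuals differ.
# if same_soundtrack(audio_fingerprint(upload_audio), known_fingerprint):
#     block_upload()
```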

"Many people have asked why artificial intelligence didn't detect the video from last week's attack automatically," said Guy Rosen, Facebook's vice president of integrity. "AI has made massive progress over the years and in many areas, which has enabled us to proactively detect the vast majority of the content we remove. But it's not perfect."

He added: "AI systems are based on 'training data', which means you need many thousands of examples of content in order to train a system that can detect certain types of text, imagery or video.

"This approach has worked very well for areas such as nudity, terrorist propaganda and also graphic violence where there is a large number of examples we can use to train our systems.

"However, this particular video did not trigger our automatic detection systems. To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare."

READ MORE: Facebook says no one reported the New Zealand mosque shootings live video. But a reporter says he raised the alarm mid-attack.

Rosen also said Facebook's AI systems can struggle to distinguish real-world footage from video game footage.

"Another challenge is to automatically discern this content from visually similar, innocuous content - for example if thousands of videos from live-streamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground."

New Zealand Prime Minister Jacinda Ardern speaks to several hundred well-wishers in front of Parliament in Wellington, New Zealand, on Thursday, Oct. 26, 2017, the day she was sworn in as prime minister. AP Photo/Nick Perry

Wednesday's update represents Facebook's latest attempt to save face after its handling of the Christchurch massacre received criticism from politicians around the world.

Australian Prime Minister Scott Morrison expressed worry over the "unrestricted role" of internet technologies in terror attacks, while UK Home Secretary Sajid Javid tweeted that Facebook, Google, Twitter, and YouTube need to "do more ... to stop violent extremism being promoted on [their] platforms."

Meanwhile, New Zealand Prime Minister Jacinda Ardern has said she wants to ensure the "horrendous" footage cannot be viewed and has been in talks with Facebook's chief operating officer, Sheryl Sandberg.

In a speech to Parliament on Tuesday, Ardern ramped up her criticism of Facebook. "We cannot simply sit back and accept that these platforms just exist and what is said is not the responsibility of the place where they are published," she said. "They are the publisher, not just the postman."
