Facebook bans deepfake videos and manipulated content from site

FILE PHOTO: Silhouettes of laptop users are seen next to a screen projection of a Facebook logo, March 28, 2018. REUTERS/Dado Ruvic/File Photo
  • Facebook said in an announcement on Monday that deepfake videos and manipulated media will be banned from the social media site.
  • The company said in a statement that it was taking a multi-pronged approach to address the issue, including investigating deceptive behaviors in AI-generated content and partnering with academia, government, and industry to better identify manipulated content.
  • Videos that are flagged will be subject to review by third-party fact-checkers to determine whether the content is false.
  • In September, Facebook CTO Mike Schroepfer said the company was making its own deepfake content in order to better detect manipulated content for removal, Business Insider's Alexei Oreskovic reported.

Facebook said in an announcement Monday that deepfake videos and manipulated media will be banned from the social media site.

The company said in a statement that it was taking a multi-pronged approach to address the issue, including investigating deceptive behaviors in AI-generated content and partnering with academia, government, and industry to better identify manipulated content.

"Manipulations can be made through simple technology like Photoshop or through sophisticated tools that use artificial intelligence or 'deep learning' techniques to create videos that distort reality - usually called 'deepfakes,'" the company said in a statement. "While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases."

The company defined the kind of "deepfake" content that will be removed from the site as follows:

  • It has been edited or synthesized - beyond adjustments for clarity or quality - in ways that aren't apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.
  • It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.

The policy does not affect content that is manipulated for the purpose of comedy or satire, the company said in the statement.

Videos that are flagged will be reviewed by third-party fact-checkers to determine whether the content is false. If it is, Facebook will "significantly reduce its distribution" in the News Feed, or reject the video if it is being run as an ad.

Moreover, people who attempt to share such content before it is removed from the site will see a warning alerting them that the content is false.

The company added that it established a partnership with Reuters to "help newsrooms worldwide to identify deepfakes and manipulated media through a free online training course."

"News organizations increasingly rely on third parties for large volumes of images and video, and identifying manipulated visuals is a significant challenge," the statement read. "This program aims to support newsrooms trying to do this work."

In September, Facebook CTO Mike Schroepfer said the company was making its own deepfake content in order to better detect manipulated content for removal, Business Insider's Alexei Oreskovic reported.

"The goal of the challenge is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer," Schroepfer wrote in a September blog post.

Last year, the company faced significant backlash from members of Congress, including Rep. Alexandria Ocasio-Cortez and presidential candidate Sen. Elizabeth Warren, over its policy of not fact-checking political ads.

Facebook said that in most instances it will not fact-check political ads, in an effort to protect free speech. The controversial policy came under fire after President Donald Trump ran political ads with false claims about former Vice President Joe Biden.

"Posts and ads from politicians are generally not subjected to fact-checking," according to Facebook's policy. "In evaluating when this applies we ask our fact-checking partners to look at politicians at every level."
