Facebook is using AI and thousands of employees to weed out terrorists
In an interview with West Point's Combating Terrorism Center published Thursday, Brian Fishman said that Facebook had 4,500 employees in community operations working to get rid of terrorism-related and other offensive content, with plans to expand that team by 3,000.
The company is also using artificial intelligence to flag offending content, which humans can then review.
"We still think human beings are critical because computers are not very good yet at understanding nuanced context when it comes to terrorism," Fishman said. "For example, there are instances in which people are putting up a piece of ISIS propaganda, but they're condemning ISIS. You've seen this in CVE [countering violent extremism] types of context. We want to allow that counter speech."
Facebook is also using photo- and video-matching technology, which can, for example, take a piece of ISIS propaganda and store its fingerprint in a database, allowing the company to quickly recognize those images if a user on the platform posts them again.
"There are all sorts of complications to implementing this, but overall the technique is effective," Fishman said. "Facebook is not a good repository for that kind of material for these guys anymore, and they know it."