Facebook's CTO is so shaken by the toxic and violent content that has overwhelmed the social network that he cried in a New York Times interview


Reuters/Peter Nicholls

Facebook chief technology officer Mike Schroepfer.

The pressure Facebook has faced in trying to eliminate violent and offensive content from the platform is enough to make a grown man cry - literally, if you ask Facebook executive Mike Schroepfer.

Schroepfer, Facebook's chief technology officer, teared up multiple times during a series of interviews with the New York Times about the platform's recent policing efforts. Criticism of the platform has ramped up since the terrorist attacks in March on Christchurch, New Zealand, during which the shooter livestreamed his attack on Facebook.
Schroepfer "choked up" when talking about "the scale of the issues that Facebook was confronting and his responsibilities in changing them," the Times reports.

"It won't be fixed tomorrow," Schroepfer said about Facebook's efforts. "But I do not want to have this conversation again six months from now. We can do a much, much better job of catching this."

Read more: Facebook is dialling up punishments for users who abuse live video after the Christchurch massacre

The CTO is known for "often" letting his feelings show, "many" people told the Times. A former Facebook employee, venture capitalist Jocelyn Goldfein, said she'd seen Schroepfer cry in the office when she worked for the social platform.

Schroepfer has been tasked with building artificial intelligence tools for Facebook that can better detect harmful content and prevent something like the Christchurch shooting from being broadcast on Facebook again. To figure out how Facebook's technology can best identify the next terrorist-related video, Schroepfer had to watch the gruesome footage of the shooting "several times," according to the Times.

"I wish I could unsee it," Schroepfer said.

Facebook has taken some steps to prevent an incident like the New Zealand shooting livestream from repeating itself. The platform has implemented a "one strike" policy that immediately blocks users from livestreaming if they violate Facebook's "most serious" rules.

The company has also invested $7.5 million into research on better techniques for detecting videos that have been manipulated - the method by which millions of re-postings of the Christchurch shooting were able to get past Facebook's automated systems and spread online.

However, Schroepfer told the Times that his task of removing harmful posts is a complex one without an "endgame."

The number of posts is "never going to go to zero," he said.