AI is not smart enough to solve Meta's content-policing problems, whistleblowers say
Isobel Asher Hamilton
Jun 15, 2022, 20:24 IST
Frances Haugen leaves the UK Houses of Parliament on October 25, 2021. REUTERS/Henry Nicholls
Facebook whistleblower Frances Haugen appeared at an event with ex-Facebook moderator Daniel Motaung.
Haugen said although Meta talks up AI as a moderation tool, it is too blunt an instrument.
Artificial intelligence is nowhere near good enough to address problems facing content moderation on Facebook, according to whistleblower Frances Haugen.
But according to Haugen and former Facebook moderator Daniel Motaung, Meta's promotion of AI as a moderation tool is a smokescreen that obscures the work done by thousands of human moderators, some of whom say they have suffered severe mental health issues as a result of the job.
AI is just not as smart as Silicon Valley makes out
Haugen said current AI is nowhere near intelligent enough to interpret the nuances of human speech.
She said using AI for moderation always involves a tradeoff: an overbearing algorithm removes content that doesn't violate Facebook's policies, while a more lenient one misses content that does.
"Those AIs can only be so precise," she said. "People who actually work on it call it machine learning, because it is not intelligent."
"It is only learning inferences, statistical inferences from the data that is fed in."
Meta's focus on AI is in part driven by a desire to cut costs, Haugen claimed.
From left to right: TIME journalist Billy Perrigo, Facebook whistleblower Frances Haugen, Foxglove director Cori Crider, and ex-Facebook moderator Daniel Motaung. Sheridan Flynn
"Facebook is always trying to ... have more of the work, be done by computers, by AI, and less being done by people," Haugen said, saying this is because employing human moderators is more expensive than using automated systems.
When Facebook's AI systems can't make a judgment call on a piece of content, it falls to human moderators to review it.
Motaung said in his lawsuit against Meta that viewing graphic, disturbing content all day as a moderator gave him PTSD.
Motaung said Tuesday the work left him "broken."
During Tuesday's event, both Motaung and Haugen said Meta could make the job safer for its content moderators.
"Even saying: 'Hey, you are looking at things that traumatize you every day. We're going to pay you for a full week, but you only have to come in every other day ... that is a real intervention," Haugen said.
"That would radically reduce the harm, and they choose not to spend even that small amount of money," she added.
Motaung and Haugen both said Meta outsources its moderation work to contractors in order to limit liability.
In a message to Mark Zuckerberg, Motaung said he wanted to know whether Zuckerberg would intervene to help moderators. "Go and take care of it. Do what's right," Motaung said.
Meta did not immediately respond to Insider's request for comment.