Facebook's AI moderation reportedly can't interpret many languages, leaving users in some countries more susceptible to harmful posts
- Facebook's automated content moderators can't speak many languages used on the site.
- Human moderators also can't speak the languages used in some foreign markets.
- The blind spots sometimes let bad actors post harmful, violent content and conduct illegal business.
Facebook's artificial intelligence-powered content moderators can't read some languages used on the platform, raising concerns about how the company is policing content in countries that speak languages other than English, The Wall Street Journal reported Thursday.
The paper viewed company documents that show Facebook doesn't have enough employees capable of speaking local languages to monitor happenings in other countries, markets that the company has expanded into to bolster its non-US userbase. More than 90% of Facebook's monthly users are outside North America, per the paper.
The report shows how the lack of human moderators with multilingual skills - combined with the shortcomings of relying on automated systems to weed out toxic posts - is weakening Facebook's ability to monitor harmful content online, a topic that has brought heavy scrutiny on the company in recent years.
Facebook employees have expressed concerns about how the system has allowed bad actors to use the site for nefarious purposes, according to the documents viewed by The Journal.
A former vice president at the company told the paper that Facebook perceives potential harm in foreign countries as "simply the cost of doing business" in those markets. He also said there is "very rarely a significant, concerted effort to invest in fixing those areas."
Drug cartels and human traffickers have used Facebook to recruit victims. One cartel, in particular, poses the biggest criminal drug threat to the US, per US officials, and used multiple Facebook pages to post photos of violent, graphic scenes and gun imagery. An internal investigation team wanted the cartel banned completely, but the team tasked with doing so never followed up, per the report.
In Ethiopia, groups have used Facebook to incite violence against the Tigrayan people who are victims of ethnic cleansing. That content slipped through the cracks due to a lack of moderators who speak the native language. The company also hadn't translated its "community standards" rules to languages used in Ethiopia, per the Journal.
And most of Facebook's Moroccan Arabic-speaking moderators can't understand other Arabic dialects, which allowed violent content to remain on the platform.
In most cases, Facebook took down harmful posts only when they garnered public attention and hasn't fixed the automated systems - dubbed "classifiers" - that allowed that content to be published in the first place, per the report.
Facebook did not immediately respond to a request for comment.
Spokesman Andy Stone told the Journal that "in countries at risk for conflict and violence, we have a comprehensive strategy, including relying on global teams with native speakers covering over 50 languages, educational resources, and partnerships with local experts and third-party fact checkers to keep people safe."
The issue is reminiscent of what Facebook acknowledged as a lack of action against groups targeting the minority Rohingya group, victims of ethnic cleansing, in Myanmar in 2018.
Another example came in May, when Facebook employees said the company's removal of posts that included the hashtag al-Aqsa - a mosque in Jerusalem that is the third-holiest site in Islam - was "entirely unacceptable." The company had cracked down on the name because of a Palestinian militant coalition, the Al-Aqsa Martyrs' Brigades, which has been labeled a terrorist organization by the US and EU.
One employee said the company used both human and automated moderating systems and should have consulted with experts knowledgeable about the Palestinian-Israeli conflict, BuzzFeed reported.