Amazon, Facebook, Twitter, and YouTube are all facing moderation issues - here's how America's tech giants are struggling to police their massive platforms

CEO Jeff Bezos, among the products sold by his online marketplace Amazon. Business Insider

  • The Wall Street Journal published an investigation on Friday into Amazon's "struggle to police its site." The report found that Amazon is allowing thousands of "banned, unsafe, or mislabeled" items to be sold on its marketplace.
  • YouTube, Facebook, and Twitter similarly struggle to moderate content, illustrating the larger issue of tech platforms swelling to a size where current guidelines for oversight are no longer adequate.

Amazon sells hundreds of millions of items on its online marketplace. With such a massive quantity of products, the question becomes: How does the company regulate what's being sold?

The Wall Street Journal published an investigation on Friday into Amazon's "struggle to police its site." The Journal found over 4,000 items being sold on Amazon that the publication says were labeled deceptively, determined to be unsafe by federal agencies, or banned by federal regulators. Those items included a children's toy xylophone with four times the federal limit for lead, knockoff magnetic children's toys that can cause internal damage if swallowed, and a motorcycle helmet falsely listed as US Department of Transportation-certified.

Read more: Amazon was caught selling thousands of items that have been declared unsafe by federal agencies

"The challenge for Amazon is that what built the marketplace - it being so open and welcoming to new sellers - has also meant that today Amazon is really having a hard time enforcing many of its own rules, because it just physically cannot allocate enough human power to police many of these rules," Juozas Kaziukenas, an e-commerce analyst, told the Journal.


That issue is mirrored at tech giants like YouTube, Twitter, and Facebook. These enormous, open tech platforms have struggled to weed through reams of problematic content, from Facebook posts calling for the genocide of the Rohingya minority Muslim group in Myanmar to YouTube videos calling survivors of mass shootings at US schools "crisis actors."

Here are the issues tech giants like Amazon, YouTube, Facebook, and Twitter are facing, and how they're attempting to regulate their platforms.



Amazon was caught selling thousands of items that have been declared unsafe by federal agencies. In response, the company says it uses machine learning and natural language processing to monitor the items it sells.

The Wall Street Journal published an investigation into Amazon's "struggle to police its site," finding over 4,000 items being sold on Amazon that the Journal says were labeled deceptively, determined to be unsafe by federal agencies, or banned by federal regulators. Items included unsafe children's toys.

Amazon has now released a statement in response to the Journal article clarifying its practices in monitoring the items sold on its marketplace:

  • New seller accounts are vetted via proprietary machine learning technology.
  • Amazon compliance specialists review the documents submitted by new sellers confirming that their products meet regulations (e.g., that children's toys comply with Consumer Product Safety Commission rules).
  • Products being sold on Amazon are continuously analyzed using machine learning and natural language processing tools (a simplified sketch of this kind of screening appears after the source note below).
  • Amazon takes safety reports from customers and regulatory agencies into consideration and subsequently launches investigations.

Source: Wall Street Journal, Amazon
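
Amazon hasn't published the details of these systems, but the general pattern it describes, automated text screening that escalates suspicious listings to human reviewers, can be sketched in a few lines of Python. Everything below is a hypothetical illustration: the phrase list, the recalled-product IDs, and the `Listing` fields are assumptions made for the sake of the example, not Amazon's actual implementation.

```python
# Toy sketch of automated listing screening with escalation to human review.
# Hypothetical: the rules, identifiers, and fields are illustrative only.
from dataclasses import dataclass

# Claims that often warrant compliance review when they appear in listing text.
SUSPECT_PHRASES = [
    "dot certified",   # safety certifications that sellers sometimes claim falsely
    "fda approved",
    "cpsc approved",
]

# Identifiers of products already recalled or banned by regulators (made up here).
RECALLED_IDS = {"B00FAKE123", "B00FAKE456"}


@dataclass
class Listing:
    product_id: str
    title: str
    description: str


def screen_listing(listing: Listing) -> list[str]:
    """Return the reasons, if any, that a listing should go to human review."""
    reasons = []
    text = f"{listing.title} {listing.description}".lower()
    if listing.product_id in RECALLED_IDS:
        reasons.append("matches a recalled or banned product ID")
    for phrase in SUSPECT_PHRASES:
        if phrase in text:
            reasons.append(f"makes a regulated claim: '{phrase}'")
    return reasons


if __name__ == "__main__":
    listings = [
        Listing("B00FAKE123", "Kids toy xylophone", "Bright colors, ages 2+"),
        Listing("B00SAFE789", "Motorcycle helmet", "DOT certified full-face helmet"),
        Listing("B00SAFE000", "Desk lamp", "LED lamp with USB charging port"),
    ]
    review_queue = []
    for listing in listings:
        reasons = screen_listing(listing)
        if reasons:
            # In practice these would be routed to compliance specialists.
            review_queue.append((listing, reasons))
    for listing, reasons in review_queue:
        print(listing.product_id, "->", "; ".join(reasons))
```

A production system would presumably replace the keyword rules with trained text classifiers and match listings against regulators' recall databases, but the escalate-to-human-review pattern is the one Amazon describes.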

YouTube, which has more than 1 billion users, has been criticized for not moving swiftly to take down homophobic content and videos denying the veracity of mass shootings at schools. The company has updated its hate speech policies and currently employs 10,000 human moderators.

YouTube has repeatedly faced issues with regulating the content that appears on its site. Following the February 2018 mass shooting at Marjory Stoneman Douglas High School in Parkland, Florida, videos appeared on YouTube calling one of the survivors a crisis actor. The account that posted those videos, InfoWars, also uploaded videos calling the 2012 mass shooting at Sandy Hook Elementary School a hoax.

The most recent example came in June, when Vox journalist Carlos Maza spoke out about conservative media star Steven Crowder, who had repeatedly mocked Maza's sexual orientation and hurled ethnic slurs at him in multiple YouTube videos.

Five days after Maza took to Twitter to highlight Crowder's attacks, YouTube announced that Crowder was not in violation of any of its policies. One day later, YouTube reversed course and demonetized Crowder's videos.

YouTube then updated its hate speech policy, "prohibiting videos alleging that a group is superior in order to justify discrimination, segregation or exclusion." Critics called the update weak and its timing suspect.

Source: Wired, The Guardian, Business Insider


Facebook has been criticized for failing to adequately moderate both hate speech and calls for genocide of the Rohingya minority Muslim group in Myanmar. And while the company employs human content moderators, that comes with its own set of issues.

In August 2018, a Reuters investigation found that Facebook did not adequately moderate both hate speech and calls for genocide of the Rohingya minority Muslim group in Myanmar. According to Human Rights Watch, nearly 700,000 Rohingya refugees have fled Myanmar's Rakhine State since August 2017 because of a military-led ethnic cleansing campaign.

ProPublica has called Facebook's enforcement of hate speech rules "uneven," explaining that "its content reviewers often make different calls on items with similar content, and don't always abide by the company's complex guidelines."

Facebook's use of human content moderators came under scrutiny in June when The Verge reported that Keith Utley — a 42-year-old employee at a Facebook moderation office operated by Cognizant in Tampa, Florida — died of a heart attack on the job a year prior.

The Verge also reported poor working conditions at the site, where content moderators watch hours of graphic footage posted to Facebook in an effort to keep the platform free of anything that violates its terms of service.

Five days after The Verge's piece, Fast Company reported that Facebook was expanding its tools for content moderators, with the goal of buffering the negative psychological effects of viewing disturbing posts, and that those tools were in development before The Verge's story was published.

Sources: ProPublica, Reuters, Fast Company, Business Insider, The Verge

Twitter has made policy changes to help curb abuse, but it still has a long-standing white supremacist problem.

Twitter says it's been making progress when it comes to ridding its site of spam and abuse. The platform — which has 300 million monthly users, according to Recode — said in an April blog post that rather than relying on users to flag abuse, it's implemented technology that automatically flags offensive content.

Twitter said that "by using technology, 38% of abusive content that's enforced is surfaced proactively for human review instead of relying on reports from people using Twitter."
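
Twitter hasn't said how those automated flags work, but the 38% figure itself is just a ratio: of all the abusive content that ends up being enforced against, how much first reached human reviewers through an automated flag rather than a user report? A toy calculation in Python, using made-up enforcement records, might look like this.

```python
# Hypothetical calculation of a "surfaced proactively" metric, in the spirit of
# Twitter's reported 38% figure. The enforcement records below are made up.
enforced_items = [
    {"id": 1, "surfaced_by": "automated_flag"},
    {"id": 2, "surfaced_by": "user_report"},
    {"id": 3, "surfaced_by": "automated_flag"},
    {"id": 4, "surfaced_by": "user_report"},
    {"id": 5, "surfaced_by": "user_report"},
]

proactive = sum(1 for item in enforced_items if item["surfaced_by"] == "automated_flag")
share = proactive / len(enforced_items)
print(f"{share:.0%} of enforced content was surfaced proactively")  # 40% in this toy data
```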

Yet, Twitter just can't seem to figure out how to block white supremacists. Vice reported that a Twitter employee said that "Twitter hasn't taken the same aggressive approach to white supremacist content because the collateral accounts that are impacted can, in some instances, be Republican politicians."

"The employee argued that, on a technical level, content from Republican politicians could get swept up by algorithms aggressively removing white supremacist material," Vice reported.

Source: Vice, Twitter, Recode

Got a tip on content moderation and product regulation at major tech companies? Reach this article's writer, Rebecca Aydin, via email at raydin@businessinsider.com
