Twitter is testing a system that lets users label false or misleading tweets by politicians, taking a cue from sites like Wikipedia and Reddit.

  • Twitter is testing a new feature that would show warnings below tweets from politicians and public figures that have been flagged as misleading by users, according to NBC News.
  • Large, brightly colored labels would appear below tweets reported as "harmfully misleading," according to NBC.
  • One version being considered would also rely on a points-based system to prevent trolls from taking over and encourage users to "contribute in good faith and act like a good neighbor."
  • This latest experiment suggests Twitter will continue to rely on community input as it tries to fight misinformation.

Twitter is testing new ways to warn users about misinformation posted by politicians and public figures, relying on community input "like Wikipedia," according to NBC News.


The experimental feature would display large, brightly colored labels directly below tweets flagged by fact-checkers, journalists, and potentially other users as likely to be "harmfully misleading," according to a leaked demo seen by NBC. The labels would include a correction and an option for users to chime in with their own vote.

One version would also count those votes differently based on how many "points" a user has earned, which would be awarded to those who "contribute in good faith and act like a good neighbor," according to the demo.

While flagging content, users would be asked three questions, according to NBC: whether the tweet is "likely" or "unlikely" harmfully misleading, how many users out of 100 would agree with them, and why the tweet is misleading.

The points system and questions could be an effort to prevent trolls and partisans from trying to flag opinions they disagree with.
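Twitter has not published how such a system would actually tally community votes, but a rough sketch can illustrate the idea the demo describes. The Python example below is a minimal, hypothetical illustration of point-weighted flagging: the field names, the weighting formula, and the labeling threshold are all assumptions made for the sake of the example, not Twitter's design.

```python
# Hypothetical sketch of points-weighted community flagging.
# Nothing here reflects Twitter's actual implementation; the weighting
# formula, threshold, and field names are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Flag:
    """One user's answers to the reported screening questions."""
    reporter_points: int      # "good neighbor" points the reporter has earned
    likely_misleading: bool   # "likely" vs. "unlikely" harmfully misleading
    agreement_estimate: int   # how many users out of 100 they think would agree
    reason: str               # free-text explanation of why it's misleading


def weight(points: int) -> float:
    """Assumed weighting: reporters with more good-faith points count more,
    with diminishing returns so no single account can dominate."""
    return 1.0 + min(points, 100) / 100.0   # weight ranges from 1.0 to 2.0


def should_label(flags: list[Flag], threshold: float = 0.66) -> bool:
    """Label the tweet if the weighted share of 'likely misleading'
    votes exceeds the (assumed) threshold."""
    if not flags:
        return False
    total = sum(weight(f.reporter_points) for f in flags)
    misleading = sum(weight(f.reporter_points) for f in flags if f.likely_misleading)
    return misleading / total >= threshold


if __name__ == "__main__":
    reports = [
        Flag(80, True, 75, "Misstates the official vote count"),
        Flag(5, False, 40, "Seems like opinion, not a factual claim"),
        Flag(60, True, 90, "Contradicts the published election results"),
    ]
    print(should_label(reports))  # True: the weighted majority says "likely misleading"
```

In this toy version, capping the weight keeps long-standing accounts from single-handedly deciding a label, which is one plausible way a points system could blunt brigading by trolls or partisans.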


"We're exploring a number of ways to address misinformation and provide more context for Tweets on Twitter. This is a design mockup for one option that would involve community feedback. Misinformation is a critical issue and we will be testing many different ways to address it," a Twitter spokesperson told Business Insider in an emailed statement.

Twitter and other social media companies have been struggling to combat the spread of misinformation, especially when it comes to political content, and they're facing intense pressure to sort things out ahead of the 2020 presidential election.

Twitter recently announced new rules banning "synthetic or manipulated" content that could cause harm, and said it would start labeling various types of manipulated content on March 5.

As social media platforms race to find effective solutions, Twitter has looked to users to a greater extent than companies like Facebook and YouTube. Last November, it put out a call for public feedback that the company said helped inform its new manipulated media policy. This latest experiment suggests that Twitter is continuing to take cues from Wikipedia, Reddit, and other online platforms where users play a significant role in moderating content.

