A new algorithm could catch social-media trolls as they try to influence US elections. Researchers are offering it for free.

  • Researchers have created a machine-learning technique that can identify disinformation campaigns on social media.
  • The algorithm finds patterns of suspicious posts and accounts, which could help companies shut down these campaigns early on.
  • The method works on various social-media platforms, which could enable companies to coordinate their efforts.

Election season now brings an expected threat: foreign meddling via online disinformation campaigns.

To combat this interference, a team of researchers has developed a machine-learning algorithm that spots and flags internet trolls as they pop up. The technique, the programmers say, could help social-media companies quickly shut down coordinated efforts to meddle in US elections.

The tool, described in a study published Wednesday in the journal Science Advances, works by learning to recognize known, common patterns associated with troll activity and disinformation campaigns. Russian troll accounts, for instance, have posted many links to far-right websites, but the content on those sites didn't always match the posts' accompanying text or images. Venezuelan trolls, meanwhile, have often posted links to fake websites.

Based on its knowledge of this type of pattern, the algorithm then identifies other accounts and posts with similarly suspicious activity.
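
The paper's implementation isn't reproduced here, but the basic recipe can be sketched: fit a classifier on the content of posts from accounts already tied to known campaigns, then score unseen posts by how closely they match that content. The minimal Python sketch below uses TF-IDF text features and logistic regression as stand-ins; the study's actual feature set (shared URLs, hashtags, and so on) is richer, and the sample posts and domain names are invented.

```python
# Minimal sketch of the content-based approach described above, using
# scikit-learn. TF-IDF + logistic regression are stand-ins for the
# study's richer feature set; all sample posts here are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Label 1: posts from accounts already tied to known campaigns.
# Label 0: posts from ordinary users.
posts = [
    "The REAL story the media hides -> far-right-news.example/article",
    "They don't want you to see this: far-right-news.example/truth",
    "Watching the game with friends tonight!",
    "New recipe turned out great, thanks for the tips everyone.",
]
labels = [1, 1, 0, 0]

# Word and bigram frequencies capture the recurring "political message"
# patterns; the classifier then scores unseen posts against them.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(posts, labels)

# Higher probability = more similar to known troll content.
new_post = ["BREAKING: the hidden truth at far-right-news.example"]
print(model.predict_proba(new_post)[0][1])
```

Because the features come from post content rather than platform-specific metadata, the same fitted pipeline could in principle score posts pulled from another platform, which is the cross-platform property the researchers highlight.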

The researchers said the tool works on a variety of social-media platforms — in tests, it identified trolls on Twitter and Reddit using the same techniques.

[Photo: Twitter CEO Jack Dorsey. REUTERS/Anushree Fadnav]

"A major value of the findings is the ability to transfer them over time and platforms," Cody Buntain, an assistant professor in informatics at New Jersey Institute of Technology and a co-author of the study, told Business Insider. "It allows platforms to respond and track in real time," he added.

A coordinated plan of attack

Buntain said many social-media companies are probably already using machine learning to identify and remove trolls from their platforms, but his team's model offers a way for companies to coordinate their efforts. It also allows companies to find new campaigns early and predict some elements of future disinformation campaigns, since it can use data from known troll accounts to identify brand-new ones.

"Being able to do forecasting gives the platform some time to do real interventions," Buntain said.

To test their algorithm, the researchers trained it on publicly available Twitter data: posts and links from accounts tied to disinformation campaigns originating in Russia, China, and Venezuela.

Then they added data from ordinary Twitter accounts, as well as more politically active ones, to see how effectively their tool could sniff out trolls. In each test, the model identified a majority of the posts and accounts involved in disinformation campaigns, even when those accounts were brand-new.
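
The study's evaluation data and code aren't reproduced here, but the shape of that test can be illustrated: fit on labeled posts from one period, then score posts from accounts the model has never seen. In the Python sketch below, random feature vectors stand in for real per-post content features, and all sizes and values are assumptions made for the example.

```python
# Sketch of the held-out evaluation described above. Random vectors
# stand in for per-post content features; the protocol, not the data,
# is the point: train on known accounts, test on unseen ones.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)

# Training period: posts from known troll accounts (label 1) and from
# ordinary plus politically active users (label 0).
X_train = np.vstack([rng.normal(1.0, 1.0, (200, 10)),
                     rng.normal(0.0, 1.0, (200, 10))])
y_train = np.array([1] * 200 + [0] * 200)

# Test period: posts from accounts unseen in training, including
# brand-new troll accounts that reuse the campaign's content patterns.
X_test = np.vstack([rng.normal(1.0, 1.0, (50, 10)),
                    rng.normal(0.0, 1.0, (50, 10))])
y_test = np.array([1] * 50 + [0] * 50)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
print("precision:", precision_score(y_test, pred))
print("recall:", recall_score(y_test, pred))
```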

"We're capturing something in the political message that these trolls are pushing," Buntain said.

The model was also effective at separating trolls from authentic, politically engaged Twitter users.

Putting the tool into practice

Buntain and the other researchers are offering their algorithm for free, though they said they'd like to partner with companies to get better insight into how to optimize the tool. At least one company has expressed interest in their work, Buntain added.

Ideally, platforms that adopt the new tool could use its findings to quickly remove or suppress users who post suspicious content, warn human users when they exhibit troll-like behavior, and warn the public about disinformation campaigns as they're found.
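
The study doesn't prescribe how a platform should act on the model's output. As one hypothetical illustration, a platform could map each account's troll score to a graduated response, with the thresholds below invented for the example:

```python
# Hypothetical mapping from a model's troll score (0 to 1) to graduated
# platform responses; all thresholds here are invented for illustration.
def intervention(troll_score: float) -> str:
    if troll_score > 0.95:
        return "suppress posts and queue the account for human review"
    if troll_score > 0.75:
        return "warn the user about troll-like behavior and downrank"
    if troll_score > 0.50:
        return "log the account for analysis of a possible campaign"
    return "no action"

for score in (0.97, 0.80, 0.60, 0.20):
    print(f"{score:.2f} -> {intervention(score)}")
```

Reserving the strongest action for human review is one way a platform might hedge against the false positives Buntain warns about below.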

However, Buntain also said that companies need to be careful if they adopt this kind of algorithm, since the method isn't 100% accurate.

"The concern — and I think this is legitimate — is, what happens when you're wrong?" he said.

He also noted that companies would need to be vigilant about staying a step ahead of disinformation coordinators who might try to thwart the tool. Russian campaigns, for instance, have grown increasingly sophisticated since early 2015.

Retraining the algorithm every few weeks can help it stay up to date, Buntain added, but when major events or political actors start trending on a platform, it can take a few days for the tool to catch up. Given that lag, human troll-watchers would still be necessary, too.
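
A toy version of that retraining cadence might look like the sketch below, where the classifier is refit every other week on a sliding window of recent labeled posts; the cadence, window length, and simulated data are all assumptions made for illustration.

```python
# Illustrative periodic retraining: refit on a sliding window of recent
# labeled posts so the model tracks newly trending events and actors.
# The data stream here is simulated with random placeholder values.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
WINDOW_WEEKS = 8      # train on roughly the last two months of posts
RETRAIN_EVERY = 2     # refit every other week

# One batch of labeled post-feature vectors arrives per week.
weekly_X = [rng.normal(size=(100, 10)) for _ in range(16)]
weekly_y = [rng.integers(0, 2, 100) for _ in range(16)]

model = None
for week in range(16):
    if week % RETRAIN_EVERY == 0:
        start = max(0, week - WINDOW_WEEKS)
        X = np.vstack(weekly_X[start:week + 1])
        y = np.concatenate(weekly_y[start:week + 1])
        model = LogisticRegression(max_iter=1000).fit(X, y)
    # Between refits the stale model keeps scoring incoming posts,
    # which is where the few-day lag on breaking events comes from.
    flagged = int(model.predict(weekly_X[week]).sum())
    print(f"week {week}: flagged {flagged} of 100 posts")
```

In practice, the refit window and cadence would be tuned to how quickly campaigns shift on a given platform.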
