It only took 36 hours for these students to solve Facebook's fake news problem


Anant Goel, Nabanita De, Catherine Craft and Mark Craft

Facebook is facing increasing criticism over its role in the 2016 election for allowing propaganda and lies disguised as news stories to spread unchecked on the social network.


The spread of false information during the election cycle was so bad that President Barack Obama called Facebook a "dust cloud of nonsense."

And Business Insider's Alyson Shontell called Facebook CEO Mark Zuckerberg's reaction to this criticism "tone deaf." His public stance is that fake news makes up such a small percentage of what is shared on Facebook that it couldn't have had an impact, even as the company has officially vowed to do better and insisted that ferreting out the real news from the lies is a difficult technical problem.


Just how hard of a problem is it for an algorithm to determine real news from lies?

Not that hard.


During a hackathon at Princeton University, four college students built one in the form of a Chrome browser extension in just 36 hours. They named their project "FiB: Stop living a lie."

The students are Nabanita De, a second-year master's student in computer science at UMass Amherst; Anant Goel, a freshman at Purdue University; Mark Craft, a sophomore at the University of Illinois Urbana-Champaign (UIUC); and Catherine Craft, also a sophomore at UIUC.

Their news feed authenticity checker works like this, De tells us:

"It classifies every post, be it pictures (Twitter snapshots), adult content pictures, fake links, malware links, fake news links as verified or non-verified using artificial intelligence.

"For links, we take into account the website's reputation, also query it against malware and phishing websites database and also take the content, search it on Google/Bing, retrieve searches with high confidence and summarize that link and show to the user. For pictures like Twitter snapshots, we convert the image to text, use the usernames mentioned in the tweet, to get all tweets of the user and check if current tweet was ever posted by the user."


The browser plug-in then adds a little tag in the corner that says whether the story is verified or not.
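For the tagging step, a Chrome content script would typically find posts on the page, ask the extension's background logic for a verdict, and attach a small label. This is a minimal sketch of that idea, assuming a hypothetical "classify" message, Manifest V3 promise-based messaging, and generic DOM selectors; Facebook's markup and FiB's internals will differ.

```typescript
// Sketch of a content script that labels each post "verified" / "not verified".
// Selectors, styling, and the message shape are assumptions for illustration.

function tagPost(post: HTMLElement, verified: boolean): void {
  const tag = document.createElement("span");
  tag.textContent = verified ? "verified" : "not verified";
  tag.style.cssText =
    "position:absolute;top:4px;right:4px;padding:2px 6px;font-size:11px;color:#fff;" +
    `background:${verified ? "#2e7d32" : "#c62828"};`;
  post.style.position = "relative";
  post.appendChild(tag);
}

// Scan posts already on the page; a real extension would also watch for new
// posts with a MutationObserver as the feed scrolls.
document.querySelectorAll<HTMLElement>("[role='article']").forEach(async (post) => {
  const link = post.querySelector<HTMLAnchorElement>("a[href^='http']");
  if (!link) return;
  // Ask the extension's background script (hypothetical message type) to
  // classify this link, then attach the tag.
  const { verified } = await chrome.runtime.sendMessage({
    type: "classify",
    url: link.href,
  });
  tagPost(post, verified);
});
```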

For instance, it discovered that this news story promising that pot cures cancer was fake, so it noted that the story was "not verified."

But this news story about the Simpsons being bummed that the show predicted the election results? That was real and was tagged "verified."


The students have released their extension as an open-source project, so any developer with the know-how can install and tweak it.

A Chrome plug-in that labels fake news obviously isn't the total solution for Facebook to police itself. Ideally, Facebook would remove fake stories entirely, rather than adding a tiny, easy-to-miss tag or asking people to install a browser extension.

But the students show that algorithms can be built to determine, with reasonable certainty, which news is true and which isn't, and that something can be done to put that information in front of readers as they consider clicking.

Facebook, by the way, was one of the companies sponsoring this hackathon event.

Word is that many Facebook employees are so upset about the situation that a group of renegade employees inside the company has taken it upon itself to figure out how to fix the issue, BuzzFeed reports. Maybe FiB will give them a head start.

