Facebook is being blamed for Trump's election - but Mark Zuckerberg's response seems tone deaf

Facebook CEO Mark Zuckerberg. Getty / Stephen Lam

Half the nation is blaming Facebook for Donald Trump's election.

And Facebook feels that's very unfair!

The argument is that Facebook now plays a huge role in the distribution of information. Its nearly 2 billion active users may read traditional news sources like The New York Times and Business Insider, but they typically aren't visiting those websites directly. Instead, they're scrolling through Facebook's News Feed and reading articles shared by friends.

The problem is that Facebook users aren't always good at distinguishing legitimate news sources from satire, propaganda, or plain false information. And when bad information goes viral, it can distort public opinion.

The spread of false information during the election cycle was so bad that President Obama called Facebook a "dust cloud of nonsense."

"People, if they just repeat attacks enough, and outright lies over and over again, as long as it's on Facebook and people can see it, as long as its on social media, people start believing it," he said.

But Mark Zuckerberg doesn't seem to get that.

"Personally, I think the idea that fake news on Facebook - it's a very small amount of the content - influenced the election in any way is a pretty crazy idea," Zuckerberg said on Thursday night.

That seems a bit tone deaf.

If Facebook wants to be a platform where billions of people regularly find and share news, then it needs to accept some of the responsibility that comes with that power. That means coming up with guidelines to help spread information responsibly.

A messy business

It's not hard to see why Facebook is reluctant to do this. The internet was built on the legal foundation (codified in the US in Section 230 of the Communications Decency Act) that online companies are not liable for third-party content displayed on their sites.

Acting as an information gatekeeper and making editorial decisions is a difficult and messy business. Facebook learned this earlier this year when contractors it employed allegedly suppressed politically conservative articles from the trending news section.

Facebook's response to the controversy was to fire the contractors and let its algorithm decide which stories appear in the trending news box. Within days, the algorithm was promoting false stories, including a fabricated report that Fox News had fired anchor Megyn Kelly. Clearly more work needs to be done.

But there's good news: Facebook doesn't need to reinvent the wheel. Google has spent nearly two decades battling the distribution of bad content online. Facebook can borrow from that playbook by:

  1. Assessing the quality of the content being shared (and the authority of the people sharing it)
  2. Burying content that doesn't meet quality standards

Google has built an algorithm that prioritizes the quality and relevance of an article over everything else. Anyone can write anything online, but not every piece of content will show up in the first few pages of Google search results.
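To make those two steps concrete, here is a minimal sketch in Python of what "assess, then bury" could look like in a ranked feed. Every name and number in it - the signals, the weights, the 0.4 quality floor - is an illustrative assumption, not Facebook's or Google's actual ranking logic.

    from dataclasses import dataclass

    @dataclass
    class SharedLink:
        url: str
        source_authority: float   # hypothetical 0-1 signal: how established is the publisher?
        sharer_reputation: float  # hypothetical 0-1 signal: track record of the person sharing
        engagement: float         # hypothetical 0-1 signal: clicks, likes, reshares

    QUALITY_FLOOR = 0.4  # illustrative threshold; anything scoring below it is buried

    def quality_score(link: SharedLink) -> float:
        """Weight authority signals more heavily than raw engagement."""
        return (0.5 * link.source_authority
                + 0.3 * link.sharer_reputation
                + 0.2 * link.engagement)

    def rank_feed(links: list[SharedLink]) -> list[SharedLink]:
        """Step 1: assess every link. Step 2: bury what falls below the floor."""
        vetted = [link for link in links if quality_score(link) >= QUALITY_FLOOR]
        return sorted(vetted, key=quality_score, reverse=True)

    feed = [
        SharedLink("https://example.com/investigative-report", 0.9, 0.7, 0.4),
        SharedLink("https://example.com/viral-hoax", 0.1, 0.2, 0.95),
    ]
    for link in rank_feed(feed):
        print(f"{quality_score(link):.2f}  {link.url}")
    # Only the report survives: the hoax scores 0.30, below the 0.4 floor.

The design choice that matters here is that engagement carries the smallest weight, so a hoax that is wildly shared still sinks below a credible report.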

It's not perfect - just ask former US Senator Rick Santorum, the victim of the most famous Google bomb, in which a webpage characterizing the legislator in vulgar terms rose to the top of search results. But Google takes its responsibility for surfacing the right information seriously (in part because its business depends on it), and by and large people trust that the top results on Google will be legitimate.

The vetting game

Google also examines the source of an article carefully. It has an application process for publishers that want to be part of its Google News or AMP (Accelerated Mobile Pages) programs, and a team of Googlers reviews each applicant, rejecting sites that don't meet quality standards. Sites are evaluated on a number of criteria, including their "authority" on a subject, their "journalistic standards," and their ability to show "accountability" for content through proper attribution and author bio pages. A rejected site can reapply a few months later.

Facebook's Instant Articles, by contrast, don't seem to require much vetting at all. Instant Articles launched in a closed beta with a few Facebook-approved partners, but the program now appears open to almost any site that meets the required technical specifications. The only content requirement listed on the Instant Articles FAQ page for publishers is that articles not run afoul of Facebook's community standards, which bar things like sexually explicit content and violent threats.

Now that Facebook is such an important part of the news cycle, its vetting process needs to mature. It should evaluate the person sharing a piece of content, weigh the quality of the link being shared, and then determine how far a friend's status update should really spread.
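As a thought experiment, that three-part check could look something like the sketch below. The function, its inputs, and the idea of multiplying the two scores are all invented for illustration; Facebook's real distribution machinery is far more complex and is not public.

    def distribution_reach(sharer_reputation: float,
                           link_quality: float,
                           base_audience: int) -> int:
        """Scale how far a share travels by who shared it and what they shared.

        Both scores are hypothetical 0-1 signals that a platform would
        have to compute from its own data; nothing here mirrors Facebook.
        """
        # Multiplying the signals means a weak source OR an untrustworthy
        # sharer limits spread; a post reaches its full audience only when
        # both check out.
        trust = sharer_reputation * link_quality
        return int(base_audience * trust)

    # A reliable friend sharing a vetted article reaches most of their network...
    print(distribution_reach(0.9, 0.8, base_audience=500))   # 360
    # ...while a serial hoax-spreader posting a dubious link reaches almost no one.
    print(distribution_reach(0.2, 0.1, base_audience=500))   # 10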

Coming up with this sort of process isn't censorship. It's just being responsible.

This is an editorial. The opinions and conclusions expressed above are those of the author.
