Western governments enter uncharted territory as they grapple with problem of harmful content on social media

  • Governments around the world, including the UK and Singapore, have moved to take a more active role in deciding what constitutes acceptable content
In 2018 UN investigators concluded that the Myanmar military had for years used the social media platform Facebook as a tool to enable ethnic cleansing of the Muslim Rohingya. Hundreds of Facebook pages had been set up and populated with content aimed at inciting hatred against the Rohingya, hundreds of thousands of whom were forced to flee the ensuing genocide.

However, the tipping point in the debate over what constitutes acceptable internet discourse and content appeared to come last month, when an Australian gunman live-streamed a mass shooting at two Christchurch mosques. Three weeks later, Australia enacted sweeping legislation, conceived and passed in just five days, to punish social media companies that fail to remove “abhorrent, violent material” from their platforms “expeditiously”.

Under Australia’s new legislation, employees of social media sites face up to three years in prison and the companies involved could be fined up to 10 per cent of their annual global turnover.

“It’s dystopian, but I think China’s [regulation method] is going to be the future,” said Aram Sinnreich, an associate professor at American University's School of Communication, referring to the “Great Firewall” that blocks foreign social media platforms and censors content deemed politically sensitive or disruptive to public order.


For the West and many other countries, government regulation of social media is uncharted territory. Proposals to regulate for the sake of public interest have been met with fears that it would impede freedom of expression, even as platforms like Facebook – which eventually removed the anti-Rohingya sites – agree that internet companies should be “accountable for enforcing standards on harmful content”.

Australia is not alone. Governments around the world, including the UK and Singapore, have moved to take a more active role in deciding what constitutes acceptable content in an era where social media content is king and anything can be shared at the push of a button.

Experts say the core of the issue is that while internet companies like Facebook and some governments want to act in the public interest, nobody has devised a solution that does not risk infringing rights such as free speech.

The dangers of allowing a platform to self-regulate were highlighted by a recent Bloomberg report, which found that top executives at the video-streaming platform YouTube, focused on increasing the amount of time users spent watching videos, allowed its algorithm to recommend videos bordering on hate speech or pushing conspiracy theories, because such videos tended to attract large numbers of views.

“At the beginning, governments intended for platforms to self-regulate, but obviously this has not worked,” said Fu King-wa, an associate professor at the University of Hong Kong’s Journalism and Media Studies Centre.

“Now governments around the world are taking it seriously to [protect] social media in their local communities, not just from inappropriate content, but also from disinformation especially when it comes from another country.”

Singapore, always circumspect about how media can be used to inflame religious and racial fault lines, earlier this month unveiled a draft law aimed at punishing individuals and social media sites for maliciously spreading online falsehoods.

Individuals in Singapore who fabricate news or refuse to comply when asked to remove falsehoods could face a decade in prison or fines of up to S$20,000 (US$14,765). Firms can be fined up to S$1 million.

The UK last week released the Online Harms White Paper which proposed broad social media regulations, including instituting a new regulator with enforcement powers to fine companies and executives who breach the “code of practice” by failing to remove content promoting terrorism, hate crimes and self-harm.

The origins of the fight against inappropriate content and disinformation can be traced back to 2016, amid discoveries of Russian interference and hordes of fake news sites originating in Macedonia, aimed at influencing the US elections and Brexit. That was when the potential for social media platforms to be used maliciously to influence society came to the forefront of social consciousness.

Social media platforms like Facebook and Twitter are inherently vulnerable to disinformation because they allow individuals to target advertisements and content at specific groups of people based on characteristics like age, gender, education-level and even political and religious beliefs.

“The kind of disinformation campaigns we’ve seen are endemic to the platform and cannot be stopped without changing the entire platform and business models,” said Sinnreich.

He pointed out that the worst-case scenario for democratic countries would be for governments to implement social media regulations that create the apparatus of a totalitarian state, allowing political leaders to control access to the internet.

The complex nature of how social media platforms are structured has led some to believe that these companies are now too big to fail. US presidential candidate Elizabeth Warren has called for the government to break up big tech companies like Amazon, Google and Facebook. Her argument is that they wield too much power by monopolising the market and using private information for profit, which in turn allows foreign actors to exert social control.

Facebook and other platforms have moved to tighten advertising rules to prevent such misuse. In March Facebook chief executive Mark Zuckerberg called for governments to play a more active role in regulating the internet.

“Lawmakers often tell me we have too much power over speech, and frankly I agree,” said Zuckerberg in a statement. “I’ve come to believe that we shouldn’t make so many important decisions about speech on our own.”

The Facebook founder proposed a standardised approach when it comes to deciding what counts as harmful content and to regulate political advertisements online.

The question of whether social media can and should be regulated has sparked debate around the world, primarily because such regulations are uncharted territory. Traditionally, in many countries, free speech, the press and advertising are considered separate domains and may be regulated differently. But social media platforms today combine all of these: interpersonal communication, content publishing and even advertising.

“Platforms are kind of like a vertically-integrated chimera, like a beast made up of many different other beasts,” said Sinnreich.


A regulation imposed on a platform might be appropriate for one type of content, such as advertising, but inappropriate for other functions, such as private messages between users, he pointed out.

“Platforms are so opaque that it’s impossible for regulators to figure out where one begins and the other ends … it’s hard to regulate uniformly, but it’s also hard to regulate piecemeal,” he said.

The global trend towards regulating online content has provoked fears of censorship and restrictions on free speech and civil liberties. Broadly defined regulation might inadvertently cause platforms to err on the side of caution and go overboard in censoring themselves.

“When lawmakers create new rules that have never been tested by courts – like Australia's new law or the rules proposed in the UK's White Paper – and then tell platforms to enforce them, we can only expect that a broad swathe of perfectly legal speech is going to disappear,” said Daphne Keller, director of intermediary liability at the Stanford Centre for Internet and Society.

“Even lawmakers telling platforms to enforce existing laws should expect bad results when they impose such heavy penalties on platforms. The incentive is to silence anything that falls in a grey area, or anything that an accuser claims is illegal.”

Locked out of an online life
In some cases, critics say such laws and legislation place too much power in the hands of governments to decide what kind of content is acceptable.

Singapore’s draft law, for example, has been criticised because it relies on the Singapore government deciding whether the news in question is factual, and gives it the power to issue mandatory corrections.

The government has said the law is targeted only at those who deliberately fabricate news and those who spread it without knowing the truth “don’t have to be concerned”.

The Southeast Asian island state already ranks poorly in terms of press freedom, at No. 151 out of 180 countries and regions, compared to China’s ranking of 176th.

Ultimately, critics argue, comprehensive social media regulations should be accompanied by mechanisms allowing platforms and individuals to challenge removals, with penalties imposed on those who abuse the system.

“Every country faces the problem of regulating and protecting speech. Ultimately, it’s a constitutional and political issue,” said Lei Ya-wen, assistant professor of sociology at Harvard University.

“One major difference between how China deals with the problem and how many other countries deal with it is whether citizens have functioning legal channels to dispute the government’s administrative decisions, judicial decisions, regulations, and laws.”

Additional reporting by Meng Jing