No smoking, no tattoos, no bikinis: inside China’s war to ‘clean up’ the internet

Illustration by Perry Tse
  • China’s social media companies employ thousands of people to censor content that runs afoul of the country’s stringent internet regulations.
  • While AI is used to weed out banned content, many decisions are still made by humans, especially when they hinge on context.
When are bikinis allowed on China’s live-streaming apps, and when are they not?

As content moderators at Inke, one of China’s largest live-streaming companies with 25 million users, Zhi Heng and his brigade of 1,200 mostly fresh-faced college graduates have seconds to decide whether the two-piece swimwear on their screens breaches rules governing use of the platform.

Here on the front lines of China’s war to police the internet, companies employ armies of censors to adjudicate the sea of content produced each day for and by the world’s biggest online population.

As of the end of last year, almost 400 million people in China had done the equivalent of a Facebook Live and live-streamed their activities on the internet. Most of it is innocuous: showing relatives and friends back home the sights of Paris, or showing nobody in particular what they are having for lunch or dinner.

There are also professional “live-streamers” who broadcast for a living, much like YouTubers do on the Google-owned platform. Many of these pro streamers use the app to sell merchandise; others sing sappy love songs in return for virtual rewards. If one were to count short-form videos, messaging apps, online forums and other formats, the amount of content produced each day would be impossible to censor without the help of technology.

“You need to really focus on your work,” Zhi Heng, who heads Inke’s content safety team, said in an interview at the company’s offices in a high-tech industrial park in Changsha, central China. “You cannot let through anything that is against the law and regulations, against mainstream values and against the company’s values.”


Inke agreed to show the Post how its content moderation operations work, the first time that the Hong Kong-listed company has given an interview about a topic that is usually regarded as sensitive by companies and regulators.

Headquartered in Beijing with a second base in Changsha, Inke employs artificial intelligence algorithms and recognition software to help human moderators do their jobs.

AI is employed to handle the grunt work of labelling, rating and sorting content into different risk categories. This classification system then allows the company to devote resources in ascending order of risk. A single reviewer can monitor more low-risk content at one time, say cooking shows, while high-risk content is flagged for closer scrutiny.
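The broad shape of such a triage pipeline can be sketched in a few lines of code. The risk tiers, score thresholds and streams-per-reviewer ratios below are illustrative assumptions for the sake of the example, not Inke’s actual parameters:

```python
# Hypothetical sketch of AI-assisted triage: an upstream model scores each
# stream, streams are bucketed into risk tiers, and reviewer attention is
# allocated in ascending order of risk. All numbers are invented.
from dataclasses import dataclass

@dataclass
class Stream:
    stream_id: str
    risk_score: float  # 0.0 (benign) to 1.0 (high risk), from an AI model

# How many streams one reviewer can watch at once, per tier (assumed values).
STREAMS_PER_REVIEWER = {"low": 16, "medium": 4, "high": 1}

def tier(stream: Stream) -> str:
    if stream.risk_score < 0.3:
        return "low"      # e.g. cooking shows
    if stream.risk_score < 0.7:
        return "medium"
    return "high"         # flagged for close, one-on-one scrutiny

def assign(streams: list[Stream]) -> dict[str, list[list[Stream]]]:
    """Group streams by tier, then chunk each tier into reviewer workloads."""
    buckets: dict[str, list[Stream]] = {"low": [], "medium": [], "high": []}
    for s in streams:
        buckets[tier(s)].append(s)
    workloads = {}
    for t, group in buckets.items():
        n = STREAMS_PER_REVIEWER[t]
        workloads[t] = [group[i:i + n] for i in range(0, len(group), n)]
    return workloads
```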

Which brings us back to the bikini question. The answer to that, it turns out, is something AI is not yet very good at: context.

To an algorithm, a bikini is a bikini. But to a human, a bikini in different settings can mean very different things. So, a bikini at a swimming pool with children running about? Fine. A skimpy two-piece in a bedroom with soft, romantic background music? Probably not.

The most-censored activity on Inke’s live-streaming platform, though, is smoking, which is not allowed because the authorities see it as promoting an unhealthy lifestyle. Showing excessive tattoos is also a no-no.
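In software terms, the gap is between a single object label and a joint judgment over scene signals. A toy sketch, with feature names and rules invented purely for illustration, might look like this:

```python
# Toy illustration of why context matters: the same "bikini" label leads to
# different verdicts depending on co-occurring scene signals. All feature
# names and rules here are hypothetical.
def verdict(labels: set[str]) -> str:
    if "bikini" in labels:
        if "swimming_pool" in labels or "beach" in labels:
            return "allow"     # public swimwear setting
        if "bedroom" in labels and "romantic_music" in labels:
            return "escalate"  # suggestive framing: send to a human reviewer
    if "smoking" in labels or "excessive_tattoos" in labels:
        return "block"         # banned outright on the platform
    return "allow"

print(verdict({"bikini", "swimming_pool"}))              # allow
print(verdict({"bikini", "bedroom", "romantic_music"}))  # escalate
```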


China closely patrols online activity and censors content critical of the ruling Communist Party and politically sensitive terms such as the Dalai Lama, Tiananmen crackdown and Falun Gong. Beijing justifies the “Great Firewall”, as the system of censorship and access control is dubbed, as the right of every country to control its domestic internet space – a concept it calls “cyber sovereignty”.


Former US president Bill Clinton was wrong when he likened China’s attempts to control its domestic internet to nailing Jell-O to the wall, as the country has created tools of social control that “our parents couldn’t have dreamed of”, according to Kyle Langvardt, an associate law professor at the University of Detroit Mercy whose research focuses on the First Amendment to the US Constitution and related issues.

“Overall, of course, I find China’s censorship policies extremely disturbing – and that’s particularly true when it comes to the government’s efforts to stifle controversy and political dissent. So the downside, from my perspective, is overwhelming,” Langvardt said. “There are upsides, however. If content moderation helps to prevent real-world violence incited by viral content, for example (see Myanmar, Sri Lanka), that’s important.”

Globally, governments are increasingly defining the boundaries of acceptable online discourse in order to curb the ability of hate groups, conspiracy theorists and propagandists to use social media and other internet platforms to spread lies and incite violence.

Facebook admitted last November that it did not do enough to prevent its platform from being used to “foment division and incite offline violence” in Myanmar against the Muslim Rohingya minority. That came after the United Nations described the events surrounding the mass exodus of more than 700,000 Rohingya people from Myanmar as a “textbook example of ethnic cleansing.”

Facebook was again heavily criticised last month after a gunman streamed his attacks on a mosque in Christchurch, New Zealand, that left 50 people dead. Though the account was quickly shut down, a video of the massacre circulated widely online.

Australia swiftly passed legislation threatening huge fines and prison time for executives of social media companies that fail to quickly remove violent posts. Singapore is debating legislation to tackle “fake news”. The UK on Monday proposed new rules to hold senior managers of technology companies personally liable for failing to address online harm.

On March 31, Facebook founder Mark Zuckerberg published an open letter inviting governments and regulators to play “a more active role” in deciding what is harmful content, to help ensure election integrity, privacy and data portability.

Back in Changsha at the offices of Inke, the content safety team is midway through the day shift. It is very quiet on the spacious floor, with most of the reviewers wearing headphones and staring at their screens.

The team is the biggest at Inke, accounting for about 60 per cent of its workforce. The content moderators work to detailed regulations on what is allowed and what has to be removed. Based on guidelines published by the China Association of Performing Arts, the training manual is updated weekly to incorporate the latest cases, making it a living document of what China deems objectionable content. The company also works with experts who advise it on where the legal boundaries lie on tricky subjects.


Inke declined to show the training manual to the Post, but agreed to explain the principles behind how the content is handled.

The highest-risk content includes politically sensitive speech, sexual acts, violence, terrorism and self-harm, according to Zhi Heng. Depending on the severity of the infraction, the content moderators can issue a warning, block or blacklist the account.

Moderators are helped by the fact that live streams are not, in reality, truly “live”, but come with a built-in 10- to 15-second lag. It is in this narrow window that the moderator has to decide whether to allow questionable content to be broadcast.
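Conceptually, this works like a fixed-length buffer sitting between ingest and broadcast: frames are released only once they have aged past the window, and a block verdict discards whatever is still held. A simplified sketch, assuming a 12-second window within the 10- to 15-second range the article describes:

```python
# Simplified sketch of a moderation delay buffer: frames are held for a fixed
# window before broadcast, and a moderator's block decision inside that window
# prevents the held frames from ever going out. Timings are illustrative.
import time
from collections import deque

DELAY_SECONDS = 12  # assumed; the platform's lag is 10 to 15 seconds

class DelayedBroadcast:
    def __init__(self) -> None:
        self.buffer: deque[tuple[float, bytes]] = deque()
        self.blocked = False

    def ingest(self, frame: bytes) -> None:
        """Incoming frame from the streamer, stamped with arrival time."""
        self.buffer.append((time.monotonic(), frame))

    def block(self) -> None:
        """Moderator verdict: drop everything still inside the window."""
        self.blocked = True
        self.buffer.clear()

    def release_due_frames(self) -> list[bytes]:
        """Broadcast only frames that have aged past the delay window."""
        if self.blocked:
            return []
        now = time.monotonic()
        out = []
        while self.buffer and now - self.buffer[0][0] >= DELAY_SECONDS:
            out.append(self.buffer.popleft()[1])
        return out
```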

Zhi Heng, who worked in quality control at a power plant before his present job, has other tools at his disposal.


When residents began gathering to protest against a local government plan to build a refuse incineration plant, Inke pinpointed the site and used positioning software to block all streaming within a 10-kilometre radius.
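Mechanically, such a geofence is a distance check on each stream’s reported coordinates against the blocked site. A rough sketch using the standard haversine formula, with placeholder coordinates; only the 10-kilometre radius comes from the article:

```python
# Rough sketch of geofenced stream blocking: reject any stream whose reported
# GPS coordinates fall within a radius of a blocked site. The haversine
# formula is standard; the site coordinates below are placeholders.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

BLOCKED_SITE = (28.23, 112.94)  # placeholder coordinates (Changsha area)
BLOCK_RADIUS_KM = 10.0          # radius cited in the article

def stream_allowed(lat: float, lon: float) -> bool:
    return haversine_km(lat, lon, *BLOCKED_SITE) > BLOCK_RADIUS_KM
```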

Penalties can be swift and severe for non-compliance by the platforms.

Bytedance chief executive Zhang Yiming had to issue a public apology in April 2018 after the company was ordered by the central government to close its popular Neihan Duanzi app for “vulgar content”. The company’s Jinri Toutiao news aggregation app was also ordered to be taken down from various app stores for three weeks.

Bytedance pledged to expand its content vetting team from 6,000 to 10,000 staff, and to permanently ban creators whose content was “against community values”.

Despite the boom in demand for content moderators, the job itself is mind-numbing and does not pay very well. It is characterised by long hours of enduring bad singing, bad jokes and boring monologues.

Of the 1,200 staff members in Zhi Heng’s department, about 200 are full-time employees while the rest are on contract. Starting pay is 3,000 yuan a month, which works out to roughly US$3 an hour, compared with the US$15 minimum wage in New York.

Many new hires quit before completing the mandatory one-month boot camp for recruits. Many more leave within six months. The turnover rate for the content moderator team stood at 10 per cent last year, according to Zhi Heng. That still compares favourably with a 20 per cent overall rate for China’s labour market, according to a survey by online recruitment platform 51job.com.

Inke listed on the Hong Kong stock exchange in July 2018. The company made a profit of 1.1 billion yuan (US$164 million) last year.

Some applicants boast at the job interview that they can handle the graveyard shift from midnight to 8am because they often pull all-nighters playing video games. They soon find out that the job is not at all like that.

“There is only a tiny portion of content that is good, the majority of the rest is below mediocre,” Zhi Heng said. “When you watch this for too long, it could make you question the meaning of your life.”

Asked what the job meant to him, Zhi Heng pondered the question for a long while before replying wistfully that he had asked himself that many times.

“We are like public janitors,” he said. “Except what we clean are neither streets nor residential communities, but cyberspace.”