China wants its AI to only say nice things about the Communist Party. Good luck with that.
- AI systems must reflect China's "socialist core values," according to new rules reported by The New York Times.
- Proposed regulations could make it harder for Alibaba, Baidu and other Chinese tech companies to chase OpenAI.
China's ruling Communist Party just waded into the technological quagmire of generative AI with a proposed set of restrictive rules, The New York Times reported on Monday.
The rules, from the Cyberspace Administration of China, cover generative AI, a new type of technology that powers ChatGPT and other nascent products that have taken the Western world by storm in recent months.
Companies must heed the Chinese Communist Party's censorship rules, which forbid discussion of certain sensitive history and ban any criticism of the country's leaders. According to the proposed restrictions, content generated by these AI models must reflect "socialist core values" and avoid information that undermines "state power" or national unity, The New York Times reported.
Companies should also ensure their chatbots create text and images that are truthful and respect intellectual property, and will be required to register their algorithms with regulators, the newspaper added.
The draft rules could make it difficult for Chinese tech companies to catch up with US rivals, such as Microsoft, OpenAI, Google, Facebook, and Anthropic, which have taken an early lead in the generative AI race.
Tencent, Bytedance, Baidu, Alibaba, Sensetime, and other big Chinese tech companies have the technical prowess to develop their own generative AI models. But restrictions on what these models can say will likely slow development and limit how widely the technology can be distributed in the country.
And then there's the issue that AI tends to "hallucinate." Large language models, such as the one behind ChatGPT, can make up false information to satisfy user requests, and academics have warned about the potential for misinformation from these platforms. For example, when Insider's Samantha Delouya asked ChatGPT to write a news story while testing it, the chatbot spat out fake quotes from auto industry CEO Carlos Tavares.
AI researchers often don't know why these models generate certain content. Generative image models, similarly, can quickly produce pictures of famous people doing things they never did in reality. One famous example, which the internet dubbed "Balenciaga Pope," showed Pope Francis wearing a large puffer jacket, tricking some viewers into thinking the image was a real photograph.
Unpredictable technology like this could be a nightmare for China's leaders, who try to tightly control the political narrative.
ChatGPT, from the San Francisco-based company OpenAI, was never officially available in China — Google and Facebook's products are mostly banned in China as well, although Apple operates in the country after reportedly making concessions.
But ChatGPT knockoffs quickly cropped up on Chinese social media in recent months, leading the Chinese government to reportedly order websites to remove mentions of ChatGPT and any third-party workarounds to the AI service.