
America's tech battle with China is about to get ugly

Linette Lopez   

For technology to change the global balance of power, it needn't be new. It must simply be known.

Since 2017, the Chinese Communist Party has laid out careful plans to eventually dominate the creation, application, and dissemination of generative artificial intelligence — programs that use massive datasets to train themselves to recognize patterns so quickly that they appear to produce knowledge from nowhere. According to the CCP's plan, by 2020, China was supposed to have "achieved iconic advances in AI models and methods, core devices, high-end equipment, and foundational software." But the release of OpenAI's ChatGPT in fall 2022 caught Beijing flat-footed. ChatGPT's viral launch made clear that US companies were, at least for the moment, leading the AI race, and it thrust a great-power competition that had been conducted in private into the open for all the world to see.

There is no guarantee that America's AI lead will last forever. China's national tech champions have joined the fray and managed to twist a technology that feeds on freewheeling information to fit neatly into China's constrained information bubble. Censorship requirements may slow China's AI development and limit the commercialization of domestic models, but they will not stop Beijing from benefiting from AI where it sees fit. China's leader, Xi Jinping, sees technology as the key to shaking his country out of its economic malaise. And even if China doesn't beat the US in the AI race, there's still great power, and likely danger, in it taking second place.

"There's so much we can do with this technology. Beijing's just not encouraging consumer-facing interactions," Reva Goujon, a director for client engagement on the consulting firm Rhodium Group's China advisory team, said. "Real innovation is happening in China. We're not seeing a huge gap between the models Chinese companies have been able to roll out. It's not like all these tech innovators have disappeared. They're just channeling applications to hard science."

In its internal documents, the CCP says that it will use AI to shape reality and tighten its grip on power within its borders — for political repression, surveillance, and monitoring dissent. We know that the party will also use AI to drive breakthroughs in industrial engineering, biotechnology, and other fields the CCP considers productive. In some of these use cases, it has already seen success. So even if it lags behind US tech by a few years, it can still have a powerful geopolitical impact. There are many like-minded leaders who also want to use the tools of the future to cement their authority in the present and distort the past. Beijing will be more than happy to facilitate that for them. China's vision for the future of AI is closed-source, tightly controlled, and available for export all around the world.


In the world of modern AI, the technology is only as good as what it eats. ChatGPT and other large language models gorge on scores of web pages, news articles, and books. Sometimes this information gives the LLMs food poisoning — anyone who has played with a chatbot knows they sometimes hallucinate or tell lies. Given the size of the tech's appetite, figuring out what went wrong is much more complex than narrowing down the exact ingredient in your dinner that had you hugging your toilet at 2 a.m. AI datasets are so vast, and the calculations so fast, that the companies controlling the models do not know why they spit out bad results, and they may never know. In a society like China — where information is tightly controlled — this inability to understand the guts of the models poses an existential problem for the CCP's grip on power: A chatbot could tell an uncomfortable truth, and no one will know why. The likelihood of that happening depends on the data the model is trained on. To prevent this, Beijing is feeding AI with information that encourages positive "social construction."

China's State Council wrote in its 2017 Next Generation Artificial Intelligence Development Plan that AI would be able to "grasp group cognition and psychological changes in a timely manner," which, in turn, means the tech could "significantly elevate the capability and level of social governance, playing an irreplaceable role in effectively maintaining social stability." That is to say, the CCP believes that, if built to the correct specifications, AI can be a tool to fortify its power. That is why this month, the Cyberspace Administration of China, the country's AI regulator, launched a chatbot trained entirely on Xi's political and economic philosophy, "Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era" (snappy name, I know). Perhaps it goes without saying that ChatGPT is not available for use in China or Hong Kong.

For the CCP, finding a new means of mass surveillance and information domination couldn't come at a better time. Consider the Chinese economy. Wall Street, Washington, Brussels, and Berlin have accepted that the model that helped China grow into the world's second-largest economy has been worn out and that Beijing has yet to find anything to replace it. Building out infrastructure and industrial capacity no longer provides the same bang for the CCP's buck. The world is pushing back against China's exports, and the CCP's attempts to drive growth through domestic consumption have gone pretty much nowhere. The property market is distorted beyond recognition, growth has plateaued, and deflation is lingering like a troubled ghost. According to Freedom House, a human-rights monitor, Chinese people demonstrated against government policies in record numbers during the fourth quarter of 2023. The organization logged 952 dissent events, a 50% increase from the previous quarter. Seventy-eight percent of the demonstrations involved economic issues, such as housing or labor. If there's a better way to control people, Xi needs it now.

Ask the Cyberspace Administration of China's chatbot about these economic stumbles, and you'll just get a lecture on the difference between "traditional productive forces" and "new productive forces" — buzzwords the CCP uses to blunt the trauma of China's diminished economic prospects. In fact, if you ask any chatbot operating in the country, it will tell you that Taiwan is a part of China (a controversial topic outside the country, to say the least). All chatbots collect information on the people who use them and the questions they ask. The CCP's elites will be able to use the information these bots gather, and the narratives they spread, to their political and economic advantage — but the government doesn't plan to share that power with regular Chinese people. What the party sees will not be what the people see.

"The Chinese have great access to information around the world," Kenneth DeWoskin, a professor emeritus at the University of Michigan and senior China advisor to Deloitte, told me. "But it's always been a two-tiered information system. It has been for 2,000 years."

To ensure this, the CCP has constructed a system to regulate AI that is both flexible enough to evaluate large language models as they are created and draconian enough to control their outputs. Any AI disseminated for public consumption must be registered and approved by the CAC. Registration involves telling the administration things like which datasets the AI was trained on and what tests were run on it. The point is to set up controls that embrace some aspects of AI, while — at least ideally — giving the CCP final approval on what it can and cannot create.

"The real challenge of LLMs is that they are really the synthesis of two things," Matt Sheehan, a researcher and fellow at the Carnegie Endowment for International Peace, told me. "They might be at the forefront of productivity growth, but they're also fundamentally a content-based system, taking content and spitting out content. And that's something the CCP considers frivolous."

In the past few years, the party has shown that it can be ruthless in cutting out technology it considers "frivolous" or harmful to social cohesion. In 2021, it barred anyone under 18 from playing video games on weekdays, paused the approval of new games for eight months, and then in 2023 announced rules to reduce the public's spending on video games.

But AI is not simply entertainment — it's part of the future of computation. The CCP cannot deny the virality of what OpenAI's chatbot was able to achieve, its power in the US-China tech competition, or the potential for LLMs to boost economic growth and political power through lightning-speed information synthesis.

Ultimately, as Sheehan put it, the question is: "Can they sort of lobotomize AI and LLMs to make the information part a nonfactor?"

Unclear, but they're sure as hell going to try.


For the CCP to actually have a powerful AI to control, the country needs to develop models that suit its purpose — and it's clear that China's tech giants are playing catch-up.

The search giant Baidu claims that its chatbot, Ernie Bot — which was released to the public in August 2023 — has 200 million users and 85,000 enterprise clients. To put that in perspective, OpenAI generated 1.86 billion visits in March alone. There's also the Kimi chatbot from Moonshot AI, a startup backed by Alibaba that launched in October. But both Ernie Bot and Kimi were only recently overshadowed by ByteDance's Doubao bot, which also launched in August. According to Bloomberg, it's now the most downloaded bot in the country, and it's obvious why — Doubao is cheaper than its competitors.

"The generative-AI industry is still in its early stages in China," Paul Triolo, a partner for China and technology policy at the consultancy Albright Stonebridge Group, said. "So you have this cycle where you invest in infrastructure, train, and tweak models, get feedback, then you make an app that makes money. Chinese companies are now in the training and tweaking models phase."

The question is which of these companies will actually make it to the moneymaking phase. The current price war is a race to the bottom, similar to what we've seen in the Chinese technology space before. Take the race to make electric vehicles: The Chinese government started by handing out cash to any company that could produce a design — and I mean any. It was a money orgy. Some of these cars never made it out of the blueprint stage. But slowly, the government stopped subsidizing design, then production. Then instead, it started to support the end consumer. Companies that couldn't actually make a car at a price point that consumers were willing to pay started dropping like flies. Eventually, a few companies started dominating the space, and now the Chinese EV industry is a manufacturing juggernaut.

Similar top-down strategies, like China's plan to advance semiconductor production, haven't been nearly as successful. Historically, DeWoskin told me, party-issued production mandates have "good and bad effects." They have the ability to get universities and the private sector in on what the state wants to do, but sometimes these actors move slower than the market. Up until 2022, everyone in the AI competition was most concerned about the size of models, but the sector is now moving toward innovation in the effectiveness of data training and generative capacity. In other words, sometimes the CCP isn't skating to where the puck's going to be but to where it is.

There are also signs that the definition of success is changing to include models with very specific purposes. OpenAI CEO Sam Altman said in a recent interview with the Brookings Institution that, for now, the models in most need of regulatory oversight are the largest ones. "But," he added, "I think progress may surprise us, and you can imagine smaller models that can do impactful things." A targeted model can have a specific business use case. After spending decades analyzing how the CCP molds the Chinese economy, DeWoskin told me that he could envision a world where some of those targeted models were available to domestic companies operating in China but not to their foreign rivals. After all, Beijing has never been shy about using a home-field advantage. Just ask Elon Musk.


To win the competition to build the most powerful AI in the world, China must combat not only the US but also its own instincts when it comes to technological innovation. A race to the bottom may simply beggar China's AI ecosystem. A rush to catch up to where the US already is — amid investor and government pressure to make money as soon as possible — may keep China's companies off the frontier of this tech.

"My base case for the way this goes forward is that maybe two Chinese entities push the frontier, and they get all the government support," Sheehan said. "But they're also burdened with dealing with the CCP and a little slower-moving."

This isn't to say we have nothing to learn from the way China is handling AI. Beijing has already set regulations for things like deepfakes and labeling around authenticity. Most importantly, China's system holds people accountable for what AI does — people make the technology, and people should have to answer for it. The speed of AI's development demands a dynamic, consistent regulatory system, and while China's checks go too far, the current US regulatory framework lacks systemization. The Commerce Department announced an initiative last month around testing models for safety, and that's a good start, but it's not nearly enough.

If China has taught us anything about technology, it's that it doesn't have to make society freer — it's all about the will of the people who wield it. The Xi Jinping Thought chatbot is a warning. If China can make one for itself, it can use that base model to craft similar systems for authoritarians who want to limit the information scape in their societies. Already, some Chinese AI companies — like iFlytek, a state-owned voice-recognition firm — have been hit with US sanctions, in part, for using their technology to spy on the Uyghur population in Xinjiang. For some governments, it won't matter if tech this useful is two or three generations behind a US counterpart. As for the chatbots, the models won't contain the sum total of human knowledge, but they will serve their purpose: The content will be censored, and the checks back to the CCP will clear.

That is the danger of the AI race. Maybe China won't draw from the massive, multifaceted AI datasets that the West will — its strict limits on what can go into and come out of these models will prevent that. Maybe China won't be pushing the cutting edge of what AI can achieve. But that doesn't mean Beijing can't foster the creation of specific models that could lead to advancements in fields like hard sciences and engineering. It can then control who gets access to those advancements within its borders, not just people but also multinational corporations. It can sell tools of control, surveillance, and content generation to regimes that wish to dominate their societies and are antagonistic to the US and its allies.

This is an inflection point in the global information war. If social media harmfully siloed people into alternate universes, the Xi bot has demonstrated that AI can do that on steroids. The digital curtain AI can build in our imaginations will be much more impenetrable than iron, making it impossible for societies to cooperate in a shared future. Beijing is well aware of this, and it's already harnessing that power domestically. Why not geopolitically? We need to think about all the ways Beijing can profit from AI now, before its machines are turned on the world. Stability and reality depend on it.


Linette Lopez is a senior correspondent at Business Insider.
