
Stephen Roach thinks China is poorly positioned to capitalise on recent advances with large language AI models


In his now-classic 2018 book, AI Superpowers, Kai-Fu Lee threw down the gauntlet in arguing that China poses a growing technological threat to the United States. When Lee gave a guest lecture to my “Next China” class at Yale in late 2019, my students were enthralled by his provocative case: America was about to lose its first-mover advantage in discovery (the expertise of AI’s algorithms) to China’s advantage in implementation (big-data-driven applications).

Alas, Lee left out a key development: the rise of large language models and generative artificial intelligence. While he did allude to a more generic form of general-purpose technology, which he traced back to the Industrial Revolution, he didn’t come close to capturing the ChatGPT frenzy that has now engulfed the AI debate. Lee’s arguments, while making vague references to “deep learning” and neural networks, hinged far more on AI’s potential to replace human-performed tasks than on the possibilities for an “artificial general intelligence” that approaches human thinking. This is hardly a trivial consideration when it comes to China’s future as an AI superpower.

That’s because Chinese censorship inserts a big “if” into that future. In a recent essay, Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher – whose 2021 book hinted at the potential of general-purpose AI – make a strong case for believing we are now on the cusp of a ChatGPT-enabled intellectual revolution. Not only do they address the moral and philosophical challenges posed by large language generative models; they also raise important practical questions about implementation that bear directly on the scale of the body of knowledge embedded in the language that is being processed.

It is precisely here that China’s strict censorship regime raises alarms. While there is a long and rich history of censorship in both the East and the West, the Communist Party of China’s Propaganda (or Publicity) Department stands out in its efforts to control all aspects of expression in Chinese society – newspapers, film, literature, media, and education – and steer the culture and values that shape public debate.

Unlike the West, where anything goes on the web, China’s censors insist on strict political guidelines for CPC-conforming information dissemination. Chinese netizens are unable to pull up references to the decade-long Cultural Revolution, the June 1989 tragedy in Tiananmen Square, human-rights issues in Tibet and Xinjiang, frictions with Taiwan, the Hong Kong democracy demonstrations of 2019, pushback against zero-COVID policies, and much else.

This aggressive editing of information is a major pitfall for a ChatGPT with Chinese characteristics. By wiping the historical slate clean of important events and the human experiences associated with them, China’s censorship regime has narrowed and distorted the body of information that will be used to train large language models by machine learning. It follows that China’s ability to benefit from an AI intellectual revolution will suffer as a result.

Of course, it is impossible to quantify the impact of censorship with any precision. Freedom House’s annual Freedom on the Net survey provides a qualitative assessment. For 2022, it awards China the lowest overall “Internet Freedom Score” in a 70-country sample.

This metric is derived from answers to 21 questions (and nearly 100 sub-questions) that are organized into three broad categories: obstacles to access, violations of user rights, and limits on content. The content sub-category – reflecting filtering and blocking of websites, legal restrictions on content, the vibrancy and diversity of the online information domain, and the use of digital tools for civic mobilization – is the closest approximation to measuring the impact of censorship on the scale of searchable information. China’s score on this count was two out of 35 points, compared to an average score of 20.

Looking ahead, we can expect more of the same. Already, the Chinese government has been quick to issue new draft rules on chatbots. On April 11, the Cyberspace Administration of China (CAC) decreed that generative AI content must “embody core socialist values and must not contain any content that subverts state power, advocates the overthrow of the socialist system, incites splitting the country or undermines national unity.”

This underscores a vital distinction between the pre-existing censorship regime and new efforts at AI oversight. Whereas the former uses keyword filtering to block unacceptable information, the latter (as pointed out in a recent DigiChina forum) relies on a Whac-a-Mole approach to containing the rapidly changing generative processing of such information. This implies that the harder the CAC tries to control ChatGPT content, the smaller the resulting output of chatbot-generated Chinese intelligence will be – yet another constraint on the AI intellectual revolution in China.
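The distinction can be made concrete with a toy example. What follows is a minimal, purely illustrative Python sketch, not a description of any real censorship system: the blocklist terms and sentences are hypothetical. It shows why a static keyword filter catches fixed text but struggles against generative output that simply paraphrases around the blocked terms, the Whac-a-Mole problem described above.

# Illustrative sketch only: the blocklist terms and example sentences below are
# hypothetical and do not describe any real censorship system.
BLOCKLIST = {"forbidden event", "banned topic"}

def keyword_filter(text: str) -> bool:
    """Return True if the text contains any blocked keyword (static filtering)."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

# A fixed web page containing the blocked phrase is easy to catch...
static_page = "This page discusses the forbidden event in detail."
# ...but a generative model can restate the same idea in endless paraphrases
# that contain none of the blocked strings, forcing reactive, after-the-fact
# moderation of each new output.
paraphrased_output = "This text alludes to the incident that officials never name."

print(keyword_filter(static_page))         # True: caught by the blocklist
print(keyword_filter(paraphrased_output))  # False: slips past the same filter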

Unsurprisingly, the early returns on China’s generative-AI efforts have been disappointing. Baidu’s Wenxin Yiyan, or “Ernie Bot” – China’s best known first-mover large language model – was recently criticized in Wired for attempting to operate in “a firewalled Internet ruled by government censorship.” Similar disappointing results have been reported for other AI language processing models in China, including Robot, Lily, and Alibaba’s Tongyi Qianwen (roughly translated as “truth from a thousand questions”).

Moreover, a recent assessment by NewsGuard – an “internet trust tool” established and maintained by a large team of respected Western journalists – found that OpenAI’s ChatGPT-3.5 generated far more false, or “hallucinated,” information in Chinese than it did in English.

The literary scholar Jing Tsu’s remarkable book Kingdom of Characters: The Language Revolution That Made China Modern underscores the critical role that language has played in China’s evolution since 1900. In the end, language is nothing more than a medium of information, and in her final chapter, Tsu seizes on that point to argue that “Whoever controls information controls the world.”

In the age of AI, that conclusion raises profound questions for China. Information is the raw fuel of large language AI models. But state censorship encumbers China with small language models. This distinction could well bear critically on the battle for information control and global power.


*Stephen S. Roach, a former chairman of Morgan Stanley Asia, is a faculty member at Yale University and the author of the forthcoming Accidental Conflict: America, China, and the Clash of False Narratives (Yale University Press, November 2022). Copyright: Project Syndicate, 2023, published here with permission.



7 Comments

Sat through a 1-hour webinar yesterday with a consumer data company showcasing its new AI platform. The platform has all kinds of APIs for Google, etc., lets the user select how much they want to spend on a media budget, and tells them where to allocate their spend. It has a 'predict' button that estimates what the outcomes are going to be. My thoughts were the following:

- The platform has nothing to do with generative AI. It's slick and easy to use, but it doesn't give me any confidence level or an appropriate range; it gives me an absolute number. This is only useful for commercial people with no ability to think for themselves.

- The same company still relies on survey data for a lot of its solutions. However, one of its clients believes that the data collection is BS and has commissioned another small agency to collect the data. 

How can you trust a black-box AI platform when you cannot rely on the same provider to conduct basic survey research professionally and accurately?   


I think one of the problems with the rise of AI will be the decline of humans' ability to think critically, a bit like calculators have reduced our abilities in mental arithmetic. Not being able to multiply in your head isn't the end of the world, but not being aware that the AI isn't giving you the information you think it is might be.


This week someone told me that the ex-CEO of Japan Airlines had been recruited for a role at a private airline called Bamboo. Using Brave, the summarizer (essentially generative AI) told me that the CEO had resigned from Bamboo.

So I'm none the wiser.  


Wrong. You are just as wise as ever, and you now know more. You have seen firsthand how an unedited chatbot can work!


I think one of the problems with the rise of AI will be the decline of humans' ability to think critically.

Sadly this progression seems to have already started at Western universities with no AI involvement. If those attending are restricted in what they are allowed to say, our ability to think critically is lessened, which is where the free speech vs hate speech debate finds itself at present. While it is valid to discuss and debate many contentious issues, we should not be legislatively restricting views on them.


Oh, they are thinking about it all right; they just know they are not allowed to say what they think. Until they walk into the voting booth. I don't think this election is going to be anywhere near as close as we are being led to believe. New TV1 poll out tonight.


I think the Chinese will do fine using AI to assess images, make connections, analyse millions of hours of voice (cell calls, etc.) in real time, and chat; i.e., they will be able to use AI to complement their human censors...

Same with clearly defined process engineering...

China does not allow free Google search, so why would they ever allow ChatGPT on the political landscape?

The CCP is a temporary blip in Chinese history, less than a human lifetime... it may not last much longer, and it surely will fail. Not many left now, are there? North Korea, China... perhaps Russia and a few ex-Soviet nations.
