A.I. could help democracy – and threaten China’s digital authoritarianism

China’s ruling Communist Party managed to stifle the democratizing impact of the Internet, but can it curb the liberating potential of AI?

Five months after ChatGPT set off an investment frenzy over artificial intelligence, Beijing is moving to rein in China’s chatbots, a show of the government’s resolve to keep tight regulatory control over technology that could define an era, The New York Times reports:

The current A.I. wave presents new risks for the Communist Party, said Matt Sheehan, an expert on Chinese A.I. and a fellow at the Carnegie Endowment for International Peace. The unpredictability of chatbots, which will make statements that are nonsensical or false — what A.I. researchers call hallucination — runs counter to the party’s obsession with managing what is said online, Mr. Sheehan said. 

“Generative artificial intelligence puts into tension two of the top goals of the party: the control of information and leadership in artificial intelligence,” he added.

In China, the U.S. bot and the artificial intelligence that makes it work represent a threat to the country’s political system and global ambitions. This is because chatbots such as ChatGPT revel in information—something the Chinese state insists on controlling, analyst Michael Schuman adds. Chatbots are also potentially more difficult to censor than earlier forms of digital media, he writes for The Atlantic:

Chatbot models will analyze, collate, and connect data in unexpected and surprising ways. “The best analogy would be to how a human learns,” Jeffrey Ding, a political scientist at George Washington University who studies Chinese technology, explained to me. “Even if you are learning things from only a censored set of books, the interactions between all those different books you are reading might produce either flawed information or politically sensitive information.”

Information and ideology are closely intertwined in China’s autocratic model of governance, according to a recent book.

Jeremy Wallace’s Seeking Truth and Hiding Facts: Information, Ideology, and Authoritarianism in China notes that from 1976, when Mao Zedong died, until 2012, when Chinese President Xi Jinping took power, Beijing promoted economic growth by rewarding local leaders for their performance mainly on three metrics: GDP, fiscal revenue, and investment, former NED board member Andrew Nathan writes for Foreign Affairs:

This strategy worked to goose the economy (even though the data were commonly exaggerated), but it also led to a surge in undesirable factors that the government did not weigh heavily in personnel evaluations, such as corruption, pollution, local government debt, and income inequality. Xi has tried to rein in these negative externalities by imposing additional performance measures on local cadres. The more problems the Chinese Communist Party has faced, the more numbers it has collected, and the more untrustworthy statistics it has introduced.

“The Chinese government is very torn” on chatbots, Carnegie’s Sheehan told Schuman. “Ideological control, information control, is one of, if not the, top priority of the Chinese government. But they’ve also made leadership in AI and other emerging technologies a top priority.” Chatbots, he said, are “where these two things start to come into conflict.”

It is imperative to ensure that “democracies… lead the norms and standards around AI,” said Dr. Jason Matheny, president and CEO of the RAND Corporation and a commissioner of the National Security Commission on Artificial Intelligence, in testimony to a subcommittee of the Senate Armed Services Committee.

“I think it would be very difficult to broker an international agreement to hit ‘pause’ on AI development in a way that would actually be verifiable,” he said. “I think that would be close to impossible.”

To build assistive A.I. for democracy, we would need to capture human feedback for specific democratic use cases, such as moderating a polarized policy discussion, explaining the nuance of a legal proposal, or articulating one’s perspective within a larger debate, argue analysts Bruce Schneier, Henry Farrell and Nathan E. Sanders. This gives us a path to “align” large language models (LLMs) with our democratic values, they write for Slate: by having models generate answers to questions, make mistakes, and learn from the responses of human users, without having those mistakes damage users and the public arena.

A recent report from the National Endowment for Democracy’s International Forum discusses how to establish democratically accountable rules and norms that harness the benefits of artificial intelligence-related technologies without infringing on fundamental rights or creating technological affordances that could facilitate the authoritarian concentration of power.


The idea that China may act as the lead guide when it comes to AI ethics ought to terrify Western governments, The Economist adds:

China’s experience with the internet is informative. It has steadfastly opposed the notion of the web as a place of freedom and openness. When governments gather to discuss online regulation, China consistently sides with Russia and other tramplers of free speech. Mr Clinton was naive to think the Communist Party could not pound the internet into submission. It would be naive for Western leaders to think it cannot do the same with AI.

China’s AI development deeply concerns the United States, where some view the two countries’ competition as a contest between democracy and authoritarianism, the Harvard International Review’s Kate Bigley observes. Artificial intelligence can greatly bolster economic and military power and, thereby, political ascendancy.

Elements of China’s authoritarian state make its AI more susceptible to uses that the West views as serious violations of international law, she adds. For example, China’s limited data protections give the state access to vast amounts of data. AI-driven data collection could greatly advance China’s nascent social credit system, a draft law for which was published in November 2022 but which has yet to be implemented at scale.
