How to stop China exporting AI-driven digital authoritarianism


A United States senator is pushing to ban countries including China from an influential US government accuracy test of facial recognition technology, potentially opening up a new front in the escalating tech war between Washington and Beijing, The South China Morning Post reports:

Democratic Senator Brian Schatz of Hawaii has proposed the “End Support of Digital Authoritarianism Act” to bar companies from China, North Korea, Russia, Iran and other countries that consistently violate “internationally recognised human rights” from the Face Recognition Vendor Test (FRVT), which is widely considered the gold standard for determining the reliability of facial recognition software. The results of the FRVT are regularly cited by firms as a measure of their credibility, and are referred to by businesses and policymakers when buying facial recognition technology.

The bill reflected bipartisan concern in Washington “about the spread of such technologies around the world, which further promotes authoritarian rule and weakens protections for the privacy and civil rights of people around the world,” said Timothy Heath, a researcher at the Rand Corporation.

Artificial intelligence, like any technology, is politically neutral: it can be a tool for promoting democracy or a means by which dictators suppress people. But the spread of China’s AI-powered system of public surveillance does more than threaten human rights in many parts of the world; it could create serious geopolitical challenges for those countries that remain committed to the liberal order, Nikkei Asian Review analyst Hiroyuki Akita writes.

Some economists have warned the expansion of artificial intelligence could have a significant impact on society – including the loss of jobs due to automation – in what is sometimes called the “fourth industrial revolution”. Academics have also raised concerns about the potential for malicious use in cyber warfare and the subverting of democracy, the BBC adds.

AI systems are likely to undercut fundamental premises and infrastructures of modern life, potentially jeopardizing the physical safety of people, the integrity of democratic governance and culture, and expectations of economic opportunity, privacy and fairness, among other social values, according to a new report.

Aspen Institute

There are highly attractive breakthroughs that AI could deliver to humankind in terms of healthcare, scientific research and discovery, productivity, business innovation and wealth-creation, David Bollier writes in Artificial Intelligence and The Good Society: The Search for New Metrics, Governance and Philosophical Perspective. But there are also likely to be many complicated negative impacts—on employment, social inequality, democratic processes and possibly national security.

The report recommends “adopting new consensus metrics to assess AI and … establishing new governance mechanisms that can provide a greater measure of public accountability over the design and uses of the technologies.”

“The challenge amounts to something of a koan, however: Can a technology that is inherently disruptive be made socially responsive, too?”

There will be no universal solution—AI itself is too diverse and rapidly evolving—but clearly new modes of anticipating and controlling the unintended and/or catastrophic dimensions of AI are needed, Bollier adds in the Report on the Third Annual Aspen Institute Roundtable on Artificial Intelligence.


AI has the potential to empower dictatorships, the report notes, citing Yuval Noah Harari’s Atlantic article, “Why Technology Favors Tyranny.”

“We tend to think about the conflict between democracy and dictatorship as a conflict between two different ethical systems, but it is actually a conflict between two different data-processing systems,” he observed:

  • Democracy distributes the power to process information and make decisions among many people and institutions, whereas dictatorship concentrates information and power in one place. Given 20th century technology, it was inefficient to concentrate too much information and power in one place….
  • However, artificial intelligence may soon swing the pendulum in the opposite direction. AI makes it possible to process enormous amounts of information centrally. In fact, it might make centralized systems far more efficient than diffuse systems, because machine learning works better when the machine has more information to analyze.

Russia and China are already laying the groundwork for a digital authoritarian future, according to Michael Shoebridge, director of the defence and strategy program at the Australian Strategic Policy Institute.

Both Xi and Putin have made public statements about the national power—strategic and economic—that will come to the states that dominate key future technologies, especially artificial intelligence and communications technologies. For example, from Putin: ‘If someone can have a monopoly in the field of artificial intelligence, then the consequences are clear to all of us—they will rule the world’. 

The overall message from Xi and Putin is clear: digital authoritarianism depends on partnering with firms that enable it—and Huawei fits the bill for the Russian and Chinese states, he writes for The Strategist.

Democracies have been failing to counter China’s push to dominate cyberspace and AI, some observers suggest.

It is imperative that artificial intelligence evolve in ways that respect human rights, say analysts Eileen Donahoe and Megan MacDuffee Metzger. Happily, standards found in landmark UN documents can help with the task of making AI serve rather than subjugate human beings, they write for the NED’s Journal of Democracy.

AI-powered systems generate modified content and advance digital forgeries that are harmful to American interests, said Alliance for Securing Democracy Non-Resident Senior Fellow Clint Watts, testifying before the House Permanent Select Committee on Intelligence.

According to a report by Freedom House, a U.S. human rights group, at least 18 countries are building AI-based mass-surveillance systems with China’s help, including Zimbabwe and Uzbekistan. Unfortunately, there is no magic bullet for countering Beijing’s strategy. But there are at least two things the world’s democracies can do in response, Nikkei’s Akita adds:

  • The U.S., Japan and Australia have barred Chinese telecommunications equipment maker Huawei Technologies from their 5G networks, citing security concerns. These governments are concerned that China will use Huawei as a tool for espionage…. The U.S., Japan and Australia should do more to share their knowledge and concerns about the Chinese company with other countries to alert them to the risk. Even authoritarian countries should be worried about the possibility that China may have easy access to their sensitive information.
  • The second thing the U.S., Japan, Australia and the European Union should do is work in tandem to quickly draw up international rules for the use of digital technologies and the internet. The rules should be designed to build a “common digital space” shared by like-minded countries, one that limits government intervention and monitoring.

UN Special Rapporteur on freedom of expression and UC Irvine professor David Kaye’s book, Speech Police: The Global Struggle to Govern the Internet, details how tech company leaders have claimed they want human rights law to govern how their platforms are run, but implementation is a different matter.

Two months ago, an independent High-Level Expert Group on AI, set up by the European Commission, issued ethics guidelines for trustworthy AI, the Goethe-Institut adds. A few weeks later, the OECD, the Paris-based organization of developed countries, issued its own set of AI principles, which include the recommendation that “AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity.”

The abstract nature of this commitment to moral and legal values seems to align with the U.S. government’s February 2019 declaration emphasizing a “flexible, light-touch policy environment to encourage AI innovation.” The question remains whether the European policy approach to AI will likewise align with that of the U.S., or whether the two approaches will diverge in a way that mirrors the initial handling of privacy concerns twenty years ago.

The Goethe-Institut Washington and the Friends of Aspen Germany are pleased to invite you to a discussion on: Ethical Principles and Guidelines for Artificial Intelligence: European and American Perspectives

Welcome Remarks:

Lena Jöhnk, Director of Cultural Programs, North America, Goethe-Institut


Thomas Metzinger, Professor, University of Mainz; Member of the European Commission’s High-Level Expert Group on AI

Anupam Chander, Professor of Law at Georgetown University Law Center and an expert in the global regulation of new technologies 

Moderator: Kim Larsen, Board member of the Friends of Aspen Germany, Washington, DC and a Principal at Bressler, Amery & Ross

June 28, 2019 from 9:00 a.m. until 10:30 a.m.
Registration and Coffee at 8:30 a.m.

Dentons LLP
1900 K St NW
Washington, DC 20006

RSVP by Monday, June 24

