Large-scale political organization could prove impossible in societies watched by pervasive automated surveillance, according to a disturbing, must-read report.
Chinese President Xi Jinping is using artificial intelligence to enhance his government’s totalitarian control—and he’s exporting this technology to regimes around the globe, notes analyst Ross Andersen. The emergence of an AI-powered authoritarian bloc led by China could warp the geopolitics of this century. It could prevent billions of people, across large swaths of the globe, from ever securing any measure of political freedom, he writes for The Atlantic.
Andersen talked through a global scenario that has begun to worry AI ethicists and China-watchers alike with Yi Zeng, a computer scientist and deputy director of the Research Center for Brain-Inspired Intelligence, who in the spring of 2019 published “The Beijing AI Principles,” a manifesto on AI’s potential to interfere with autonomy, dignity, privacy, and a host of other human values:
In this scenario, most AI researchers around the world come to recognize the technology’s risks to humanity, and develop strong norms around its use. All except for one country, which makes the right noises about AI ethics, but only as a cover. Meanwhile, this country builds turnkey national surveillance systems, and sells them to places where democracy is fragile or nonexistent. The world’s autocrats are usually felled by coups or mass protests, both of which require a baseline of political organization. But large-scale political organization could prove impossible in societies watched by pervasive automated surveillance.
Authoritarian and democratic governments alike in at least 25 countries have expanded their use of surveillance technologies in response to the pandemic, including using GPS tracking to enforce compliance, collecting cell phone data from telecom companies to gauge adherence to public health guidelines, and publicly providing what in other times would have been considered confidential information regarding those infected with Covid-19, says a new CSIS analysis.
A new commission on artificial intelligence and good governance launched by the Oxford Internet Institute will work with policymakers from around the world to advise on the most effective and principled ways of using AI, NSTech reports:
An inaugural paper outlines the commission’s four principles for government use of AI—inclusive design, informed procurement, purposeful implementation, and persistent accountability—all with the aim of protecting democracy. The commission will publish a series of reports in the coming months, aiming to produce best-practice guidelines for policymakers and government officials. It will look first at health, followed by open cities, and then military and policing applications, two areas where governments around the world are investing significant resources.
“For the last couple of years, there has been nominal interest in the role of big data and making use of data in government, but I think Covid has really turned up the pressure on government agencies to use data in effective ways,” says OII director Philip Howard, author of Lie Machines. “I think before we get too much more integration of AI and government, we need to set the rules – figure out how we can use AI to do government the way we’d want it to be done.”
How can democracies establish accountable rules and norms that harness the benefits of AI-related technologies without infringing on fundamental rights or creating technological affordances that could facilitate an authoritarian concentration of power? asks Dr. Nicholas D. Wright, an affiliated scholar at Georgetown University.
Absent these purposeful efforts, societies risk spiraling into new authoritarian forms of surveillance-based governance, he writes in “Artificial Intelligence and Democratic Norms: Meeting the Authoritarian Challenge,” the latest paper in the Sharp Power and Democratic Resilience series, from the NED’s International Forum.
Civil society around the world has a critical role to play in helping democracies resist authoritarian pressure on the global surveillance environment. Organizations focused on diverse issues including privacy, human rights, free expression, technological standards, public health, and consumer protection can help identify, explain, and collaboratively address the complex challenges that arise from AI-related technologies. Wright’s recommendations include:
1. Building and maintaining data silos. Authoritarian regimes can turbocharge AI by training it on two types of data that liberal democracies should not similarly exploit or combine: “broad data” generated at volume on digital devices, and high quality “ground truth data,” such as tax returns and medical records. While conventional wisdom says that data must be integrated rather than isolated, siloing data limits authoritarian affordances and enhances security. Civil society must consider what silos are necessary to prevent misuse of data.
2. Affording new models of “digital sovereignty” for use by liberal democracies. Authoritarian states advocate for digital sovereignty as a state-based model of control over the internet. There is a critical need to develop alternatives. Civil society can help think through new models that balance sovereignty with the protection of individual freedoms.
3. Supporting tech–civil society collaborations and developing resilience. Civil society, in cooperation with government and big tech corporations where possible, can aim to correct market failures, such as the privileging of advertising and marketing tools over individual privacy, by giving citizens the means to safeguard democratic integrity against malign information operations, while preserving the essential openness of the information environment.
4. Resisting sharp power in international fora. Norm-setting and technical standardization of AI-related technologies happen at a global scale. Civil society should promote transparent, multi-stakeholder AI governance and develop AI standards that encourage democratic practices and individual privacy.