The Global Struggle Over AI Surveillance: From digitalization to dystopia?


Just as doctors pledge to “do no harm” in medical practice, purveyors of new technologies should commit to a similar code of ethics, USAID Administrator Samantha Power said in a keynote speech today.

It might at least assuage some of the growing anxiety over the implications of artificial intelligence discussed in The Global Struggle Over AI Surveillance: Emerging Trends and Democratic Responses, a new report from the International Forum for Democratic Studies.

“From cameras that identify the faces of passersby to algorithms that keep tabs on public sentiment online, artificial intelligence (AI)-powered tools are opening new frontiers in state surveillance around the world,” states the report, edited by the Forum’s Beth Kerley, which addresses both the implications of new technologies for democracy and the avenues for civil society involvement in their design, deployment, and operation:

  • AI surveillance systems such as facial recognition cameras, “smart city” projects, predictive policing software, and social media monitoring tools are expanding government surveillance powers in ways that create new and serious risks to privacy and the rule of law.
  • These tools are spreading rapidly, with PRC vendors and those based in democracies both contributing to the growing global AI surveillance marketplace.
  • AI surveillance applications at their most dystopian can be seen in closed autocracies, above all the People’s Republic of China. But surveillance risks extend across regime types.
  • In “swing states”—countries that mix autocratic and democratic tendencies—new surveillance powers threaten to tilt the playing field further toward illiberal governments.
  • While democratic governments and international institutions are beginning to tackle AI governance questions, there is crucial work to do in moving from abstract principles to practical implementation. This will require greater democratic coordination, voluntary steps by the private sector, deeper commitment to government transparency, and active engagement at all stages with civil society and the broader public.
  • Through coalition building and innovative research methods, a number of enterprising organizations have already started the work of challenging opaque surveillance deals and creating the conditions for a more open and informed democratic debate on surveillance practices.

To address the challenge of AI surveillance, democracies need to undertake several major tasks simultaneously, Carnegie Endowment senior fellow Steven Feldstein observes in the lead essay:

  • First, they must define regulatory norms to guide responsible AI use, whether through national AI strategies and legislation or through regional efforts.
  • Second, to ensure that this norm-setting occurs democratically and reflects the concerns of affected groups, citizens must have more opportunities to take part in the deliberation process.
  • Finally, democratic governments need to form coalitions of like-minded states to advance shared digital values.

Through this combination of strategies, democracies can prepare themselves to promulgate standards globally that will embed AI within human rights and rule-of-law safeguards, keep abuses in check, and counter authoritarian ambitions to set the rules of the game, Feldstein concludes.

Two case studies provide more granular depictions of how civil society can influence this norm-shaping process, the report adds.

Asociación por los Derechos Civiles

Eduardo Ferreyra of Argentina’s Asociación por los Derechos Civiles discusses strategies for overcoming common obstacles to research and debate on surveillance systems:

  • Create coalitions with other CSOs: In the face of stonewalling by public officials and company representatives, organizations working on surveillance technology should be in touch with each other to obtain information, share contacts, and distribute research tasks.
  • Work closely with like-minded journalists: Independent media can be a great asset in shedding light on surveillance deals, increasing public awareness, and fostering debate by questioning simplistic narratives around surveillance tech….
  • Engage international actors: Due to public image worries, governments may pay more attention to rights issues when they are raised by international advocacy groups or through global or regional human rights bodies. …
  • Highlight concrete concerns around surveillance systems: Companies and politicians push surveillance as the answer to crime—regardless of whether the evidence supports this view…

When Serbia’s interior minister and police director announced plans to install 1,000 high-tech cameras from People’s Republic of China (PRC) tech giant Huawei, their statement crystallized worries that had been growing among members of our team since we first heard about vague proposals to “upgrade” traffic cameras in the city, adds Danilo Krivokapić of Serbia’s SHARE Foundation. Over the following two years, SHARE Foundation reframed the discussion around this project in an effort that mobilized tech enthusiasts, local residents, media outlets, and the broader European digital rights community. In mid-2021:

We discovered that the Interior Ministry had opened a little-noticed “public” debate on a proposed new police law, which was just about to close. The proposal would have introduced legal grounds for mass biometric surveillance. Upon learning of this effort, we were able to obtain reactions from members of the European Parliament as well as global and regional human rights organizations. Local media coverage was extensive. Within two days, the disputed proposal was withdrawn.

While the digital transformation of public security is an unavoidable part of the future, Krivokapić writes, it is up to citizens, human rights defenders, and the power of civic engagement to make sure that digitalization does not lead to dystopia. RTWT
