Countering digital dangers to democracy


In terms of digital dangers to democracy, four challenges loom large, notes Laura Chinchilla, Chair of former UN Secretary-General Kofi Annan’s Commission on Elections and Democracy in the Digital Age, launched earlier this month at Stanford University:

  • The first is the rise of an election interference industry, she writes for Project Syndicate. Just as we study the 2016 US presidential election for lessons on how to prevent interference, others look to that campaign for insights into electoral manipulation. Commercial consulting groups already appeal to potential clients with ideas about how social media, fake news, and micro-targeting can be effective in swaying elections. …
  • Another emerging challenge comes from increasingly popular “home assistants.” Online information monopolies already have the power to determine what much of a country’s population sees and believes. And as home assistants such as Google Home, Alexa, and Siri become more commonplace, users will soon get single-answer responses to queries, instead of multiple suggestions. …
  • The third threat is the emergence of fake video material – the so-called deepfakes. These use artificial intelligence and image synthesis to create video images that are indistinguishable from authentic footage. Imagine, for example, the speed at which manufactured footage of the Iranian president telling his military chiefs to prepare an invasion of Israel would spread around the Internet. …
  • Last, and by no means least, are encrypted peer-to-peer platforms. WhatsApp, with more than 1.5 billion monthly active users in 180 countries, has been used to spread rumors and stoke violence in Brazil, Mexico, and India in the same way that Facebook was used to stir up communal violence in Sri Lanka, Myanmar, and Bangladesh.

At a minimum, governments need to fund the development of media forensic techniques for detecting deepfakes, notes Charlotte Stanton, the inaugural director of the Silicon Valley office of the Carnegie Endowment for International Peace. There is currently an arms race between automated techniques that create deepfakes and forensic techniques that can detect them. In the United States, the Defense Advanced Research Projects Agency (DARPA) is investing in forensic detection techniques. It’s critical that such investments continue, if not increase, to keep up with the pace of new deepfake algorithms, she writes.

The unutterable truth is that only two decades ago, the digital information age was widely believed to represent the West’s greatest competitive strategic advantage, adds Zac Rogers, a Senior Research Associate at the Centre for United States and Asia Policy Studies at Flinders University of South Australia. For communities outside the traditional national security sphere, understanding the definition and demarcation of their roles and responsibilities in relation to societal well-being will also require change, he writes for Strategy Bridge:

As it stands, too many view the current circumstances as an opportunity for the exploitation and predation of a rudderless polity and an increasingly vulnerable population. Mind sets, laws, practices, and mandates will require fundamental change. Whether or not liberalism survives this historical moment as a viable project at the heart of western civilization, and what might replace it in that role, are fundamentally unknowable. That these challenges are real and are presented here as impossibilities is intended to provoke, unsettle, and to urge their more full and serious examination.

Information and communications technologies are more widespread than electricity, reaching three billion of the world’s seven billion people. Their roots run through the necessities of daily life, mediating nearly every form of social participation, notes Shoshana Zuboff, the Charles Edward Wilson Professor Emerita at Harvard Business School and author of “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power.” But surveillance capitalism diverges from many norms and practices that define the history of capitalism, especially the history of market democracy, she writes for the Financial Times:

Surveillance capitalists produce deeply anti-democratic asymmetries of knowledge and the power that accrues to knowledge. They know everything about us, while their operations are designed to be unknowable to us. They predict our futures and configure our behaviour, but for the sake of others’ goals and financial gain. This power to know and modify human behaviour is unprecedented. Often confused with “totalitarianism” and feared as Big Brother, it is a new species of modern power that I call “instrumentarianism”. Instrumentarian power can know and modify the behaviour of individuals, groups and populations in the service of surveillance capital.

Based on a deeply problematic business model, social-media platforms are showing the potential to exacerbate hazards that range from authoritarian privacy violations to partisan echo chambers to the spread of malign disinformation, notes Larry Diamond, a senior fellow at Stanford University’s Hoover Institution. In democracies, the deleterious political effects of social media are making themselves felt through three broad mechanisms, he writes for the NED’s Journal of Democracy.

Zuboff’s grand thesis can be contested on many levels, the FT’s John Thornhill notes:

She largely ignores the positive side of our technological revolution. She almost certainly understates the competitive dynamics of the market. And she portrays the young as helpless saps, who use their phones 157 times a day, even as they appear to be becoming ever more savvy about and sceptical of technology. Zuboff’s analysis of power is also debatable. After all, it is a strange kind of power that can be wiped off a phone in a matter of seconds. Besides, the people who theoretically wield power appear to have little interest in exercising it, beyond enriching themselves and their shareholders. They have no grand project for humanity other than vague notions of doing good.


But her conclusions are surely right on at least two fronts, Thornhill adds:

  • First, she attacks technology’s ideology of inevitabilism (even though that tends to undermine her case about its growing omniscience). Just because technology has turned out the way it has does not mean that was the only way the internet could have evolved. Its development was contingent on circumstance and influenced by individuals. Societies can still choose to use technology in different ways so long as they can mobilise sufficient action.
  • Second, we should be constantly wary of those offering sweeping technological solutions to human problems. Such “solutionist” remedies are invariably worse than the original disease.

RTWT

