Combating malign influence through ‘societal resilience’

In the twenty-first century, it is truer than ever that competitions and conflicts arise between societies, not armies, according to Michael S. Goodman, professor of intelligence and international affairs, and Filippa Lentzos, a senior research fellow in the Department of Global Health & Social Medicine and the Department of War Studies, both at King’s College London.

“We agree with others who emphasize that what matters is not who wins new territories, but who wins the data, the trust, the hearts and the minds of citizens,” they write for the Centre for International Governance Innovation (CIGI):

Deliberately propagating false stories is nothing new, but the speed and reach of contemporary campaigns to shape and influence opinions and actions across the globe is different, and, increasingly, what was once “propaganda” is now being considered as “strategic communication.” The distinction is important: this is a coordinated tool of the state deployed to deliberately sway behaviour. Yet as the COVID-19 crisis has highlighted, it is no longer just states that are behind these efforts.

Huge efforts are under way by national governments to safeguard the physical and virtual security of their citizenry, but there remains a weakness at the heart of society: the frailty of the individual. Increasingly, preserving and enhancing societal resilience will become the most important asset for leaders, they add.

What has changed since 2016, when you were one of the earliest funders of research into disinformation and its political impact? asks the NED’s Dean Jackson of Kelly Born, executive director of Stanford’s Cyber Policy Center:

  • One obvious thing that’s changed since 2016 is the “techlash.” The Arab Spring and the publication of my colleague Larry Diamond’s article on “Liberation Technology” were only a few years before 2016. We were still in a state of techno-utopia—recognizing the many benefits that social technologies can provide, but without yet appreciating how they could be abused by bad actors, or how the business models of these platforms might prove problematic. We are now in a dramatically different place.
  • A second change is that there is now a field of people studying disinformation, with academic centers, think tanks, listservs, and a real community of experts. We had almost none of that connective tissue before 2016.
  • We also have frameworks to organize what we’re seeing. In the early days, I wrote about what I saw as the three main points of intervention: “upstream,” working to improve the quality of journalism; “downstream,” on citizen-facing efforts like fact-checking and news literacy; or (much less common at the time) “mid-stream,” working to improve the distribution of content by new online platforms. It is this latter area, of how information is distributed, that has changed most dramatically in the modern era, and where interventions are more readily scaled. But it needed to be better understood.
  • And we now have frameworks for thinking about how to detect and address problematic content. In the early days, the conversation was about whether content was true or false. But often, the kind of content we are most concerned about is not categorically true or false—it is heavily biased, misleading, or inflammatory. We now realize that in addition to looking at the content itself, it’s helpful to think about the actor behind the content, or the behaviors or techniques that actor is employing to amplify their content—creating fake accounts, running bot networks, or microtargeting in a discriminatory way. Camille François recently summarized this as the “ABCs of disinformation”: actors, behaviors, and content.
  • Finally, we also have a much more nuanced idea of what platforms can do about problematic content. Initially the thinking was that platforms should delete it, which of course runs into free-speech complications—especially in the United States, where we have a much more absolutist view of free speech than anywhere else in the world. We now realize that in addition to deleting content, platforms can demote it, disclose the source, delay content that has reached a certain threshold of virality until it’s verified, dilute it amidst higher-quality content, deter (profit-motivated) actors from placing it, or offer digital literacy, etc. My colleague Nate Persily framed this as the “7 Ds of Disinformation.” RTWT

The Center for Strategic and International Studies’ Defending Democratic Institutions Project holds a virtual discussion on “Combating Malign Influence in 2020,” beginning at 1 p.m. on August 26, 2020. Speaker: Jeffrey Rosen. RSVP: Andrew Schwartz, 202-775-3242; http://www.csis.org. Livestream HERE.
