Once heralded as vehicles for promoting democratic values abroad, social media platforms now serve as vectors for homegrown and foreign disinformation, notes analyst Matthew O’Shaughnessy. By dictating the information consumed by hundreds of millions of Americans, the machine learning (ML) algorithms employed by these platforms are an integral part of the spread of disinformation. Moreover, by improving and automating the generation and targeting of disinformation, emerging ML capabilities have the potential to significantly enhance the effectiveness of disinformation campaigns, he writes for The Cipher Brief:
Capability 1: Precision targeting of individuals. Effective disinformation exploits weaknesses in the methods humans use to synthesize complex and contradictory information. Psychological research has shown that these so-called heuristics cause individuals to ascribe undue credibility to messages that are simple, negative, and repeated, and to subconsciously prefer information that conforms to existing beliefs or reinforces identity-based group membership. For propaganda campaigns to exploit these heuristics, however, their content must be precisely targeted, directing the “right” messages to the “right” people….
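The targeting logic described above can be illustrated, at toy scale, as choosing the message variant that best overlaps a user's existing beliefs. This is a deliberately crude sketch of the confirmation-bias exploitation the passage describes; all names and data here are illustrative, not drawn from any real campaign:

```python
def pick_message(user_beliefs, variants):
    """Choose the message variant whose themes best match a user's
    existing beliefs -- a toy model of confirmation-bias targeting."""
    def overlap(variant):
        # Count how many of the variant's themes the user already holds.
        return len(user_beliefs & variant["themes"])
    return max(variants, key=overlap)

# Hypothetical message variants, each tagged with the themes it plays to.
variants = [
    {"text": "Message A", "themes": {"economy", "jobs"}},
    {"text": "Message B", "themes": {"immigration", "security"}},
]
```

A campaign running this kind of selection at scale would send each user only the variant most likely to confirm what they already believe.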
Capability 2: Automatically generated propaganda. Today, most disinformation is human-generated, with groups such as Russia’s Internet Research Agency and China’s “50-cent army” manually creating and disseminating disinformation based on loose guidelines. These types of operations require a large workforce knowledgeable about foreign language and culture, limiting the potential scope of personalized disinformation. Emerging ML capabilities could overcome this limitation by automating the creation of disinformation. Recent advances in natural language processing have produced powerful systems capable of generating fabricated news articles that humans have difficulty distinguishing from real ones….
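The automated text generation described above can be sketched, at toy scale, with a simple Markov-chain generator: learn which words tend to follow which, then sample new text. This is a crude stand-in for the large neural language models the passage refers to, used only to show the principle of machine-generated text:

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each sequence of `order` words to the words observed after it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, length=20, seed=0):
    """Sample a word sequence by repeatedly choosing a plausible next word."""
    rng = random.Random(seed)
    key = rng.choice(list(model.keys()))
    out = list(key)
    for _ in range(length):
        choices = model.get(tuple(out[-len(key):]))
        if not choices:
            break  # no continuation observed for this context
        out.append(rng.choice(choices))
    return " ".join(out)
```

Modern systems replace the lookup table with a neural network trained on vast corpora, which is what makes their output so hard to distinguish from human writing.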
Capability 3: Selective exposure and personalized content. On many platforms, ML algorithms dictate what information is presented to users. Algorithms deployed to maximize user engagement often do so by selecting content that has many of the traits of effective disinformation: simplistic and negative, appealing to emotion over fact, and matching pre-existing beliefs. Even in the absence of automatic content selection, platforms that encourage interaction with exclusively like-minded users can erode a society’s shared sense of truth and create fertile ground for disinformation campaigns…. RTWT
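The engagement-maximizing selection described above can be sketched as a scoring function over candidate posts. This is entirely illustrative (real platform rankers are vastly more complex), but it shows the structural problem the passage identifies: accuracy plays no role in the ranking, only predicted engagement does:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # model's estimate of click probability
    predicted_shares: float   # model's estimate of share probability

def engagement_score(post):
    """Score a post purely by predicted engagement; shares weighted
    more heavily than clicks. Truthfulness is not an input."""
    return 1.0 * post.predicted_clicks + 2.0 * post.predicted_shares

def rank_feed(posts):
    """Order a user's feed by descending engagement score."""
    return sorted(posts, key=engagement_score, reverse=True)
```

Under this objective, an emotionally charged post with high predicted shares outranks a calm factual one, which is precisely the selection pressure the passage describes.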
“The COVID-19 pandemic threatens more than the lives and the livelihoods of people throughout the world. It is also a political crisis that threatens the future of liberal democracy,” says a Call to Defend Democracy recently signed by more than 500 political and civil leaders, Nobel Laureates, and pro-democracy institutions.
Building on this statement, the World Movement for Democracy and International IDEA will host a conversation with experts and democracy activists to develop arguments supporting democracy that will appeal to people across regions, cultures, and generations. Please join the online global conversation, entitled A Battle of Narratives: Building Public Support for Democratic Renewal, on the International Day of Democracy – Tuesday, September 15, from 9:30 a.m. – 11:00 a.m. (EDT). It will address the following questions:
- How do we understand the appeal of anti-democratic narratives? What are the actors and factors that influence such narratives?
- Instead of playing defense, how can we more effectively make the case for democracy and rebuild the public’s confidence in democracy?
- In what ways can we leverage the expertise and pedagogies of other sectors, such as creative industries, tech companies, and educational institutions?
A Battle of Narratives: Building Public Support for Democratic Renewal
Featuring: Anne Applebaum (USA) Staff Writer, The Atlantic & Author of Twilight of Democracy; Larry Diamond (USA) Senior Fellow, Hoover Institution and Freeman Spogli Institute for International Studies, Stanford University; Garry Kasparov (Russia/USA) Chairman, Human Rights Foundation & Former World Chess Champion; Bobi Wine (Uganda) Politician, activist, and singer; Annouchka Wijeshinghe (Sri Lanka) Research Coordinator, Alliance Development Trust; Omaid Sharifi (Afghanistan) President & Curator, ArtLords; Cynthia Mbamalu (Nigeria) Director of Programs, YIAGA Africa.
Moderated by: Ana Gomes (Portugal) Former Member of European Parliament. With remarks by: Jose Ramos-Horta, Chairperson, World Movement for Democracy, Former President of Timor-Leste, & 1996 Nobel Peace Laureate; Carl Gershman, President, National Endowment for Democracy; Kevin Casas-Zamora, Secretary-General, International IDEA, Former Second Vice President & Minister of National Planning of Costa Rica. Register HERE to receive Zoom details.
Authoritarian actors threaten to undermine democratic ideals around the world. That’s why this month the National Democratic Institute hosted a conversation on Disinformation, Cybersecurity, and Defending Digital Democracy as part of the International Leaders Forum, held on the sidelines of the Democratic National Convention (held virtually this year).
In a wide-ranging discussion, one common theme emerged. We know what works: the strongest counter to disinformation is good governance and trust in institutions. NDI has been at the forefront of technology, and of how it is transforming public discourse, for more than two decades. Along the way, NDI has developed partnerships with experts and industry leaders, including Katherine Maher, CEO and Executive Director of the Wikimedia Foundation, and Brad Smith, President of Microsoft. They kicked off a discussion with Congresswoman Elissa Slotkin; Jeh Johnson, former Secretary of Homeland Security; and Lie Machines author Philip Howard of the Oxford Internet Institute about the pillars of Microsoft’s Defending Democracy Program: protecting political campaigns from hacking, preserving the electoral process, and defending against disinformation. RTWT