A better way to counter astroturfing: Disinformation, technology, and democracy in transition

How can you distinguish real netizens from participants in a hidden influence campaign on Twitter? It’s not easy, say analysts Franziska Keller, David Schoch, Sebastian Stier and JungHwan Yang.

We examined eight hidden propaganda campaigns worldwide, comprising over 20,000 individual accounts. We looked at Russia’s interference in the 2016 U.S. presidential election, and the South Korean secret service’s attempt to influence that country’s 2012 presidential election. And we looked at further examples associated with Russia, China, Venezuela, Catalonia and Iran, they write for the Washington Post:

All of these were “astroturfing” campaigns: the goal is to mislead the public by giving the false impression of genuine grass-roots support for, or opposition to, a particular group or policy. Contrary to popular media stories, we found that these disinformation campaigns don’t rely solely on automated “bots” or bot accounts. Only a small fraction of the 20,000 accounts we reviewed were “bot accounts” that regularly posted more than 50 tweets per day, a threshold some researchers use to distinguish automated accounts from bona fide individual users.
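
That threshold is straightforward to express as code. The sketch below is a minimal Python illustration, not the researchers’ actual method: the function name `looks_automated`, the input format (a list of tweet timestamps for one account), and the reading of “on a regular basis” as at least 20 percent of an account’s active days are all assumptions made for the example.

```python
# Minimal sketch of the 50-tweets-per-day screen described above.
# Assumptions (not from the study): input is a list of tweet timestamps
# for one account, and "on a regular basis" means the account exceeds
# the threshold on at least 20% of the days it was active.
from collections import Counter
from datetime import datetime
from typing import List

TWEETS_PER_DAY_THRESHOLD = 50  # the threshold cited by the researchers
REGULARITY_FRACTION = 0.2      # assumed reading of "on a regular basis"

def looks_automated(tweet_times: List[datetime]) -> bool:
    """Flag an account whose daily tweet volume regularly exceeds the threshold."""
    if not tweet_times:
        return False
    daily_counts = Counter(t.date() for t in tweet_times)  # tweets per calendar day
    heavy_days = sum(1 for n in daily_counts.values() if n > TWEETS_PER_DAY_THRESHOLD)
    return heavy_days / len(daily_counts) >= REGULARITY_FRACTION
```

By a screen like this, the vast majority of the 20,000 accounts would not be flagged, which is the researchers’ point: astroturfing is mostly a human enterprise, not an automated one.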

It’s not enough just to detect and take down posts that seek to discredit candidates, or to ramp up cybersecurity protections to ward off hackers. There’s a better way to tackle disinformation, according to technology company Main Street One: campaigns and political party leaders should target users with content that directly counters false information, delivered through the firm’s network of millions of social media influencers primed to fight online information wars, the Post’s Tonya Riley reports:

Main Street One’s technology — and philosophy — is based on U.S. efforts to combat another threat: the spread of Islamic State propaganda online. [Main Street One’s founder and chief executive Curtis] Hougland received Defense Advanced Research Projects Agency funding during the Obama administration to work on tools to analyze what kinds of social media posts had the highest likelihood of undercutting terrorist propaganda, so the U.S. government could deploy them.

“There have been a lot of smart cyber minds that have been shutting the back door with firewalls, servers, two-factor authentication, system upgrades and actually doing a decent job of getting campaigns to follow some protocols to avoid some of the more disastrous moments in 2016,” says Hougland. “The challenge has been that we’ve left the front door open, and that has a lot to do with disinformation.”

Several campaigns targeted by a Russia-based operation on Facebook’s popular Instagram app said they had been unaware of the new foreign disinformation efforts until the tech giant announced them publicly last week, raising alarms that democracy remains vulnerable to foreign interference even after three years of investigations into the Kremlin’s attack on the 2016 election, the Washington Post reports.

“The Russians are repeating the same tactics they used during the 2016 election but only growing more strategic in identifying divides and capitalizing on those divides to create fault lines in society and distrust between people and institutions,” said Ali Soufan, a former longtime FBI agent who wrote a report in May for the Department of Homeland Security that warned, “To date, the United States has no national strategy to counter foreign influence.”

And experts say that increased adoption of this kind of highly targeted technology raises questions about the need for ethical standards in this space, the Post adds.

“As the tools of influence skyrocket, democracies have to be careful to keep questions about the ethical use of personal data front and center,” says Lindsay Gorman, a fellow at the Alliance for Securing Democracy, a bipartisan initiative that the German Marshall Fund founded to combat foreign election interference. “To that end, political actors should develop clear and transparent standards for how they access and use voter information and empower voters to protect their online presence.”

What reasons do disinformation workers give for doing their jobs? How do audiences engage with the disinformation they see online? Are interventions against disinformation, such as fact-checking and other media literacy campaigns, effective?

Elections around the world are threatened by rival states and international radical networks. This summary from MediaWell focuses on coordinated disinformation campaigns. MediaWell is a new platform that tracks and distills the latest research on disinformation, online politics, election interference, and emerging collisions between media and democracy.

The Consortium on Democracy and Disinformation is calling for research proposals on understanding disinformation and the solutions offered in relation to democracy. Together with members from media organizations, academia, and other civil society organizations, the consortium aims to find out how disinformation is produced and how it affects Philippine democracy. Full details here.

What are the historical antecedents of contemporary disinformation campaigns? If we now live in a “post-truth” world, what role have domestic organizations such as think tanks and partisan political organizations played in creating it? So asks the Media & Democracy program at the Social Science Research Council.

It convened a research development workshop at George Washington University featuring:

– Naomi Oreskes, Professor of the History of Science, Harvard University
– Paul Starr, Stuart Professor of Communications and Public Affairs & Professor of Sociology, Princeton University
– Yochai Benkler, Jack N. and Lillian R. Berkman Professor for Entrepreneurial Legal Studies, Harvard University
– Jane Mayer, Staff Writer, The New Yorker
– Frank Sesno, Director and Professor of Media and Public Affairs and International Affairs, George Washington University
