‘Information statecraft’: authoritarian attack on discursive space reshaping conflict



As artificial intelligence is increasingly integrated into digital advertising, disinformation operations and legitimate political communications will gradually become concerted, automatic and seamless, argues Dipayan Ghosh, a fellow at New America and the Shorenstein Center at the Harvard Kennedy School. For students of disinformation — including the Russians who to date have not even had to leverage such sophisticated web technology to mislead American voters — the new information ecosystem presents a vast land of opportunity, he writes for the New York Times:

Even in light of the Cambridge Analytica revelations, there is time yet to act. Internet firms should aggressively work to limit disinformation on their platforms by developing algorithms — perhaps driven by A.I., as suggested by [Facebook’s Mark] Zuckerberg — that can detect disinformation and flag it for fast human review. Strong one-off actions against widespread disinformation tactics, such as Twitter’s recent move, can also help. They also must be more transparent about their algorithmic software and data practices with researchers, journalists and consumers. Further, the regulatory community must continue its aggressive review of the industry’s practices.

Russia and China are simply not going to tolerate non-state actors having so much freedom to debate the facts and meaning of their actions, and probably won’t have to, analyst TS Allen notes for Cipher Brief. The authoritarian attack on discursive space already has several names, he writes in a review of David Patrikarakos’s War in 140 Characters: How Social Media is Reshaping Conflict in the Twenty-First Century (Basic Books, 2017):

The 2017 National Security Strategy calls it “information statecraft.” [National Endowment for Democracy analysts] Christopher Walker and Jessica Ludwig argue that Chinese and Russian influence is best described as “sharp power,” which is “not principally about attraction or even persuasion; instead, it centers on distraction and manipulation.” Whatever its name, its lines of effort are clear: aggressive efforts to control information platforms, computational propaganda to inspire support, complex influence operations to undermine opposition, and promotion of data sovereignty to guarantee states’ control over the Internet.

“If they succeed, the globalized information domain will cease to exist outside of the West — and with it, Homo digitalis will lose much of its power,” Allen concludes.

Michael X. Delli Carpini, dean of the University of Pennsylvania’s Annenberg School for Communication, believes recent shifts in political discourse should be seen not as an aberration but as the culmination of “a fundamental shift in the relationships between journalism, politics, and democracy,” notes analyst Nicolas Carr:

The removal of the professional journalist as media gatekeeper released into the public square torrents of information, misinformation, and disinformation. The flood dissolved the already blurred boundaries between news and entertainment, truth and fantasy, public servant and charlatan. Drawing on a term coined years ago by the French philosopher Jean Baudrillard, Delli Carpini argues that we’ve entered a state of “hyperreality,” where media representations of events and facts feel more real than the actual events and facts.
