Generative adversarial networks: how fake news fuels authoritarians

The erosion of democratic norms in advanced liberal democracies has given autocratic leaders elsewhere a green light to follow suit, reports suggest.

“Fake news is being used as a weapon, a blunt instrument to silence dissent in places like Malaysia, Thailand, Cambodia. And then on top of that, China is increasingly supporting authoritarian regimes,” says Griffith University’s Lee Morgenbesser.

“The region has never had a lot of democracy, but this is all happening at the same time and that’s why the picture looks bad,” he tells the Sydney Morning Herald.

The recent revelations that Cambridge Analytica used the Facebook data of 50 million Americans, taken without their permission, to micro-target voters during the 2016 presidential election have set off a firestorm, Luke Barnes writes for ThinkProgress. But the roots of the problem extend far deeper than one country and one election. Cambridge Analytica honed its techniques in a host of countries – like Kenya – with political institutions that are younger, more fragile, and far more vulnerable to interference.

“The problem that you have in Kenya, as in a lot of these countries, is that there are these very deep underlying socio-economic tensions and ethnic divisions,” says Patrick Merloe, director of electoral programs at the National Democratic Institute [a core institute of the National Endowment for Democracy].

The fight against disinformation is part of a global battle to sway minds, and one that gives dictators a pretext to restrict freedom of expression, notes Andrés Ortega, senior research fellow at the Elcano Royal Institute, a major Spanish foreign affairs think tank:

One need not go as far as China to confirm this. As Yarik Turianskyi of the South African Institute of International Affairs points out, at least 10 African countries – Burundi, Cameroon, Chad, the Democratic Republic of Congo, Ethiopia, Gabon, Gambia, Mali, Uganda and Zimbabwe – closed social media websites and/or messaging applications during or after elections, or in the wake of protests, in 2016.

The Malaysian government’s proposed legislation to curb disinformation would define as fake news “any news, information, data and reports which are wholly or partly false, whether in the form of features, visuals or audio recordings or in any other form capable of suggesting words or ideas,” the Washington Post adds:

It would cover those who create, offer, circulate, print or publish fake news or publications containing fake news, and impose a 10-year jail term, a fine of up to $128,000, or both, at the whim of the government. The law would apply to those overseas as well as inside Malaysia. A fact sheet outlining hypothetical examples includes anyone who knowingly offers false information to a blogger, as well as cases that seem to encompass acts of slander or false advertising.

But, the Post notes, the proposal looks more like a tool of arbitrary government control and intimidation. Singapore is holding hearings on a similar scheme. “Other closed systems, such as China, long ago perfected the art. It is called censorship,” the Post concludes.

Generative adversarial networks

Fake news is bad enough already, but something much nastier is just around the corner, argues Jesse Lempel, an editor of the Harvard Law Review:

As Evelyn Douek explained, the “next frontier” of fake news will feature machine-learning software that can cheaply produce convincing audio or video of almost anyone saying or doing just about anything. These may be “digital avatars” built from generative adversarial networks (GANs), or they may rely on simpler face-swapping technology to create “deep fakes.” The effect is the same: fake videos that look frighteningly real.

Victims of deep fakes may successfully bring “right of publicity” claims against online platforms, thereby forcing the platforms to systematically police such content, but such a response is no panacea, he writes for Lawfare.
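To make the quoted explanation concrete: a GAN pits two neural networks against each other, a generator that fabricates samples and a discriminator that tries to tell those fabrications apart from real data. Each network trains against the other, so the fakes improve precisely as the detector does. The sketch below (Python with PyTorch) shows that adversarial loop on toy one-dimensional data; the network sizes, learning rates, and Gaussian “real” distribution are illustrative assumptions, not any production deep-fake system.

```python
# Minimal GAN training-loop sketch (illustrative assumptions throughout;
# real deep-fake generators are large convolutional nets trained on images).
import torch
import torch.nn as nn

def real_batch(n):
    # Stand-in "real" data: samples from a 1-D Gaussian (mean 4, std 1.5).
    return torch.randn(n, 1) * 1.5 + 4.0

# Generator: maps random noise vectors to fake samples.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: scores a sample's probability of being real.
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(2000):
    # Discriminator step: push real samples toward 1, generated ones toward 0.
    fake = G(torch.randn(64, 8)).detach()  # detach so G isn't updated here
    loss_d = bce(D(real_batch(64)), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: adjust G so the discriminator scores its fakes as real.
    loss_g = bce(D(G(torch.randn(64, 8))), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# If training converged, generated samples now mimic the target distribution.
print(G(torch.randn(1000, 8)).mean().item())  # should be close to 4.0
```

The same adversarial dynamic, scaled up to convolutional networks trained on footage of a particular person, is what produces video fakes realistic enough to defeat casual inspection – and it suggests why detection tends to lag generation: any reliable detector can itself be plugged in as the discriminator to train a better generator.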
