Jigsaw, the Alphabet unit that aims to make the world safer through technology, is expanding its Project Shield technology, which protects against distributed denial-of-service (DDoS) attacks, to European political organizations, campaigns, and candidates, Fast Company reports:
Project Shield already defends news and human rights organizations against the attacks, which overwhelm servers with bogus traffic so legitimate requests can’t get through. Since last May it has also protected participating U.S. political organizations and should be in place for European Parliament elections in May 2019…
Alphabet isn’t the only big tech company focusing on election integrity. Facebook has taken steps to boost ad transparency and curb fake accounts and disinformation after controversies around the 2016 U.S. election, and Microsoft has its own Defending Democracy Program to protect candidates from hacking, secure election operations, and counter disinformation. Cloudflare also offers free DDoS protection to qualifying groups working on “arts, human rights, civil society, or democracy” through its Project Galileo initiative.
With its Perspective API, Jigsaw is developing an evolving set of tools to combat abuse and harassment. But this machine learning technology also raises questions about the limits of AI, notes PCMag’s Rob Marvin:
Tech giants have experimented with various combinations of human moderation, AI algorithms, and filters to wade through the deluge of content flowing through their feeds each day. Jigsaw is trying to find a middle ground. The Alphabet subsidiary and tech incubator, formerly known as Google Ideas, is beginning to prove that machine learning (ML) fashioned into tools for human moderators can change the way we approach the internet’s toxicity problem.
Perspective is an API developed by Jigsaw and Google’s Counter Abuse Technology team. It uses ML to spot abuse and harassment online, and scores comments based on the perceived impact they might have on a conversation in a bid to make human moderators’ lives easier.
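To make the scoring mechanics concrete, here is a minimal sketch of how a client might interact with Perspective. The endpoint, request body, and response fields follow the shape of Perspective’s public `comments:analyze` API as documented by Jigsaw; the `triage` helper and the 0.8 review threshold are illustrative assumptions, not part of the API, and the HTTP call is mocked here.

```python
import json

# Perspective's public analyze endpoint (a real call also needs ?key=API_KEY).
PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(comment_text, attributes=("TOXICITY",)):
    """Build the JSON body Perspective expects: the comment text plus the
    attributes to score (TOXICITY is the core one)."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {attr: {} for attr in attributes},
    }

def extract_score(response, attribute="TOXICITY"):
    """Pull the 0..1 summary score for one attribute out of a response."""
    return response["attributeScores"][attribute]["summaryScore"]["value"]

def triage(comment_text, response, threshold=0.8):
    """Hypothetical routing step: comments scoring above the threshold are
    flagged for a human moderator's queue rather than auto-removed."""
    score = extract_score(response)
    return {"text": comment_text, "score": score, "needs_review": score >= threshold}

# Mocked response in the documented shape, standing in for the HTTP POST
# of build_request(...) to PERSPECTIVE_URL.
mock_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.92, "type": "PROBABILITY"}}
    }
}
print(json.dumps(triage("example hostile comment", mock_response)))
```

The point of the design is the one Marvin describes: the model only attaches a probability to each comment, and a human moderator decides what to do with the flagged queue.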
Based on her own interviews with ISIS defectors and jailed recruits, Jigsaw’s Yasmin Green launched the Redirect Method, a new deployment of targeted advertising and video aimed at confronting online radicalization and stopping young people like those she interviewed from joining the ranks of terrorist groups, Fast Company’s Lydia Dishman reports:
Although the program is rooted in Google’s AdWords technology and curated YouTube video content, Green insists that algorithms aren’t the whole picture. Considering ISIS’s online success, she says, its most impressive accomplishment wasn’t technical savvy or innovation. “It was the insight into what makes humans tick, and how to use readily available online tools and social media to exploit people based on their insecurities, prejudices, and fears,” she observes.
Online misinformation is a problem for democracies worldwide, but we should worry about how misinformation will change democracies in the developing world, according to a Council on Foreign Relations analysis:
The real danger is if regulators end up losing their patience with digital literacy initiatives and find greater willingness to employ illiberal solutions. This is no longer limited to autocratic governments, which have a willingness to leverage the issue to crack down on political dissent; increasingly, democracies are testing the waters too. This has come in the form of internet shutdowns (for which India is number one in the world) or the blocking of individual apps (which Brazil has tried before). In the absence of effective and democratic policy remedies, the misinformation problem might lead developing countries to adopt an increasingly autocratic approach to governing.
Based on a deeply problematic business model, social-media platforms are showing the potential to exacerbate hazards that range from authoritarian privacy violations to partisan echo chambers to the spread of malign disinformation, notes Larry Diamond, a senior fellow at Stanford University’s Hoover Institution. In democracies, the deleterious political effects of social media are making themselves felt through three broad mechanisms, he writes for the NED’s Journal of Democracy.
Between news and election organizations, the Jigsaw team held various meetings that reached an estimated 10,000 people, trained hundreds of election officials in the U.S., and distributed about 5,000 security keys for two-factor authentication. Another program, Perspective, is aimed at flagging online abuse on media properties. Perspective’s API has already scored more than 15 billion comments to train its machine-learning models, and the Perspective research team has created the largest dataset of abusive comments.
“I would never form a hypothesis about how to address a digital threat without firsthand conversations with victims and people who were former perpetrators,” Green says. “It’s just incredible how often things like kindness and fairness and self-esteem come up as factors in either why somebody committed a threat, or a threat was effective in creating harm. And those aren’t really factors that you often hear technologists, or even policy makers, talk about.”