How Google plans to fight toxic trolls
Google’s parent company, Alphabet, says it plans to apply machine learning technology to promote more civil discourse on the internet and make comment sections on sites a little less awful, The New York Times reports:

Jigsaw, a technology incubator within Alphabet, says it has developed a new tool for web publishers to identify toxic comments that can undermine a civil exchange of ideas. Starting Thursday, publishers can start applying for access to use Jigsaw’s software, called Perspective, without charge.

“We have more information and more articles than any other time in history, and yet the toxicity of the conversations that follow those articles are driving people away from the conversation,” said Jared Cohen, president of Jigsaw, formerly known as Google Ideas.

The toxicity of trolls

The company’s latest effort is called the Perspective API, Yahoo Finance adds:

Available Thursday, Feb. 23, Perspective is the result of Jigsaw’s Conversation AI project and uses Google’s machine learning technologies to provide online publishers with a tool that can automatically rank comments in their forums and comments sections based on the likelihood that they will cause someone to leave a conversation. Jigsaw refers to this as a “toxicity” ranking.

“At its core, Perspective is a tool that simply takes a comment and returns back this score from 0 to 100 based on how similar it is to things that other people have said that are toxic,” explained product manager CJ Adams.
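The described behavior, a comment goes in, a 0-to-100 toxicity score comes out, and publishers rank comments by it, can be sketched in a few lines. This is a hypothetical illustration, not Jigsaw's actual model or API: `toxicity_score` here is a toy blocklist stand-in for the real machine-learned scorer.

```python
# Hypothetical sketch of the workflow Jigsaw describes: score each
# comment 0-100 for "toxicity", then rank a thread by that score.
# `toxicity_score` is a toy stand-in for a call to the Perspective
# service, NOT the real model, which scores comments by similarity
# to things other people have said that are toxic.

def toxicity_score(comment: str) -> int:
    """Toy scorer: counts words from a small blocklist and maps
    the count onto a 0-100 scale."""
    blocklist = {"idiot", "stupid", "hate"}
    words = comment.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?") in blocklist)
    return min(100, hits * 40)

def rank_comments(comments):
    """Sort a thread from least to most toxic, so a publisher can
    surface civil comments and collapse the high-scoring tail."""
    return sorted(comments, key=toxicity_score)

thread = [
    "You idiot, this is stupid!",
    "Interesting point, thanks for sharing.",
    "I hate this take.",
]
ranked = rank_comments(thread)
```

In a real deployment the ranking key would be the score returned by Perspective rather than a local heuristic; the point is only that the integration surface is a single comment-to-number function.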

A demonstration website launched alongside the release lets anyone type a phrase into Perspective’s interface and instantly see how it rates on the “toxicity” scale, WIRED adds.

Jigsaw is quick to note that they view their current system as a first step, a combined technology and social experiment rather than a production system to be deployed this afternoon, notes one analyst:

They are releasing the system via an API service available to other organizations to experiment with it and explore how it performs in their own communities. This is itself immensely noteworthy in a world in which companies increasingly roll out filtering systems without warning and without any insight into how they function or any ability for the community to offer feedback. Jigsaw’s API prominently offers the ability to flag a score the user believes is wrong, which will eventually be fed back into the models to retrain them.
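The feedback mechanism the analyst highlights, letting users flag a score they believe is wrong so the flags can later be fed back into retraining, amounts to collecting labeled corrections. A minimal sketch, with all class and method names invented for illustration (nothing here reflects Jigsaw's actual API surface):

```python
# Hypothetical sketch of the flag-a-wrong-score feedback loop:
# readers flag scores they disagree with, and the flags accumulate
# as (text, label) corrections for a future retraining pass.
# All names here are illustrative, not Jigsaw's real interface.

from dataclasses import dataclass, field


@dataclass
class ScoreFeedback:
    comment: str
    model_score: int        # the 0-100 score the model assigned
    user_says_toxic: bool   # the reader's correction


@dataclass
class FeedbackQueue:
    flags: list = field(default_factory=list)

    def flag(self, comment: str, model_score: int, user_says_toxic: bool):
        """Record one disagreement with the model's score."""
        self.flags.append(ScoreFeedback(comment, model_score, user_says_toxic))

    def training_examples(self):
        """Turn accumulated corrections into (text, label) pairs
        that a retraining job could consume."""
        return [(f.comment, f.user_says_toxic) for f in self.flags]


queue = FeedbackQueue()
# A reader thinks a 75 was too harsh for a blunt-but-civil comment:
queue.flag("You're wrong but I respect the effort.",
           model_score=75, user_says_toxic=False)
examples = queue.training_examples()
```

The design choice worth noting is the one the analyst praises: the correction channel is part of the public API, so communities can push back on the model rather than just live with its verdicts.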
