Tuesday, December 13, 2016

Prosocial AI: Google's Jigsaw at the NYT

From the MIT Technology Review.

"Jigsaw — formerly known as Google Ideas — says it intends to spot and remove digital harassment with an automated program called Conversation AI.


Conversation AI is an offshoot of one of the most successful of Google’s “moonshot” projects, Google Brain. It has helped revolutionize the field of machine learning through large-scale neural networks, and given Google advantages such as software that is more skillful than humans at recognizing images.

But Conversation AI won’t be able to defeat online abuse. Though Jigsaw’s stated goal is to “fight the rise of online mobs,” the program itself is a far more modest—and therefore more plausible—project. Conversation AI will primarily streamline the community moderation that is today performed by humans. So even if it is unable to neutralize the worst behavior online, it might foster more and better discourse on some sites.

Jigsaw is starting Conversation AI at the New York Times, where it will be rolled out in a few months to help the company manage its online comments. Human moderators currently review nearly every comment published on the site. Right now, Conversation AI is reading 18 million of them, learning to detect each individual category of comments that get rejected—insubstantial, off-topic, spam, incoherent, inflammatory, obscene, attack on commenter, attack on author, attack on publisher.

The Times’s goal is not necessarily to reduce abuse in its comments, a problem it already considers under control. Instead, it hopes to reduce the human moderators’ workload. “We don’t ever expect to have a system that’s fully automated,” Erica Greene, engineering manager of the New York Times community team, told me. Times community editor Bassey Etim estimates that somewhere between 50 and 80 percent of comments could eventually be auto-moderated, freeing up employees to devote their efforts to creating more compelling content from the paper’s comment sections.

The New York Times site poses very different problems from the real-time free-for-all of Twitter and Reddit. And given the limitations of machine learning — as it exists today — Conversation AI cannot possibly fight abuse in the Internet’s wide-open spaces. For all the dazzling achievements of machine learning, it still hasn’t cracked human language, where patterns like the ones it can find in Go or images prove diabolically elusive.

The linguistic problem in abuse detection is context. Conversation AI’s comment analysis doesn’t model the entire flow of a discussion; it matches individual comments against learned models of what constitute good or bad comments. For example, comments on the New York Times site might be deemed acceptable if they tend to include common words, phrases, and other features.

But Greene says Google’s system frequently flagged comments on articles about Donald Trump as abusive because they quoted him using words that would get a comment rejected if they came from a reader. For these sorts of articles, the Times will simply turn off automatic moderation."
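The per-comment matching described in the article can be sketched in miniature. The toy classifier below scores each comment in isolation against word statistics learned from labeled examples — a plain bag-of-words Naive Bayes, which is only an illustration of the general technique, not Jigsaw's actual Conversation AI. All the training comments and labels are invented for the example.

```python
# Toy sketch of per-comment moderation: each comment is scored in isolation
# against word statistics learned from labeled examples. This illustrates the
# general bag-of-words approach, NOT Jigsaw's actual system; the training
# data below is invented.
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def train(examples):
    """examples: list of (comment, label) pairs, label 'ok' or 'reject'."""
    counts = {"ok": Counter(), "reject": Counter()}
    totals = Counter()
    for text, label in examples:
        for word in tokenize(text):
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def classify(model, text):
    counts, totals = model
    vocab = set(counts["ok"]) | set(counts["reject"])
    scores = {}
    for label in counts:
        # Naive Bayes with add-one smoothing; uniform class prior.
        score = 0.0
        for word in tokenize(text):
            score += math.log((counts[label][word] + 1) /
                              (totals[label] + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

model = train([
    ("Thoughtful piece, thanks for the reporting.", "ok"),
    ("Interesting point about the policy details.", "ok"),
    ("You are an idiot and so is the author.", "reject"),
    ("This garbage site keeps publishing idiot takes.", "reject"),
])

# No discussion context is modeled: a word quoted from the article scores
# the same as a word hurled by a commenter, which is exactly the
# Trump-article failure mode Greene describes.
print(classify(model, "The author is an idiot."))  # → reject
```

Note that the model has no notion of who is speaking or why — it only sees word frequencies — so a comment quoting offensive language would be scored just like a comment using it, which is why the Times turns off auto-moderation on those articles.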

Let's be clear: this is a good idea. The machine learning system replaces grunt work currently carried out by hordes of bored human moderators, with hard cases referred up to people. Few readers want their experience polluted by foul-mouthed, ignorant rants. Moderation is enforced on this publishing outlet too! We all use automated spam filters.

And yet ...

We sometimes forget the downside of prosociality. All this politeness, agreeableness and consensus-seeking must eventually lead to a bland, stifling conformity. When the system is silted up - captured by vested interests spouting the empty language of universality - the only way to make progress is to break the rules.

I checked with my mentor, Karl M., and he confirmed that the result is often rather impolite.


Anyway, what's done is done. I can only conclude this post with a satiric observation:
"I for one welcome our new AI over---"
[This comment has been removed by the Blogger AI moderator: code 9 - hate speech].
