Conversation AI uses machine-learning techniques to pick up on "harassment" and "abuse" faster than any human moderator possibly could. According to Google, the filter detects so-called "abusive" messages with a 92% success rate, as judged against a panel of human reviewers, while producing a false positive only 10% of the time.
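Those two numbers pull against each other: a filter catches more abuse by flagging more aggressively, and pays for it in false positives. A minimal sketch of that trade-off, using an invented scoring function and invented human judgments (nothing here comes from Conversation AI itself):

```python
def rates(scores, labels, threshold):
    """Return (detection_rate, false_positive_rate) at a given threshold.

    scores: hypothetical classifier "abusiveness" scores in [0, 1].
    labels: True where a human panel judged the message abusive.
    """
    flagged = [s >= threshold for s in scores]
    abusive = sum(labels)
    benign = len(labels) - abusive
    true_pos = sum(f and l for f, l in zip(flagged, labels))
    false_pos = sum(f and not l for f, l in zip(flagged, labels))
    return true_pos / abusive, false_pos / benign

# Invented example data: four abusive messages, four benign ones.
scores = [0.95, 0.80, 0.70, 0.65, 0.40, 0.30, 0.60, 0.10]
labels = [True, True, True, False, False, False, True, False]

print(rates(scores, labels, threshold=0.5))   # catches all abuse, flags 1 benign message
print(rates(scores, labels, threshold=0.75))  # flags nothing benign, misses half the abuse
```

Lowering the threshold raises the detection rate but sweeps in more well-intentioned speech, which is exactly the "collateral damage" critics worry about below.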
The major issue with such a tool is the unintended consequences that automated detection may create.
Writing in Wired, Andy Greenberg highlighted that "throwing out well-intentioned speech that resembles harassment could be a blow to exactly the open civil society Jigsaw has vowed to protect." When Greenberg raised the potential for "collateral damage" with its inventors, co-creator Lucas Dixon argued that the team wanted to "let communities have the discussions they want to have… there are already plenty of nasty places on the Internet."
“What we can do is create places where people can have better conversations,” Dixon claimed, which Greenberg noted was “[favoring] a sanitized Internet over a freewheeling one.”
Tuesday, September 20, 2016
No More Nastiness?
Can the Internet really be a "nicer" place?