
That Assange Pardon Deal With Trump Dana Rohrabacher Tried to Make? It Was Facilitated by Chuck C. Johnson.

135
goddamnedfrank 9/15/2017 8:49:45 pm PDT

re: #63 Charles Johnson

These search terms for advertisers are automated, I’m sure. Twitter doesn’t deliberately target them. They’re driven by database queries for common keywords. I’ve written code to do this myself.

But what this really reveals is how prevalent the Nazis and white supremacists are on Twitter, when they show up with millions of hits for potential advertisers.
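
For what it’s worth, the general shape of that kind of keyword-driven audience lookup is easy to sketch. This is a hypothetical illustration, not Twitter’s actual pipeline, and the account data and field names are made up:

```python
# Hypothetical sketch of keyword-driven audience matching -- not Twitter's
# actual system. Count accounts whose profile text contains any of the
# keywords an advertiser typed in. Note that nothing in this loop decides
# which keywords are acceptable; that filtering has to be added on purpose.
accounts = [
    {"handle": "user_a", "bio": "coffee lover, marathon runner"},
    {"handle": "user_b", "bio": "politics, history, bad opinions"},
    # ...millions more rows in a real database
]

def audience_size(keywords, accounts):
    """Return how many accounts mention at least one requested keyword."""
    keywords = [k.lower() for k in keywords]
    return sum(
        any(k in account["bio"].lower() for k in keywords)
        for account in accounts
    )

print(audience_size(["coffee", "espresso"], accounts))  # -> 1
```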

Yes, but it also demonstrates that there’s no adult in the room asking hard questions about the effects of automation and the lack of community monitoring. It reminds me of the researchers who discovered that the AI they’d built was coming to racist and sexist conclusions because they were feeding it raw, uncurated web data:

AI can analyze data more quickly and accurately than humans, but it can also inherit our biases. To learn, it needs massive quantities of data, and the easiest way to find that data is to feed it text from the internet. But the internet contains some extremely biased language. A Stanford study found that an internet-trained AI associated stereotypically white names with positive words like “love,” and black names with negative words like “failure” and “cancer.”
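
That kind of association is easy to reproduce with any off-the-shelf word vectors: just compare how close a name sits to a set of pleasant words versus unpleasant ones. A rough sketch using gensim and public GloVe vectors (the word lists and names here are illustrative, not the ones the study used):

```python
import gensim.downloader as api
import numpy as np

# Small public GloVe vectors, trained on Wikipedia + Gigaword text.
# (Illustrative only -- not the model the study tested.)
vectors = api.load("glove-wiki-gigaword-100")

pleasant = ["love", "joy", "wonderful"]
unpleasant = ["failure", "cancer", "terrible"]

def association(word, attribute_words):
    """Mean cosine similarity between a word and a list of attribute words."""
    return np.mean([vectors.similarity(word, a) for a in attribute_words])

def bias_score(name):
    """Positive: the name leans toward the pleasant words; negative: unpleasant."""
    return association(name, pleasant) - association(name, unpleasant)

for name in ["emily", "greg", "jamal", "lakisha"]:
    if name in vectors:  # skip names missing from the vocabulary
        print(name, round(bias_score(name), 3))
```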

Luminoso Chief Science Officer Rob Speer oversees the open-source data set ConceptNet Numberbatch, which is used as a knowledge base for AI systems. He tested one of Numberbatch’s data sources and found obvious problems with its word associations. When fed the analogy question “Man is to woman as shopkeeper is to…,” the system filled in “housewife.” It similarly associated women with sewing and cosmetics.
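
The analogy test maps directly onto the standard vector-arithmetic query on word embeddings. With gensim and pretrained vectors it looks roughly like this (again illustrative; the excerpt doesn’t say exactly which vectors Speer tested):

```python
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")

# "Man is to woman as shopkeeper is to ___?"
# Standard embedding arithmetic: shopkeeper - man + woman, then nearest neighbors.
for word, score in vectors.most_similar(positive=["woman", "shopkeeper"],
                                         negative=["man"], topn=5):
    print(f"{word}: {score:.3f}")
```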

While these associations might be appropriate for certain applications, they would cause problems in common AI tasks like evaluating job applicants. An AI doesn’t know which associations are problematic, so it would have no problem ranking a woman’s résumé lower than an identical résumé from a man. Similarly, when Speer tried building a restaurant review algorithm, it rated Mexican food lower because it had learned to associate “Mexican” with negative words like “illegal.”
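
The restaurant-review failure follows the same mechanics: if sentences are scored by how close their words sit to positive versus negative vocabulary, an ethnicity word the embedding has tied to terms like “illegal” drags the whole review down. Speer’s actual system trained a classifier on a labeled sentiment lexicon; this toy version just uses seed-word similarity, and the exact scores will depend on which vectors you load:

```python
import gensim.downloader as api
import numpy as np

vectors = api.load("glove-wiki-gigaword-100")

positive_seeds = ["good", "excellent", "delicious"]
negative_seeds = ["bad", "awful", "illegal"]

def word_polarity(word):
    """Crude polarity: closeness to positive seeds minus closeness to negative."""
    pos = np.mean([vectors.similarity(word, p) for p in positive_seeds])
    neg = np.mean([vectors.similarity(word, n) for n in negative_seeds])
    return pos - neg

def review_score(text):
    """Average the polarity of every in-vocabulary word in the review."""
    words = [w for w in text.lower().split() if w in vectors]
    return float(np.mean([word_polarity(w) for w in words]))

# Identical reviews except for the cuisine word -- compare the two scores.
print(review_score("this mexican restaurant is great"))
print(review_score("this italian restaurant is great"))
```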