Technology Is Biased Too. How Do We Fix It?
AI can be just as powerful in promoting bias as anything else. We still hold mythical views of computers despite a whole internet of evidence to the contrary: "computers don't make mistakes." As we realize this is untrue of today's computers, we simply project the myth onto the next, presumably better computers of the future. And the future of computing is AI.
This is a serious wake-up call.
FiveThirtyEight
Whether it’s done consciously or subconsciously, racial discrimination continues to have a serious, measurable impact on the choices our society makes about criminal justice, law enforcement, hiring and financial lending. It might be tempting, then, to feel encouraged as more and more companies and government agencies turn to seemingly dispassionate technologies for help with some of these complicated decisions, which are often influenced by bias. Rather than relying on human judgment alone, organizations are increasingly asking algorithms to weigh in on questions that have profound social ramifications, like whether to recruit someone for a job, give them a loan, identify them as a suspect in a crime, send them to prison or grant them parole.

But an increasing body of research and criticism suggests that algorithms and artificial intelligence aren’t necessarily a panacea for ending prejudice, and they can have disproportionate impacts on groups that are already socially disadvantaged, particularly people of color. Instead of offering a workaround for human biases, the tools we designed to help us predict the future may be dooming us to repeat the past by replicating and even amplifying societal inequalities that already exist…
…Consider COMPAS, a widely used algorithm that assesses whether defendants and convicts are likely to commit crimes in the future. The risk scores it generates are used throughout the criminal justice system to help make sentencing, bail and parole decisions.
At first glance, COMPAS appears fair: White and black defendants given higher risk scores tended to reoffend at roughly the same rate. But an analysis by ProPublica found that, when you examine the types of mistakes the system made, black defendants were almost twice as likely to be mislabeled as likely to reoffend — and potentially treated more harshly by the criminal justice system as a result. On the other hand, white defendants who committed a new crime in the two years after their COMPAS assessment were twice as likely as black defendants to have been mislabeled as low-risk. (COMPAS developer Northpointe — which recently rebranded as Equivant — issued a rebuttal in response to the ProPublica analysis; ProPublica, in turn, issued a counter-rebuttal.)
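The apparent paradox above — a tool that looks fair on one measure while failing another — comes down to which error statistic you compute from the same confusion matrix. A sketch with entirely hypothetical counts (not ProPublica's actual COMPAS data) shows how two groups can have the same positive predictive value ("high-risk defendants reoffend at roughly the same rate") while one group suffers twice the false-positive rate and the other twice the false-negative rate:

```python
# Illustrative only: hypothetical confusion-matrix counts, chosen so that
# PPV (calibration) is equal across groups while FPR and FNR diverge.

def error_rates(tp, fp, tn, fn):
    """Return (FPR, FNR, PPV) from confusion-matrix counts."""
    fpr = fp / (fp + tn)   # non-reoffenders wrongly labeled high-risk
    fnr = fn / (fn + tp)   # reoffenders wrongly labeled low-risk
    ppv = tp / (tp + fp)   # share of high-risk labels that reoffend
    return fpr, fnr, ppv

# Hypothetical per-group counts: (tp, fp, tn, fn)
groups = {
    "group_a": (300, 200, 300, 100),
    "group_b": (150, 100, 400, 150),
}

for name, (tp, fp, tn, fn) in groups.items():
    fpr, fnr, ppv = error_rates(tp, fp, tn, fn)
    print(f"{name}: FPR={fpr:.2f}  FNR={fnr:.2f}  PPV={ppv:.2f}")
# group_a: FPR=0.40  FNR=0.25  PPV=0.60
# group_b: FPR=0.20  FNR=0.50  PPV=0.60
```

Both groups have a PPV of 0.60, so the scores look equally well calibrated — yet group_a's false-positive rate is double group_b's, and group_b's false-negative rate is double group_a's. Research following the ProPublica/Northpointe dispute showed that, when base rates differ between groups, no classifier can equalize all of these measures at once.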