Risks
Predictive analytics, which you might also call predictive AI or enterprise machine learning, is the use of machine learning to learn to predict, per individual, in order to inform decisions about that individual. But there are potential downsides, including potential social justice consequences. These predictive models can be thought of as a sort of small law or policy that determines which individuals gain access to certain resources: housing, credit, even the awareness of credit opportunities, because of the way marketing is targeted, or even their freedom, because these models are also used by parole officers and judges to help decide how long people remain in prison after they’ve been convicted.
Now, in general, the idea is that the model predicts better than humans do. But it still makes mistakes, and the costly kind of mistake is called a false positive, or a false alarm: the model says this person is at high risk of reoffending if released from jail when, in fact, they would not have reoffended. Even if the model makes mistakes less often than human decision-makers, the social justice question arises when it makes these costly errors more often for one subgroup than for another. That’s often referred to as machine bias, and there’s a famous ProPublica article showing that a certain crime-risk model that has actually been deployed makes these false positives, these costly errors, roughly twice as often for black defendants as for white defendants. That difference in false alarm rate, or false positive rate, is a kind of bias that arises not intentionally, and not because the system explicitly considers race or any other protected class, but as a side effect of the state of the world today, which, of course, has been shaped by historical injustices.
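To make that disparity concrete, here is a minimal sketch in Python of how you might compute the false alarm rate separately for each group; the column names and toy data are hypothetical, not taken from the ProPublica analysis.

```python
import pandas as pd

# Hypothetical scored dataset: one row per person, with the group label,
# the model's high-risk flag, and whether the person actually reoffended.
df = pd.DataFrame({
    "group":               ["A", "A", "A", "A", "B", "B", "B", "B"],
    "predicted_high_risk": [1,   0,   1,   0,   1,   1,   0,   1],
    "reoffended":          [0,   0,   1,   0,   0,   0,   0,   1],
})

def false_positive_rate(frame):
    # FPR = flagged non-reoffenders / all actual non-reoffenders
    negatives = frame[frame["reoffended"] == 0]
    if negatives.empty:
        return float("nan")
    return (negatives["predicted_high_risk"] == 1).mean()

# A large gap between the per-group numbers is the kind of disparity
# the ProPublica analysis reported.
print(df.groupby("group").apply(false_positive_rate))
```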
Opportunities
My view is that the only way to rectify this is to treat it as a glass-half-full opportunity. By quantifying that injustice and how it manifests in these systems, we’ve put a spotlight on it, and we already have the infrastructure in place: the models are deployed, they’re making predictions, and those predictions are being used as points of consideration by human judges and parole boards. So let’s inform those judges and parole boards, and/or explicitly adjust the system to equalize those false positive rates, establish policies around the use of predictive AI, and keep our eye on improving things, rather than throwing up our hands and explaining the problem away or defending it.
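One simple way to operationalize "explicitly adjusting to equalize false positive rates" is to pick a separate score threshold per group so that each group ends up with roughly the same false alarm rate. The sketch below is only an illustration of that idea, using made-up scores and an assumed target rate, not a recommendation of any specific policy.

```python
import numpy as np

def threshold_for_target_fpr(scores, reoffended, target_fpr):
    """Return the score cutoff above which roughly target_fpr of the
    actual non-reoffenders in this group get flagged as high risk."""
    negative_scores = scores[reoffended == 0]
    # The (1 - target_fpr) quantile of the non-reoffenders' scores is the
    # cutoff that flags about target_fpr of them.
    return np.quantile(negative_scores, 1 - target_fpr)

# Hypothetical risk scores and outcomes for two groups.
rng = np.random.default_rng(0)
scores_a, reoff_a = rng.uniform(size=500), rng.integers(0, 2, size=500)
scores_b, reoff_b = rng.uniform(size=500), rng.integers(0, 2, size=500)

target = 0.10  # aim for the same 10% false alarm rate in both groups
cutoff_a = threshold_for_target_fpr(scores_a, reoff_a, target)
cutoff_b = threshold_for_target_fpr(scores_b, reoff_b, target)
print(cutoff_a, cutoff_b)
```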
Mitigation Strategies
So when you’re looking at a project, you need to keep visibility and your eyes on the big picture: the social context and the potential social ramifications of what it means to deploy this model, which may improve efficiency by a certain metric while potentially risking some social injustice. There’s a quote from Cathy O’Neil who says, for all these projects, start by asking: for whom will this fail? That seems like such an obvious question, but it’s an important reminder, because, take it from me, when you’re in your cubicle doing the number crunching, it’s extremely easy to lose sight of the forest for the trees. It’s very hard to keep your mind on the big picture and the social context rather than becoming narrowly focused on the cool science and on improving one particular metric. If you optimize for only one metric, there can be dire ramifications and fallout. It’s absolutely critical that we formalize and incorporate into these projects metrics that pertain to social justice, not just to bottom-line profit.
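As a sketch of what formalizing such a metric might look like, the report below puts a fairness measure (the false positive rate gap between groups) right next to the overall performance number, so neither is reviewed in isolation. The function name and structure are illustrative assumptions, not an established standard.

```python
import numpy as np

def evaluation_report(y_true, y_pred, group):
    """Summarize overall accuracy together with the false positive rate
    gap between groups, so neither number is reported in isolation."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))

    fprs = {}
    for g in np.unique(group):
        negatives = (group == g) & (y_true == 0)   # actual negatives in group g
        fprs[g] = y_pred[negatives].mean() if negatives.any() else float("nan")

    return {
        "accuracy": (y_true == y_pred).mean(),
        "fpr_by_group": fprs,
        "fpr_gap": max(fprs.values()) - min(fprs.values()),
    }

# Usage: evaluation_report([0, 1, 0, 0], [1, 1, 0, 1], ["A", "A", "B", "B"])
```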