The problem with all of this is that these algorithms use correlation-based metrics rather than causation. And regardless of people's feelings about the matter, race DOES correlate to things - which is why these algorithms end up being racist. The framing therefore shouldn't be about 'fixing' the algorithms - they aren't broken, they're doing precisely what they are designed to do.
And it's not even a case of these algorithms reacting to race directly at all. Some of the earlier ones did, people said 'that's racist', and so the racial component was removed. Except... it wasn't really removed at all, because race correlates to a whole bunch of other things like socioeconomic status, education, health, you name it. So any algorithm based on an assessment of anything that correlates to race is inherently going to show a correlation to race, because that's how correlation works.
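To make the proxy effect concrete, here's a toy sketch with purely synthetic data (nothing to do with any real tool, and the 'neighbourhood income' proxy is just a stand-in I made up): the model never sees race, but because it's built on something that correlates with race, its scores correlate with race anyway.

    # Toy sketch, synthetic data only: drop the race column, keep a correlated
    # proxy, and the "race-blind" score still tracks race.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    race = rng.integers(0, 2, n)                     # synthetic group label, never shown to the "model"
    proxy = rng.normal(loc=race, scale=1.0, size=n)  # stand-in proxy, e.g. neighbourhood income
    score = proxy + rng.normal(scale=0.5, size=n)    # any score built from the proxy

    print("corr(score, race):", round(np.corrcoef(score, race)[0, 1], 2))  # ~0.4
    print("mean score, group 0:", round(score[race == 0].mean(), 2))       # ~0.0
    print("mean score, group 1:", round(score[race == 1].mean(), 2))       # ~1.0

The score never touches the race column, yet the group means sit a full point apart - which is the whole 'that's how correlation works' point.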
There are two ways that I know of to change (not 'fix') this - in each case it's more about the data you feed into the algorithm than about the algorithm itself. The first is to fix the fact that society and reality are racist. Welp, that's obviously a non-starter, for oh so many reasons. But if the data that was fed into these things didn't carry a racist reality with it, the algorithm wouldn't spit out a racist result. The second way is to manually adjust the outcomes to account for a racist reality (otherwise known as affirmative action). The problem there is that it looks hella racist, and it would be a fight every step of the way.
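For that second option, the mechanical version of 'manually adjust the outcomes' would be something like per-group cutoffs - a rough sketch with hypothetical score data (not anyone's actual method), where each group gets its own threshold so the false-positive rates come out roughly equal:

    # Rough sketch of a post-hoc adjustment: a separate cutoff per group, chosen
    # so that both groups end up with (roughly) the same false-positive rate.
    import numpy as np

    def cutoff_for_fpr(scores, reoffended, target_fpr):
        # reoffended is a boolean array aligned with scores; the cutoff is the
        # score above which ~target_fpr of the people who did NOT reoffend get flagged.
        return np.quantile(scores[~reoffended], 1.0 - target_fpr)

    # usage, per group:
    #   cut_a = cutoff_for_fpr(scores_a, reoffended_a, target_fpr=0.2)
    #   cut_b = cutoff_for_fpr(scores_b, reoffended_b, target_fpr=0.2)
    #   flagged_a, flagged_b = scores_a >= cut_a, scores_b >= cut_b

And the moment you write group-specific cutoffs down like that, it very visibly treats people differently by group - which is exactly why it would be a fight every step of the way.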
Getting rid of the algorithms and just having people do it is worse - why? Because reality is racist, and the people within that reality even more so. Every piece of 'guidance' you give a person is going to carry that same embedded racism along with it. A judge could list all the reasons they think a person will reoffend (and be correct), and that list is going to correlate to race.
IMO the best way to do it would be to try to identify better metrics, so that even though reoffending is going to correlate to race, and the outcomes of those algorithms will be racist, we accept that in favour of more effective outcomes - and pair that with an effort to use those outcomes to identify the areas where reality is racist (e.g. education levels) and attack those directly. That would need the powers that be to actually want change rather than just pay lip service to it, though, so I give it about a 1/10000000000 chance of happening.
I kinda outlined a third change you could make: allow the algorithm to look at race explicitly, but also give it the data that was used to show the algorithm is racist in the first place. We know black defendants are roughly twice as likely to be false positives, and we apparently have a way to study the ground truth. If we provide that to the algorithm too, it would learn "given matching criminal backgrounds, the black defendant is less likely to reoffend". That's not racist, that's what the ground truth shows - it's just that the more readily available metric (criminal background) doesn't reflect that, record for record, the white defendant is actually the more likely reoffender.
It's basically equivalent to your affirmative action. The key difference is that we do it all in one fell swoop: the prediction and the adjustment. The result is that it's harder to criticize the adjustment as racist if all we did was aim for the ground truth.
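Very roughly, as a toy sketch (synthetic data with a made-up enforcement bias - not the COMPAS data or anyone's real pipeline): the recorded priors overstate one group's actual offending, the ground-truth label doesn't, and given both the record and the group label the model learns the correction on its own.

    # Toy sketch, synthetic data: group 1's recorded priors are inflated by extra
    # enforcement, but the ground-truth reoffending label is not. A model shown
    # both the record AND the group label learns a negative weight on the label,
    # i.e. it builds the correction in itself.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 200_000

    group = rng.integers(0, 2, n)                  # 1 = over-policed group (synthetic)
    propensity = rng.normal(size=n)                # unobserved actual propensity to offend
    prior_arrests = rng.poisson(np.exp(0.5 * propensity + 0.7 * group))   # biased record
    reoffend = rng.random(n) < 1 / (1 + np.exp(-propensity))              # ground truth

    X = np.column_stack([prior_arrests, group])
    model = LogisticRegression(max_iter=1000).fit(X, reoffend)
    print("weight on prior_arrests:", round(model.coef_[0][0], 2))   # positive
    print("weight on group label:  ", round(model.coef_[0][1], 2))   # negative - the built-in adjustment

Same direction as the affirmative-action adjustment, but it falls out of aiming at the ground truth instead of being bolted on afterwards.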
Maybe we should go back to the meta-question: should we even try to automate or computerize certain things at all?
Should we try? Yeah, I don't see that as a problem.
Should we be damn careful whenever we're tinkering with governmental powers (this case) or people's lives (cars, doctors)? Yup. I mean, this system has the potential to negate a lot of the pent-up-from-positive-feedback "energy" in crime statistics, or at least provide the context to see it as that. However: anyone interpreting these systems should know enough to interpret them. A score popping out of a garbage-in-garbage-out system isn't something you can just use without statistical knowledge. And we should take tremendous care when constructing these systems, and even more so when feeding them with data. As I said in another comment, letting the system know the race (and a corrected recidivism rate) might be a benefit, by allowing it to correct its view of the data accordingly. This isn't impossible to get right, but IMO studies would be needed to show its effectiveness before any serious rollout.
(In fact, with the right data it should be relatively easy to get right, so I'm wondering how the system in question got built in the first place. My hypotheses: incompetence, malice, obliviousness to the problem of racist law enforcement, or corporate greed ("we know it sucks, ship it anyway"). I dunno.)
I mean, the answer is probably not? But the reason the article asks the question it does is that there are jurisdictions already relying, in part or in whole, on algorithmic decision-making in aspects of criminal proceedings - the cat's basically out of the bag at this point, and realistically we're probably not going to be able to stuff it back in.
As far as the data is concerned, critics of these tools argue, it’s racism in, racism out.
The reporters found that black defendants were almost twice as likely as white defendants to be “false positives,” labeled high risk when they did not go on to commit another crime.
Two of the most important points here: the risk classification isn't just proportional to actual crime rates; it's worse than that. And the racism lies in the underlying data - if conviction rates (relative to the crimes actually committed) are racist, then criminal records are racist, and a tool crunching those records into a number can only be racist too. Maybe if we let it actually peek at the race of the defendant, and took extremely good care to give it accurate training data on who actually reoffends, the tool would have a chance to correct for that.
But that would expose racism in conviction rates. Nice try.
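To spell out mechanically what the "twice as likely to be false positives" comparison measures (hypothetical round numbers, not the article's actual counts):

    # Hypothetical counts only - just to show what the false-positive comparison measures.
    # false positive rate = P(labeled high risk | did NOT go on to reoffend)
    groups = {
        # (labeled high risk but did not reoffend, total who did not reoffend)
        "group A": (400, 1000),
        "group B": (200, 1000),
    }
    for name, (fp, did_not_reoffend) in groups.items():
        print(f"{name}: false positive rate = {fp / did_not_reoffend:.0%}")
    # -> 40% vs 20%: even among people who did NOT reoffend, one group is about
    #    twice as likely to have been labeled high risk anyway.

That's the sense in which the classification is worse than just proportional to crime rates: restricted to the people who didn't reoffend, the high-risk labels still land about twice as often on one group.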