California abolishes cash bail, replacing it with algorithm-based risk assessment
Linked article: California abolishes money bail with a landmark law. But some reformers think it creates new problems.
What could possibly go wrong?
Is this algorithm going to be biased like a lot of others? A few examples:
A soap dispenser wouldn't dispense soap because it could not recognize non-Caucasians as humans
A "racist" google beauty prediction AI
and so on?
How to mitigate this bias?
Don't use opaque algorithms. If you're just running data through a neural net, you can't understand what's going on. A human should be able to understand and rationalize any decision made by the algorithm. You still need testing and strict review of procedure, but there should be fewer surprises. Better yet, when something does go wrong, you'll probably understand why.
This was a good article from the EFF about the subject: Math Can’t Solve Everything: Questions We Need To Be Asking Before Deciding an Algorithm is the Answer
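To make that concrete, here's a minimal sketch of the difference (every feature name and weight below is made up, not anything from the actual bail tool): a simple linear score whose per-feature contributions can be printed and argued with, as opposed to a neural net whose internals can't be read off.

```python
# Minimal sketch of an auditable risk score. All feature names and weights are
# hypothetical; the point is only that a reviewer can see *why* a score came out
# the way it did.
import math

FEATURES = ["prior_failures_to_appear", "pending_charges", "age_at_arrest"]

# In a real system these weights would be fit to data, but they stay inspectable.
WEIGHTS = {"prior_failures_to_appear": 1.2, "pending_charges": 0.8, "age_at_arrest": -0.03}
BIAS = -1.5

def risk_score(person):
    """Logistic score in [0, 1]; higher means flagged as higher risk."""
    z = BIAS + sum(WEIGHTS[f] * person[f] for f in FEATURES)
    return 1 / (1 + math.exp(-z))

def explain(person):
    """Per-feature contribution to the score, largest first, so a human sees why."""
    return sorted(((f, WEIGHTS[f] * person[f]) for f in FEATURES),
                  key=lambda kv: abs(kv[1]), reverse=True)

person = {"prior_failures_to_appear": 2, "pending_charges": 1, "age_at_arrest": 34}
print(round(risk_score(person), 2))  # ~0.66
print(explain(person))               # prior failures dominate: a rationale you can challenge
```

Even if the weights come from training, a reviewer can point at exactly which inputs pushed someone over a threshold, which is what "a human should be able to rationalize the decision" amounts to in practice.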
Hire more diverse programmers, and actually test your code with all types of people.
I could go into a long-winded discussion about machine learning algorithms and biased testing, but the general gist is: when a bunch of white and Asian people (who are the majority of programmers) build the training and testing datasets for a program, those datasets likely reflect their internal or visible biases, and you get algorithms that reflect the biases of their creators.
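A rough illustration of how that shows up (toy data, made-up group labels): a single aggregate accuracy number hides where the errors land, so the first check is to break error rates out per group.

```python
# Sketch: per-group error rates on toy data. Groups, predictions, and outcomes
# are all invented for illustration.
from collections import defaultdict

def per_group_rates(records):
    """records: iterable of (group, predicted_high_risk, actually_reoffended)."""
    counts = defaultdict(lambda: {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
    for group, predicted, actual in records:
        c = counts[group]
        if actual:
            c["pos"] += 1
            if not predicted:
                c["fn"] += 1
        else:
            c["neg"] += 1
            if predicted:
                c["fp"] += 1
    return {g: {"false_positive_rate": c["fp"] / c["neg"] if c["neg"] else None,
                "false_negative_rate": c["fn"] / c["pos"] if c["pos"] else None}
            for g, c in counts.items()}

# Toy data: (group, model said "high risk", person actually reoffended)
toy = [("A", True, False), ("A", False, False), ("A", True, True), ("A", False, True),
       ("B", True, False), ("B", True, False), ("B", True, True), ("B", False, True)]
print(per_group_rates(toy))
# One aggregate number would hide that group B's false-positive rate is twice group A's.
```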
Testing with such large groups is notoriously difficult, though. There's an example of a medicine that went through all the FDA trial phases (which in the end means thousands upon thousands of people) but was then released and found to have very serious adverse effects in certain Chinese patients. I think it was a painkiller, causing serious heart damage in a small subsection (one in thousands) of a Chinese subgroup.
To be able to catch that in testing you've got to use millions of people. It isn't serious for almost anyone, but then it's in production and you find it kills certain people.
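Rough numbers, all invented, to show why: if only a small subgroup is affected and only a fraction of that subgroup reacts, even a large trial will usually observe zero cases.

```python
# Back-of-the-envelope sketch: probability that a trial sees no adverse events at
# all when the effect only hits a small subgroup. Numbers are invented.

def prob_zero_cases(trial_size, subgroup_share, reaction_rate):
    """Chance the trial observes zero adverse events."""
    p_case = subgroup_share * reaction_rate
    return (1 - p_case) ** trial_size

# e.g. the subgroup is 2% of participants and 1 in 2000 of them react badly
print(prob_zero_cases(trial_size=10_000, subgroup_share=0.02, reaction_rate=1 / 2000))
# ~0.90 -- a ten-thousand-person trial misses it roughly nine times out of ten.
```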
Obviously AI-based recognition isn't going to be that harmful, until it's applied to real-world stuff like this, where even the makers have no idea what biases or thought patterns are implicit in the software. And it isn't just mislabeling someone: it could be structurally discriminating against whole groups without anyone knowing, or even being able to know. Not even if you test with larger groups, because the effect might fall on a small enough subsection that even thorough testing wouldn't show anything happening.
Better training data is the only way, right? I imagine that in this case they could use historical data to test and train their algorithms, while hopefully leaving out markers like race.
I think the real risk is that racial (or, importantly, economic) bias is inherent in the other inputs that these risk assessments use. For instance, the ProPublica study linked above explicitly states that race was not a consideration in the Northpointe risk assessment algorithm:
However, there are a number of questions that could lead to biases:
(see the whole list of questions here)
All of these are questions I could see being biased against certain minorities or economic groups. To take a simple example, it wouldn't surprise me if people who actually deal with hunger on a recurring basis were more likely to answer that hungry people have the right to steal food.
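Here's a rough sketch of that proxy effect with entirely synthetic numbers: even with no race column anywhere in the data, an answer that correlates with economic hardship ends up correlating with group membership, so a model can effectively reconstruct what you left out.

```python
# Sketch of the proxy problem. Everything here is synthetic and only meant to
# illustrate the mechanism, not any real questionnaire or population.
import random
random.seed(0)

def make_person(group):
    # Hypothetical: an economic-hardship score, and the "hungry person has a
    # right to steal" answer, are both correlated with group membership.
    hardship = random.gauss(2.0 if group == "B" else 0.5, 1.0)
    agrees_with_question = hardship + random.gauss(0, 0.5) > 1.5
    return {"group": group, "hardship": hardship, "agrees": agrees_with_question}

people = [make_person("A") for _ in range(500)] + [make_person("B") for _ in range(500)]

# How well does the "neutral" questionnaire answer predict group membership?
guessed_b = sum(1 for p in people if p["agrees"] and p["group"] == "B")
said_yes = sum(1 for p in people if p["agrees"])
print(f"P(group B | answered yes) ~= {guessed_b / said_yes:.2f}")
# With correlated inputs like this, dropping the race column does not stop a
# model from effectively using it anyway.
```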
Well, at some point the algorithm is going to have to make decisions based on something, and if you follow the subsets up, eventually they're likely to lead to larger sets that map onto racial and socioeconomic divisions. I mean, that's just how this all works, right? I'm not defending this, just stating what seems to be reality.
One thing, though: a survey seems like a great way to get compromised data into the system.
Who is going to answer that question, posed by law enforcement, honestly if they believe that "yes, a hungry person has a right to steal"? I don't understand why the "user" has any personal input. People lie.
What do you think? Should algorithms be used in situations like this, or any other for that matter?
edit: And what parameters and signals are humans currently using in cases like this? Is a somewhat racist algorithm better than humans if it's shown to be at least a bit less racist and better at predicting bail jumpers?
Edit2: spelling
I tried to be rather cautious in my examples here. Clearly this is the crux of the matter: how do you account for the multitude of ways that racial and socioeconomic status influence the input data to a system like this, such that:
a) it remains useful, and
b) it does not perpetuate disadvantages for already marginalized groups
You should take a look at the EFF article @Deimos posted; it goes much deeper into what sorts of data should be used and the ethics of building such a system.
Yeah, I have to say I don't really understand why they'd ask people questions like this. Some of the information (e.g., the question about parents going to jail) is probably pulled automatically, but I don't see the point of posing basic philosophy questions to possible felons, other than to catch the 0.00001% not smart enough to lie about their beliefs.
It's possible (though, in my opinion, unlikely) that there is a way for a for-profit corporation to create this type of algorithm that satisfies (say) the EFF's ethical criteria, but:
does not instill much confidence that the current players are attempting to do so.
I agree with that completely. This is the biggest problem I see with privately owned algorithms that make decisions in the public forum. There is zero accountability, the logic is “proprietary” and cannot be shared. IMO, this is why any algorithms adopted by the government should be open source.
We need to come up with a law regarding this... looking at you, EU, you're our only hope.
If the algorithm has been trained on a huge dataset of inmates and is incredibly accurate at predicting recidivism on a test set of historical cases, then absolutely. This seems easy enough to answer: just compare its results to the accuracy of the humans.
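For what "compare to the humans" could look like, here's a sketch over invented toy data that scores both the judges' actual decisions and the algorithm's predictions, overall and broken out by group, since "better" has to mean both more accurate and not worse in where the errors fall.

```python
# Sketch: score historical judge decisions and algorithm predictions side by side.
# All data, groups, and field names are invented for illustration.

def evaluate(decisions, outcomes, groups):
    """decisions/outcomes: lists of bools (detained / failed to appear); groups: labels."""
    accuracy = sum(d == o for d, o in zip(decisions, outcomes)) / len(outcomes)
    fpr = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        negatives = [i for i in idx if not outcomes[i]]  # people who did NOT fail to appear
        fpr[g] = (sum(decisions[i] for i in negatives) / len(negatives)) if negatives else None
    return {"accuracy": accuracy, "false_positive_rate_by_group": fpr}

# Toy historical cases: who actually failed to appear, what judges decided,
# and what the algorithm would have decided.
outcomes  = [False, True, False, False, True, False, True, False]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
judge     = [True, True, False, False, True, True, True, False]
algorithm = [False, True, False, False, True, False, True, True]

print("judges:   ", evaluate(judge, outcomes, groups))
print("algorithm:", evaluate(algorithm, outcomes, groups))
# "Better than humans" needs to hold on both axes: overall accuracy, and whether
# the errors fall disproportionately on particular groups.
```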
Ahh, thanks. Didn't catch that one yesterday.
From a utilitarian perspective, you could see this change as a way to simply reduce the harm done by pretrial incarceration. Instead of the process being "You're poor? You're in jail." (and I assume poorer people are much more likely to be criminally charged), it should be based on something else.
People will get fucked over by the system either way. But we can probably do better than just jailing all of the poor people.