11 votes

San Francisco says it will use AI to reduce bias when charging people with crimes

20 comments

  1. [19]
    ClearlyAlive
    (edited)
    Link

    Wood says the new tool is basically just a lightweight web app that uses several algorithms to automatically redact a police report, recognizing words in the report using computer vision and replacing them with generic versions like Location, Officer #1, and so on.


    AI

    Yeah no.

    But apart from that, I think this is quite a good move. This is how technology should be used: to reduce our human biases and make it easier to be just. It would be interesting if we first ran the data through this program before feeding it into predictive policing systems, to reduce the biases currently present.

    The software will even be open-sourced, ticking more boxes off my good government software criteria.
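
    To make the quoted description concrete, here's a toy sketch (mine, not from the article) of what a redaction pass could look like in Python; the hard-coded name and place lists are hypothetical stand-ins for what the real tool reportedly does with trained recognizers:

        import re

        # Hypothetical lookup tables; the actual tool reportedly recognizes
        # these with computer vision models, not hard-coded lists.
        OFFICER_NAMES = ["Smith", "Jones"]
        PLACES = ["Mission District", "Tenderloin"]

        def redact(report):
            # Replace "Officer <name>" with a numbered generic label.
            for i, name in enumerate(OFFICER_NAMES, start=1):
                report = re.sub(rf"\bOfficer {name}\b", f"Officer #{i}", report)
            # Replace known place names with a generic token.
            for place in PLACES:
                report = report.replace(place, "Location")
            return report

        print(redact("Officer Smith responded to a call in the Mission District."))
        # Officer #1 responded to a call in the Location.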

    10 votes
    1. [13]
      stu2b50
      Link Parent

      The original definition of AI is apparently just dead now; there is nothing about this tool that has anything to do with a rational agent interacting with an environment. Which is fine, I suppose, but I don't even know what the new de facto definition is. Some mash of maChIne LeaRniNg (by which of course they specifically mean neural networks, because linear regression isn't cool enough) and, uh, "algorithms".

      12 votes
      1. NaraVara
        Link Parent

        Literally just stick some IF THEN statements somewhere in a process and you're doing ARTIFICIAL INTELLIGENCE.

        Shoehorn a transaction ledger in there somewhere and suddenly you're on THE BLOCKCHAIN baby!

        All you gotta do now is find a credulous techno-hypebeast with VC money to shovel your way.

        5 votes
      2. [11]
        teaearlgraycold
        Link Parent
        Today "AI" means nothing more than software that makes a decision. AGI is the proper term for what a layman might think AI is.

        Today "AI" means nothing more than software that makes a decision. AGI is the proper term for what a layman might think AI is.

        5 votes
        1. [10]
          stu2b50
          Link Parent

          What happened to the technical definition of AI? Does BFS count as "AI"? Cuz it originally did. CFR? MCTS? Minimax?

          And an SVM wouldn't, because it has nothing to do with agents trying to act rationally in an environment. Now a neural network, even if it's just a classifier or regression model, is "AYEE EYE".

          1 vote
          1. [9]
            Amarok
            Link Parent

            The field has diversified somewhat.

            Take neural networks. These are more akin to muscle memory. They don't think; they simply make lightning-fast comparisons and evaluations, and they don't do this in any way like humans do. Self-driving cars watch humans drive for years and build up a database of human moves to replay when road conditions match ones where they saw a human do something. In a way, this is just selective recording and selective playback of certain kinds of input and output, often unique to each system, but always extremely focused on one simple task. The closer to binary, the easier it is. I like to think of these like VCRs: they are just hella fancy recorders. What are they good for? Reflexes, a nervous system of sorts. Rapid response to external stimuli. This is what you want for a collision-detection system in your car, or tracking software that's keeping a camera pointed at a target, or even for flying drones.

            If you put a pair of these networks together and, say, teach one of them what a bird looks like and another how to draw a bird (based on giving it billions of images of birds to start with), you can get these two engines at opposite sides of the same narrow task to train each other. These are called generative adversarial networks, and they're all the rage lately. This is useful for something like recognizing anomalies in retinal scans better than humans do. In every case, though, these are binary, simple, basic tasks. I think this is the kind Zuckerberg talks about saving us from the moderation apocalypse, and it's also why I don't believe him - that task is well beyond simple.

            Then there's something like Watson, which is just an amalgam of hundreds of these algorithms all working together, driven by something like a consensus protocol. Watson wins Jeopardy because it has an entire chorus of these simple checks inside it (several thousand, if memory serves) and knows how to tell when they are all converging on the same answer through different means, using the same narrow comparisons to tally the results of all of its guesses (and it does understand a 'confidence' rating to some degree). It's a very fancy guessing engine. That has uses too, but it's not intelligent, not thinking. It's more like a living library, just barely smart enough to look up answers. We'd all love to have this in our pockets, and someday we will.
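
            A toy sketch of that consensus idea (my own illustration, not Watson's actual architecture): many weak guessers, each with a confidence, tallied together.

                from collections import defaultdict

                # Hypothetical guessers: each returns (answer, confidence).
                guessers = [
                    lambda q: ("Toronto", 0.4),
                    lambda q: ("Chicago", 0.9),
                    lambda q: ("Chicago", 0.7),
                ]

                def consensus(question):
                    scores = defaultdict(float)
                    for g in guessers:
                        answer, confidence = g(question)
                        scores[answer] += confidence  # tally confidence per answer
                    return max(scores, key=scores.get)

                print(consensus("Largest city in Illinois?"))  # Chicago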

            These are all examples of narrow AI - narrow meaning focused on one single task, not capable of moving beyond it.

            An AGI, or artificial general intelligence, is what a real, thinking, learning, cognitive entity would be called. It has a sense of self, a sense of the world, some form of knowledge storage, and the ability to experiment and learn. It might even be able to modify its own code someday. Odds are when you turn it on, it's going to be at a toddler's level for a long time, until you teach it. It should be possible to start it up with a lot of knowledge, so think of a toddler with the equivalent of a couple hundred PhDs - but no lab experience at all to back that up. There's no shortcut to wisdom, even for a machine. That only comes with time; it's what you learn from reality kicking your ass every day for years, not what you know.

            There was this notion that AI was like a human, able to think and rationalize - and this sort of AI has no basis in reality at present. It's a fiction of mass media and sci-fi books/films, often used as a boogeyman or a savior depending on who is hustling whom. While it may be technically possible to create one, it requires scientific definitions of cognition we simply do not have. As brain-scan fidelity increases and we study this problem, sure, someday we can make progress here. That's decades down the road at best, possibly centuries, and yes, possibly never, depending on some interpretations of P!=NP. Having hardware capable of running this kind of mind does not mean we can create one, or that one will magically appear out of nowhere just because we have a comfy place for it to live. We still have to steal the code from nature for how a mind works, make sense of it, and re-implement it on silicon.

            We're going to be looking at narrow AI becoming an integral part of most devices, giving everything its own tiny, simple, focused operating system. This is where 'smart' devices come in. There's usually an advantage to having smart things if you are an insurer, because the smart part helps cut down on trouble, which saves money. There's going to be real pressure to smart-ify things whenever there's risk involved, much like we've already automated air traffic control and are starting to automate driving.

            I hope that clears up some of the nomenclature confusion. This is a very general summary; there may be more types on the horizon than muscle memory, guessing engines, basic feedback systems, and spoken-word interfaces. Right now it's mostly smoke and mirrors. It's a hard field!

            3 votes
            1. [8]
              stu2b50
              (edited)
              Link Parent

              Neural networks aren't anything like "muscle memory". Their name is a misnomer to begin with. Think about a massive nonlinear function with millions of parameters, composed of piecewise linear functions.

              You can see how that can approximate any real function. Now let's use one of the most naive optimization techniques, literally barely an improvement over hill climbing, only guaranteed to converge on a global optimum with a CONVEX loss function, when used on a loss function that is so NOT CONVEX it's not even funny. And just pray it's smooth enough that you'll get somewhere close.

              They don't think; they simply make lightning-fast comparisons and evaluations, and they don't do this in any way like humans do.

              You know what a neural network is? It's max(w@x + b, 0) chained together. It's not magic, it's not biological, it has nothing to do with our brain. It's a bunch of linear functions made non-linear by taking the max with 0. Then you optimize it and pray it becomes useful.
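
              To make the chaining concrete, here's a minimal numpy sketch (my own illustration; the layer sizes are arbitrary):

                  import numpy as np

                  rng = np.random.default_rng(0)

                  # Two layers: each is an affine map followed by max(., 0).
                  W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)
                  W2, b2 = rng.standard_normal((1, 8)), np.zeros(1)

                  def network(x):
                      h = np.maximum(W1 @ x + b1, 0)  # max(w@x + b, 0)
                      return W2 @ h + b2              # final affine layer

                  print(network(rng.standard_normal(4)))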

              3 votes
              1. [3]
                Deimos
                (edited)
                Link Parent

                Please don't put things like "Bruh what" and "wat, just wat" in your comments. The rest of it would be just as valid and convey the same information without those lines, and it just makes you come off as condescending.

                3 votes
                1. [2]
                  stu2b50
                  Link Parent

                  I removed it. It was a knee-jerk reaction to so much of what is just, well, not right being spoken with so much confidence.

                  4 votes
                  1. Deimos
                    Link Parent

                    Thanks, I appreciate it. We can delete these comments if you like and clean up the off-topic clutter a little.

                    2 votes
              2. [4]
                Amarok
                Link Parent

                Alright, walk me through it, I want to learn. I have read that Google's cars were just watching humans and building databases of how to react from that, and that that's how a neural network operates. Are you describing those mechanics, and is that a fair layman's summary of the effect?

                1 vote
                1. [3]
                  stu2b50
                  Link Parent

                  It is a very, very high-level description that unfortunately has unintended implications. Think about taking in a bunch of datapoints and trying to predict the next datapoint with a linear model. Watching data and seeing how to react to it.

                  That's called linear regression, and your pocket calculator does it.

                  Really, the key behind neural networks is the universal approximation theorem (https://en.wikipedia.org/wiki/Universal_approximation_theorem). I gave the intuitive idea up there: if you have a bunch of piecewise linear functions, i.e. a bunch of short straight lines, you can approximate a curve, or any function, with enough parameters.

                  So just have a bunch of parameters! Think of a function that takes in a picture and outputs the coordinates of a car. That is indeed a function. Due to the universal approximation theorem, a "neural network" can approximate this theoretical function to arbitrary closeness with enough neurons.

                  But how do you calculate what the parameters should be? Well, there's no good answer. For performance reasons (backprop is linear, and so relatively fast) we can use gradient descent (as opposed to more "intelligent" optimization techniques like Newton's method) and hope for the best. Note the hope-for-the-best part, because there's zero guarantee that you won't get stuck in a local optimum. And neural networks do! All the time!

                  So it's more feeding images, in the form of 3xWxH matrices, into large non-linear differentiable functions and then naively optimizing their output to be as close as possible to human-labeled data than it is watching humans.
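
                  As a toy illustration of the above (my own sketch, with arbitrary sizes): fit a one-hidden-layer relu network to a sine curve with plain gradient descent, and it approximates the curve with short straight segments - with no guarantee it doesn't get stuck, per the above.

                      import numpy as np

                      rng = np.random.default_rng(0)
                      x = np.linspace(-3, 3, 200).reshape(-1, 1)  # inputs
                      y = np.sin(x)                               # the "unknown" function to fit

                      # One hidden relu layer: enough short line segments to bend.
                      W1, b1 = rng.standard_normal((1, 32)), np.zeros(32)
                      W2, b2 = rng.standard_normal((32, 1)) * 0.1, np.zeros(1)

                      lr, n = 0.05, len(x)
                      for step in range(5000):
                          h = np.maximum(x @ W1 + b1, 0)   # hidden activations
                          pred = h @ W2 + b2
                          g_pred = 2 * (pred - y) / n      # gradient of mean squared error w.r.t. pred
                          # Backprop through both layers.
                          gW2 = h.T @ g_pred
                          gb2 = g_pred.sum(axis=0)
                          g_h = (g_pred @ W2.T) * (h > 0)  # relu gradient mask
                          gW1 = x.T @ g_h
                          gb1 = g_h.sum(axis=0)
                          W1 -= lr * gW1; b1 -= lr * gb1
                          W2 -= lr * gW2; b2 -= lr * gb2

                      print(((pred - y) ** 2).mean())  # loss should end up small-ish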

                  2 votes
                  1. [2]
                    Amarok
                    Link Parent

                    So every problem, then, is a different map, and the rules for how to play on that map have to be guessed by the programmers and refined for the game to have a useful outcome. Each map is unique to one problem space because we just can't model much larger than that yet. Fancier tools like Watson have lots of maps in them, all trying to solve the same problem in their own way, with another map on top trying to tie it all together into a converging answer. Is that a more accurate way to get the idea across to a layman?

                    1 vote
                    1. stu2b50
                      Link Parent

                      Take the context of self-driving cars. One of the things you may want is a function that detects obstacles in camera footage. A "traditional" approach would be to use things like edge detection and other heuristics to hard-code an algorithm that gets bounding boxes out of images. But that's pretty hard.

                      Now, one step further into "machine learning". Instead of trying to explicitly write out an algorithm, you can have data, with labels, and you can try to build a model that represents the data. If you have new data, the model would (hopefully) be representative of the true distribution of the data.

                      Let's use a linear model: W@X + b (where @ is the dot product). You have a vector of weights that just looks like a list of numbers [w_1, w_2, ... w_n]. You take those weights, multiply them by their respective pixels [p_1, p_2, ... p_n], add them together, then add b, which is just another list.

                      There are optimization techniques like gradient descent that can make W and b such that the output of W@X + b is as close to the labeled data as possible. Note that this is essentially the perceptron algorithm.


                      Now, that's probably going to be an awful model, because the position of objects isn't linear with respect to the pixels. We can make a more descriptive model by first making our function non-linear with the relu function (i.e. max(x, 0)), then sticking a bunch of them layered together. Now this model isn't just a straight line.

                      And we can do the same procedure, and hopefully it gets close.
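
                      A compressed numpy sketch of that progression (my own; the shapes and the 4-number bounding-box output are assumptions for illustration):

                          import numpy as np

                          rng = np.random.default_rng(0)
                          x = rng.standard_normal(64)  # a tiny flattened "image"

                          # Linear model: W@X + b, a straight line in the pixels.
                          W, b = rng.standard_normal((4, 64)), np.zeros(4)
                          box_linear = W @ x + b           # 4 numbers: a bounding box, say

                          # Layered version with relu in between: no longer a straight line.
                          W1, b1 = rng.standard_normal((32, 64)), np.zeros(32)
                          W2, b2 = rng.standard_normal((4, 32)), np.zeros(4)
                          h = np.maximum(W1 @ x + b1, 0)   # relu: max(x, 0)
                          box_deep = W2 @ h + b2

                          print(box_linear, box_deep)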

                      2 votes
    2. [5]
      krg
      Link Parent

      I'm trying to imagine a scenario where this reduction of bias is taken to its furthest extent. A judge sitting in a room alone with a screen that shows text arguments from the plaintiff and text arguments from the defendant, who are also sitting in separate rooms, maybe with their attorneys in yet other rooms so that their style of prosecution or defense isn't informed by who their client is. Hell, maybe the arguments would have to be in the form of pure logic to take away any bias around language. In this case, the judge would basically be a mathematician who determines whether or not a proof is valid. All happening in a closed-circuit system.

      Maybe some sci-fi writer has already proposed something similar.

      3 votes
      1. [4]
        NaraVara
        Link Parent

        There's a short story from the '50s or something about that. A guy tries to get a refund from a book store for a copy of "Treasure Island" he never got and, through a sequence of automated systems misunderstanding each other, ends up being executed for piracy and kidnapping.

        I wish I could remember what it was called.

        6 votes
        1. [3]
          mat
          Link Parent

          Wow, that's an amazingly unsearchable series of keywords, unless you're looking for bookshops' refund policies on Treasure Island. It's not helped by the fact that Stevenson also wrote a book called Kidnapped, so adding that into any search is arguably worse than not including it.

          4 votes
          1. [2]
            NaraVara
            Link Parent
            • Exemplary

            After a long and arduous quest, I found it!

            It's called Computers Don't Argue.

            I was off on a few of the details too, but the thing that helped me zero in was that I remembered someone mentions a punch card somewhere in the story.

            7 votes
            1. mat
              (edited)
              Link Parent

              Oh, nice find! I casually googled around for an hour last night while watching the cricket but got nowhere.

              Thanks for the link, I shall read that later.

              edit: Read over lunch, really good! Also a bit too believable...

  2. Micycle_the_Bichael
    Link

    I'm not holding my breath for this to be good. It seems less prone to some of the issues with AI and policing in the past (which the author links to at the bottom of the article), but I don't really see this changing much of anything, unfortunately. The shortcomings of the project that I can think of are all listed in the article.

    4 votes