42 votes

Concerns about new facial recognition software implemented by TSA at US airports

24 comments

  1. [11]
    Nox_bee
    (edited )
    Link

    On the one hand, I'm deeply unhappy that the TSA exists and would like to see the entire agency disbanded.

    On the other, I went through this particular system recently and didn't find it all that bothersome. Replacing the fallible human method of "yep, that looked like you" with a more systematic matching program is more consistent and more fair.

    I could raise concerns about my picture being stored, but again that's a moot point because as soon as I'm in the airport I'm already under camera and I'm presenting a document with my photo on it. So that particular battle is already lost.

    The bias that might be introduced by a program seems like a small fraction of the bias that every human would introduce based on whether they're irritable, tired, or just bad at reading faces.

    So what is there to object to? I would certainly like to see an audit of this program that proves no photos are being stored permanently - as the posters promise - but even if that were true, it still doesn't mean much when I remember they've got airport camera footage every time I get to my terminal.

    25 votes
    1. [3]
      tealblue
      Link Parent

      The view that the computer is less biased is part of what makes it potentially worrisome. It's quite hard to argue for your innocence when you're up against a closed-source algorithm that is perceived to be free of human error.

      37 votes
      1. Nox_bee
        Link Parent

        Hmmm, good point. I hadn't thought of that.

        Fortunately the TSA checkpoints are still staffed with people and I doubt that will change any time soon - but that's a particularly horrifying possibility in all this automation, I agree.

        3 votes
      2. Asinine
        Link Parent

        perceived to be free of human error

        Therein lies the issue: that perception should never exist.

        1 vote
    2. [5]
      boxer_dogs_dance
      Link Parent

      I am far from expert, but I believe part of the concern is from proportionally fewer ethnic minorities in training datasets leading to the machine being disproportionately inaccurate re members of minority groups.

      There are similar concerns re women's health care because of a historic lack of women participating in drug trials. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4800017/

      I share your opinion of the TSA. Watching that whole system being rolled out so quickly in response to 9/11 in 2001 changed my perspective about politics forever. The system and its supporters were ready, waiting for an emergency in order to implement it. I miss meeting people at the gate as they exit the plane, among other things.

      29 votes
      1. [4]
        Nox_bee
        Link Parent

        From what I've heard, some of it stems from the fact that there's less contrast information to be gained from a dark face.

        It's not a racial bias issue, but an issue of camera contrast throttling. A good solution would be something like the Walgreens passport photo area, where everyone is guaranteed to get the same consistent lighting and a photo taken with HDR cameras.
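
        Purely as a toy sketch (OpenCV here is just my own example; I have no idea what hardware or software the TSA actually runs), the kind of contrast normalization that consistent lighting helps with looks roughly like this:

          # Hypothetical preprocessing, not TSA's actual pipeline: normalize local
          # contrast before matching so under-exposed captures keep more detail.
          import cv2

          def normalize_contrast(path):
              img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
              if img is None:
                  raise FileNotFoundError(path)
              clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
              return clahe.apply(img)  # spreads contrast so dark regions keep detail

          # e.g. cv2.imwrite("capture_fixed.jpg", normalize_contrast("capture.jpg"))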

        8 votes
        1. cfabbro
          (edited )
          Link Parent

          From what I've heard, some of it stems from the fact that there's less contrast information to be gained from a dark face.

          Do you have a source for that? Because from everything I have read about the issue, it's not cameras or lighting that cause the problems, it's the training sets and the developers lacking diversity:
          https://www.scientificamerican.com/article/police-facial-recognition-technology-cant-tell-black-people-apart/

          For companies, creating reliable facial recognition software begins with balanced representation among designers. In the U.S. most software developers are white men. Research shows the software is much better at identifying members of the programmer’s race. Experts attribute such findings largely to engineers’ unconscious transmittal of “own-race bias” into algorithms.

          Own-race bias creeps in as designers unconsciously focus on facial features familiar to them. The resulting algorithm is mainly tested on people of their race. As such many U.S.-made algorithms “learn” by looking at more white faces, which fails to help them recognize people of other races.

          Using diverse training sets can help reduce bias in FRT performance. Algorithms learn to compare images by training with a set of photos. Disproportionate representation of white males in training images produces skewed algorithms because Black people are overrepresented in mugshot databases and other image repositories commonly used by law enforcement. Consequently AI is more likely to mark Black faces as criminal, leading to the targeting and arresting of innocent Black people.

          But that is not the only issue with them. Even if a facial-recognition algorithm itself weren't biased, the results of using it likely still would be. From the same article:

          First, the concentration of police resources in many Black neighborhoods already results in disproportionate contact between Black residents and officers. With this backdrop, communities served by FRT-assisted police are more vulnerable to enforcement disparities, as the trustworthiness of algorithm-aided decisions is jeopardized by the demands and time constraints of police work, combined with an almost blind faith in AI that minimizes user discretion in decision-making.

          Police typically use this technology in three ways: in-field queries to identify stopped or arrested persons, searches of video footage or real-time scans of people passing surveillance cameras. The police upload an image, and in a matter of seconds the software compares the image to numerous photos to generate a lineup of potential suspects.

          Enforcement decisions ultimately lie with officers. However, people often believe that AI is infallible and don’t question the results. On top of this using automated tools is much easier than making comparisons with the naked eye.

          AI-powered law enforcement aids also psychologically distance police officers from citizens. This removal from the decision-making process allows officers to separate themselves from their actions. Users also sometimes selectively follow computer-generated guidance, favoring advice that matches stereotypes, including those about Black criminality.

          There’s no solid evidence that FRT improves crime control. Nonetheless, officials appear willing to tolerate these racialized biases as cities struggle to curb crime. This leaves people vulnerable to encroachments on their rights.

          The time for blind acceptance of this technology has passed. Software companies and law enforcement must take immediate steps towards reducing the harms of this technology.

          8 votes
        2. elfpie
          Link Parent

          The racial bias argument comes from the fact that you already thought of a good solution without even working on the project. Or from the fact that the system can be ready for deployment with flaws that affect one group more than others.

          2 votes
        3. theoreticallyme
          Link Parent

          My friend worked on solving racial bias problems in face recognition for a big tech company. From what he's told me, it's a training data problem. Tech companies first scanned whoever was convenient, which meant more light faces than dark and encoded tech hiring biases into the system. The solution was to go and capture face data in places like Africa to remove training biases.
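
          As a toy illustration of the rebalancing idea (invented field names, not anything from his actual project), you can oversample under-represented groups until each contributes equally during training:

            # Toy dataset rebalancing sketch, not the company's real pipeline.
            import random
            from collections import defaultdict

            def rebalance(samples, group_key, seed=0):
                # Oversample every group up to the size of the largest one so the
                # model sees each group equally often during training.
                rng = random.Random(seed)
                groups = defaultdict(list)
                for s in samples:
                    groups[s[group_key]].append(s)
                target = max(len(g) for g in groups.values())
                balanced = []
                for g in groups.values():
                    balanced.extend(g)                                  # originals
                    balanced.extend(rng.choices(g, k=target - len(g)))  # resampled copies
                rng.shuffle(balanced)
                return balanced

            # e.g. rebalance(face_records, group_key="skin_tone_bin")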

    3. JackA
      Link Parent

      The problem is that it makes the already invasive ID checks convenient and unconscious. As with every part of our surveillance state that any agency rolls out, this is just more normalization of our lack of privacy.

      It's never long before the technology is refined and rolled out into our everyday lives (usually via the police) justified by some shallow "public safety" measure.

      Face recognition on the highways to catch bad guys and save kidnapped kids. At the hospital to pull up medical records to save lives. Schools for god knows what, it never actually matters. Public sector vendors eventually branch off and make a commercial product, and pretty soon it's impossible to sneak into a concert without a ticket, stores pull your shopping data as you walk through, and your every physical move gets added to your advertising ID.

      There's a huge difference between merely showing up in a massive CCTV backup and having your name tagged to that footage, and to all other footage, in one searchable database.

      Those "we'll delete all the footage" claims don't mean anything either if they just pull a log that says "so and so was here at this time stamp" and then delete the raw footage immediately after. The tracking of me is the invasive part, not just the recording.

      20 votes
    4. chiliedogg
      Link Parent

      The issue is the consequences of the bias. The human with a stronger bias is problematic, but an AI with a weaker bias connected with a database of thousands of similar-looking people is much more likely to falsely identify someone.
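
      Back-of-the-envelope, with made-up numbers (not measured TSA rates), the 1:N math looks roughly like this:

        def false_positive_identification_rate(per_comparison_fmr, gallery_size):
            # Chance that at least one wrong gallery entry matches a probe face,
            # assuming each comparison errs independently.
            return 1 - (1 - per_comparison_fmr) ** gallery_size

        # A 0.01% false match rate per comparison, checked against 10,000
        # similar-looking people, wrongly flags someone about 63% of the time:
        print(false_positive_identification_rate(0.0001, 10_000))  # ~0.63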

      3 votes
  2. [5]
    chocobean
    Link

    the technology [...] could boost discrimination against already marginalized communities. [...] citing research showing that Asian and African American people are up to 100 times more likely to be misidentified than white men.

    Is this really happening?

    Yes. There have been some pretty high-profile cases, including several in which Black men were wrongfully arrested. (In one case in Detroit, a man was handcuffed in front of his young daughters and taken to jail after a facial recognition system incorrectly matched his driver’s license photo with a still image from a security video of a shoplifting incident).

    Some studies have shown that facial recognition software misidentifies women of color more than one-third of the time. Other studies have shown it doesn’t work as well for women, children or the elderly. [...] those who identify as agender, genderqueer or nonbinary were mischaracterized 100% of the time.


    This is huge.

    If a human agent pulls you over because they're suspicious, there is still some kind of accountability where the agent will have to answer why they are suspicious. If an agent pulls over 1000 Asian women for no reason, there's at least some perceived accountability. If a machine dings 1000 Asian women in a row we are just going to have agents shrug and say sorry my hands are tied.

    AI isn't magic. It works on training data, which means bias is everything. Imagine if someone of your description commits an atrocity and now you are machine-flagged every time you drive and every time you travel. A human cop in your area might recognize you after the 10th stop and wave you past. A human cop using a scanner will have to say, sorry dude, you know the drill.

    9 votes
    1. [4]
      Casocial
      Link Parent

      In this scenario, shouldn't the goal be to improve upon the algorithm's training sets instead of scrapping the idea altogether? As the adage goes, garbage in garbage out.

      I also wish the article had brought up the likelihood of misidentification when using human recognition vs. an algorithmic one. Flawed as the algorithm is, it might actually be an improvement on the status quo.

      Accountability only ends when people let it. If action can be taken against a biased human agent, the same applies to a flawed algorithm.

      1 vote
      1. [2]
        chocobean
        Link Parent

        Where are they going to get good training sets? Are they going to pay us humans fairly to tag ourselves, or are they going to steal our data and have Clickworkers identify us for pennies?

        My point is that even if humans make more mistakes, humans are able to bear the responsibility for their biases, whereas there's no blame for a machine. Machine dings are going to be used as "probable cause" and be perceived as bias-free and more accurate, when in fact their biases will be invisible and untraceable.

        Accountability ends when people let it, and this is precisely one of those fronts we need to fight. Can you imagine if we allow the use of poorly trained medical diagnostic machines that get surgery wrong on women and minorities 100 times worse? Then why the heck should we allow flawed machines into service before they have proven to be ready?

        11 votes
        1. Casocial
          (edited )
          Link Parent

          By no means am I saying that the facial recognition system should be deployed in its current state. Nor am I saying it would be simple to get unbiased training sets. If utilizing an algorithm can make the ID process more efficient, then surely some of the manpower can be redirected towards improving it.

          Given that research exists to point out bias in the algorithm mentioned in this article, there's clearly a method to track and review its output. By definition, then, it is not invisible and untraceable. The perception that an algorithm is infallible isn't the fault of the algorithm, but a misconception held by those employing it.
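
          As a sketch of what that tracking could look like (toy fields I'm inventing, not how the cited research actually audited it), a per-group false match rate is straightforward to tally once decisions and ground truth are logged:

            # Toy audit tally with hypothetical fields; a real review would use
            # standardized testing (e.g. NIST's vendor evaluations), not this.
            from collections import defaultdict

            def false_match_rate_by_group(records):
                # records: dicts with 'group', 'matched' (bool) and
                # 'same_person' (bool, ground truth).
                wrong, total = defaultdict(int), defaultdict(int)
                for r in records:
                    if not r["same_person"]:            # only non-mated pairs count
                        total[r["group"]] += 1
                        wrong[r["group"]] += r["matched"]
                return {g: wrong[g] / total[g] for g in total}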

          It may be clearer who to point fingers at when a human exhibits bias compared to an algorithm, but the end goal isn't to know who to blame. Reducing the amount of bias is. The software cited in this article might not be ready for use, but that doesn't mean the use of software for passenger ID is inherently flawed.

          2 votes
      2. vord
        Link Parent

        Nah better to throw the whole thing out. The TSA has always been an affront to liberty, and proven largely an ineffective nuisance.

        4 votes
  3. [3]
    specwill
    Link

    Remember when the TSA was like, "Let's expose a bunch of people to x-rays without considering the health implications from scanners that don't even do their job?"

    This is an agency that has accrued far more failures than successes while trying to justify its budget and existence, and it is now conducting another experiment on the public regardless of its effects, effectiveness, or broader implications.

    6 votes
    1. [2]
      pridefulofbeing
      Link Parent

      From the ProPublica article on scanners, summarized by ChatGPT:

      Body scanners deployed by the Transportation Security Administration (TSA) have come under scrutiny following reports that they are less effective than previously thought. Government officials and others have discovered weaknesses and vulnerabilities in the scanners, which are designed to prevent terrorists from boarding planes with explosives. However, the TSA remains convinced that body scanners are the best technology available, having supposedly found an impressive number of dangerous or illicit items. The issue is complex and contentious, as people are concerned not only about security but also their privacy and safety.

      1. specwill
        Link Parent

        It's funny that ChatGPT both-sides the article when there's a preponderance of evidence the x-ray scanners are useless. Like, one study found they could be fooled by just not packing your explosives as a brick.

        4 votes
  4. [4]
    AgnesNutter
    Link

    This is already quite common in other countries, no? I wonder if it’s the same programme or one newly made; if it’s the same programme then many of these concerns have been answered elsewhere (eg effectiveness on darker skin).

    2 votes
    1. Bluebonnets
      Link Parent

      Yeah I wonder if it’s similar. When we went to Australia a few years ago we didn’t have to talk to anyone coming in, just had our faces and ID scanned by a computer and it pushed us through.

      Actually, that’s not true - I got pushed through. My husband had to have someone confirm his ID because his passport was almost 10 years old by then and he had more hair than he does now…machine didn’t recognize him haha. To be fair, the customs agent laughed and said he’d barely recognize him too.

      1 vote
    2. [2]
      boxer_dogs_dance
      Link Parent

      I would love to know whether it is the same program. But the question might also have just been ignored by European countries... It would be good to learn more of the details.

      1. AgnesNutter
        Link Parent

        Yes, it might have been, but it's been in use long enough that these issues would surely have been reported somewhere. In Australia it's been in use for 7 or 8 years now, I think. Plenty of time to become aware of things like racial bias.

  5. ComicSans72
    Link

    I'm pretty in favor of this at airports. You're matching photos against a live person in well lit conditions with (ok) cameras in both places. And they can just manually review any flags so you're not increasing any real false arrest rate or anything (theoretically). You're just speeding shit up for everyone else.

    I go through customs a lot though and it's just such a parade of people staring at photos and pretending to read passports before stamping things.

  6. Comment removed by site admin
    Link