Concerns about new facial recognition software implemented by TSA at US airports
Link information
- Title: Why new facial-recognition airport screenings are raising concerns
- Published: Jul 11, 2023
- Word count: 1184 words
On the one hand, I'm deeply unhappy that the TSA exists and would like to see the entire agency disbanded.
On the other, I went through this particular system recently and didn't find it all that bothersome. Replacing the fallible human method of "yep, that looked like you" with a more systematic matching program is more consistent and more fair.
I could raise concerns about my picture being stored, but again that's a moot point because as soon as I'm in the airport I'm already under camera and I'm presenting a document with my photo on it. So that particular battle is already lost.
The bias that might be introduced by a program seems like a small fraction of the bias that every human would introduce based on whether they're irritable, tired, or just bad at reading faces.
So what is there to object to? I would certainly like to see an audit of this program that proves no photos are being stored permanently - as the posters promise - but even if that were true, it still doesn't mean much when I remember they've got airport camera footage every time I get to my terminal.
The view that the computer is less biased is part of what makes it potentially worrisome. It's quite hard to argue for your innocence when you're up against a closed-source algorithm that is perceived to be free of human error.
Hmmm, good point. I hadn't thought of that.
Fortunately the TSA checkpoints are still staffed with people and I doubt that will change any time soon - but that's a particularly horrifying possibility in all this automation, I agree.
Therein lies the issue: that perception should never exist.
I am far from an expert, but I believe part of the concern is that ethnic minorities are proportionally underrepresented in training datasets, leading to the machine being disproportionately inaccurate for members of minority groups.
There are similar concerns regarding women's health care because of a historic lack of women participating in drug trials. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4800017/
I share your opinion of the TSA. Watching that whole system being rolled out so quickly in response to 9/11 in 2001 changed my perspective about politics forever. The system and its supporters were ready, waiting for an emergency in order to implement it. I miss meeting people at the gate as they exit the plane, among other things.
From what I've heard, some of it stems from the fact that there's less contrast information to be gained from a dark face.
It's not a racial bias issue, but an issue of camera contrast throttling. A good solution would be something like the Walgreens passport photo area, where everyone is guaranteed to get the same consistent lighting and a photo taken with HDR cameras.
Do you have a source for that? Because from everything I have read about the issue, it's not cameras or lighting that causes the problems, it's the training sets and developers lacking diversity that causes it:
https://www.scientificamerican.com/article/police-facial-recognition-technology-cant-tell-black-people-apart/
But that is not the only issue with them. Even if a facial-recognition algorithm itself weren't biased, the results of using it likely still would be. From the same article:
The racial bias argument comes from the fact that you thought of a good solution without even working on the project, or from the fact that the system can be declared ready for deployment with flaws that affect one group more than others.
My friend worked on solving racial bias problems in face recognition for a big tech company. From what he's told me, it's a training data problem. Tech companies first scanned whoever was convenient, which meant more light faces than dark ones and encoded tech-industry hiring biases into the system. The solution was to go and capture face data in places like Africa to remove training biases.
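The sample-size side of that training-data problem can be illustrated with a toy model (this is a statistical sketch, not any vendor's actual pipeline): if a group's "template" is estimated from fewer examples, the template is noisier, so genuine users from that group get rejected more often.

```python
import random

random.seed(0)

def false_reject_rate(n_train: int, trials: int = 10_000) -> float:
    """Toy model: a group's template is the mean of n_train noisy 1-D
    feature samples (true value 0, unit noise). A genuine probe is
    accepted if it lands within 1.0 of the template. Fewer training
    samples -> noisier template -> more false rejections."""
    errors = 0
    for _ in range(trials):
        template = sum(random.gauss(0, 1) for _ in range(n_train)) / n_train
        probe = random.gauss(0, 0.5)  # genuine user, modest capture noise
        if abs(probe - template) > 1.0:
            errors += 1
    return errors / trials

well_sampled = false_reject_rate(n_train=100)   # well-represented group
under_sampled = false_reject_rate(n_train=5)    # under-represented group
```

Under these assumptions the under-sampled group's error rate comes out several times higher, purely from dataset imbalance, with no malice anywhere in the code.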
The problem is that it makes the already invasive ID checks convenient and unconscious. As with every part of our surveillance state that any agency rolls out, this is just more normalization of our lack of privacy.
It's never long before the technology is refined and rolled out into our everyday lives (usually via the police) justified by some shallow "public safety" measure.
Face recognition on the highways to catch bad guys and save kidnapped kids. At the hospital to pull up medical records to save lives. Schools for god knows what, it never actually matters. Public sector vendors eventually branch off and make a commercial product, and pretty soon it's impossible to sneak into a concert without a ticket, stores pull your shopping data as you walk through, and your every physical move gets added to your advertising ID.
There's a huge difference between showing up in a massive CCTV backup and having your name tagged and searchable to that footage and all other footage in one database.
Those "we'll delete all the footage" claims don't mean anything either if they just pull a log that says "so and so was here at this time stamp" and then delete the raw footage immediately after. The tracking of me is the invasive part, not just the recording.
The issue is the consequences of the bias. The human with a stronger bias is problematic, but an AI with a weaker bias connected with a database of thousands of similar-looking people is much more likely to falsely identify someone.
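The database-size effect is worth making concrete. Assuming (hypothetically) independent comparisons each with a small per-comparison false-positive rate, the chance of at least one false match grows quickly with the size of the gallery being searched:

```python
def p_false_match(fpr: float, gallery_size: int) -> float:
    """Probability of at least one false match when one probe face is
    compared against `gallery_size` enrolled faces, assuming independent
    comparisons each with false-positive rate `fpr` (illustrative numbers,
    not any real system's measured rates)."""
    return 1 - (1 - fpr) ** gallery_size

# A seemingly tiny 0.01% per-comparison error rate:
small = p_false_match(0.0001, 10)        # small watchlist: ~0.1%
large = p_false_match(0.0001, 100_000)   # large database: near-certain
```

So the same algorithm that looks nearly flawless in one-to-one verification can produce false hits almost every time it is run against a database of thousands of similar-looking people.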
This is huge.
If a human agent pulls you over because they're suspicious, there is still some kind of accountability where the agent will have to answer why they are suspicious. If an agent pulls over 1000 Asian women for no reason, there's at least some perceived accountability. If a machine dings 1000 Asian women in a row we are just going to have agents shrug and say sorry my hands are tied.
AI isn't magic. It works on training data and it means bias is everything. Imagine if someone of your description commits an atrocity and now you are machine flagged every time you drive and every time you travel. A human cop in your area might recognize you after the 10th stop and wave you past. A human cop using a scanner will have to say, sorry dude you know the drill.
In this scenario, shouldn't the goal be to improve upon the algorithm's training sets instead of scrapping the idea altogether? As the adage goes, garbage in garbage out.
I also wish the article would have brought up the likelihood of misidentification when using human recognition vs. an algorithmic one. Flawed as the algorithm is, it might actually be an improvement on the status quo.
Accountability only ends when people let it. If action can be taken against a biased human agent, the same applies to a flawed algorithm.
Where are they going to get good training sets? Are they going to pay us humans fairly to tag ourselves, or are they going to steal our data and have Clickworkers identify us for pennies?
My point is that even if humans make more mistakes, humans are able to bear the responsibility for their biases, whereas there's no blame for a machine. Machine dings are going to be used as "probable cause" and be perceived as bias-free and more accurate, when in fact their biases will be invisible and untraceable.
Accountability ends when people let it, and this is precisely one of those fronts we need to fight. Can you imagine if we allow the use of poorly trained medical diagnostic machines that get surgery wrong on women and minorities 100 times worse? Then why the heck should we allow flawed machines into service before they have proven to be ready?
By no means am I saying that the facial recognition system should be deployed in its current state. Nor am I saying it would be simple to get unbiased training sets. If utilizing an algorithm can make the ID process more efficient, then surely some of the manpower can be redirected towards improving it.
Given that research exists to point out bias in the algorithm mentioned in this article, there's clearly a method to track and review output. By definition then it is not invisible and untraceable. The perception that an algorithm is infallible isn't the fault of an algorithm, but a misconception by those employing it.
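That kind of review is easy to sketch: given logged decisions with ground truth (a hypothetical record format, assumed here for illustration), disaggregating false-match rates by group is a few lines, which is exactly why "invisible and untraceable" is a policy failure rather than a technical inevitability.

```python
def per_group_false_match_rate(records):
    """records: iterable of (group, machine_matched, is_same_person)
    tuples from audit logs. Returns the false-match rate per group,
    counting only impostor comparisons (is_same_person == False),
    since only those can produce a false match."""
    counts = {}
    for group, matched, is_same_person in records:
        if not is_same_person:
            fp, total = counts.get(group, (0, 0))
            counts[group] = (fp + (1 if matched else 0), total + 1)
    return {g: fp / total for g, (fp, total) in counts.items()}

audit_log = [
    ("group_a", True,  False),  # false match
    ("group_a", False, False),
    ("group_b", False, False),
    ("group_b", False, False),
    ("group_b", True,  True),   # genuine match, excluded from FMR
]
rates = per_group_false_match_rate(audit_log)
```

On this toy log the disparity (50% vs 0%) is immediately visible, which is the whole point of requiring disaggregated metrics from deployed systems.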
It may be clearer who to point fingers at when a human exhibits bias compared to an algorithm, but the end goal isn't to know who to blame. Reducing the amount of bias is. The software cited in this article might not be ready for use, but that doesn't mean the use of software for passenger ID is inherently flawed.
Nah better to throw the whole thing out. The TSA has always been an affront to liberty, and proven largely an ineffective nuisance.
Remember when the TSA was like, "Let's expose a bunch of people to x-rays without considering the health implications from scanners that don't even do their job?"
This is an agency that has accrued far more failures than successes trying to justify its budget and existence, conducting another experiment on the public regardless of its effects, effectiveness, or broader implications.
From the ProPublica article on scanners, summarized by ChatGPT:
It's funny that ChatGPT both-sides the article when there's a preponderance of evidence the x-ray scanners are useless. Like, one study found they could be fooled by just not packing your explosives as a brick.
This is already quite common in other countries, no? I wonder if it’s the same programme or one newly made; if it’s the same programme then many of these concerns have been answered elsewhere (eg effectiveness on darker skin).
Yeah I wonder if it’s similar. When we went to Australia a few years ago we didn’t have to talk to anyone coming in, just had our faces and ID scanned by a computer and it pushed us through.
Actually, that’s not true - I got pushed through. My husband had to have someone confirm his ID because his passport was almost 10 years old by then and he had more hair than he does now…machine didn’t recognize him haha. To be fair, the customs agent laughed and said he’d barely recognize him too.
I would love to know whether it is the same program. But the question might also simply have been ignored by European countries... It would be good to learn about this in more detail.
Yes it might have been, but it's been in use long enough that these issues would surely have been reported somewhere. In Australia it's been in use for 7 or 8 years now, I think. Plenty of time to become aware of things like racial bias.
I'm pretty in favor of this at airports. You're matching photos against a live person in well lit conditions with (ok) cameras in both places. And they can just manually review any flags so you're not increasing any real false arrest rate or anything (theoretically). You're just speeding shit up for everyone else.
I go through customs a lot though and it's just such a parade of people staring at photos and pretending to read passports before stamping things.