14 votes
Gender, race, and intersectional bias in resume screening via language model
Link information
- Title
- AI overwhelmingly prefers white and male job candidates in new test of resume-screening bias
- Authors
- Lisa Stiffler
- Published
- Oct 31 2024
- Word count
- 834 words
Hardly surprising: there's no reason to expect these models not to have the same problems as earlier language models in this respect, which we know are remarkably good at being biased in this way even when we try to eliminate sources of information about race and gender. These are pattern-finding machines, after all -- and these patterns are deeply embedded in our world and our language use. The unfortunate thing is that, unlike previous high-profile examples of biased models used for resume screening, I doubt this will stop companies from using them for this purpose, which will only compound the issue and further disadvantage those who are already at a disadvantage.
Direct link to paper: https://ojs.aaai.org/index.php/AIES/article/view/31748/33915