Study finds rampant discrimination in AI hiring models
Study shows AI hiring tools prefer white male names over Black names 100% of the time.
What you probably already know: As artificial intelligence is increasingly used to screen job candidates, many have expressed concerns that it could reinforce bias that was already a pernicious part of the hiring process. Now, a new study out of the University of Washington has found that AI tools used to screen candidates for job openings dramatically favor white men over other genders, races and ethnicities. The study, conducted by Kyra Wilson and Aylin Caliskan, found that massive text embedding (MTE) models, the technology underlying these screening systems, preferred white-associated names in 85% of cases and preferred women's names in only 11% of cases. The models rated resumes with Black male names lower in 100% of cases, regardless of qualifications.
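To make the finding concrete, here is a minimal sketch of the kind of name-substitution audit the study implies: the same resume is embedded with only the candidate's name swapped, and its similarity to a job posting is compared across names. The model, names, resume text, and job posting below are illustrative assumptions, not the study's actual materials or code.

```python
# Illustrative name-substitution audit for an embedding-based resume screener.
# Hypothetical sketch only; the model and texts are stand-ins, not the study's.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in embedding model

job_posting = "Software engineer: Python, distributed systems, 5+ years experience."
resume_body = (
    "{name}\n"
    "Software engineer with 6 years of experience in Python and "
    "distributed systems; led a migration to a microservices architecture."
)

# Identical resume text; only the name changes.
names = ["Emily Walsh", "Greg Baker", "Lakisha Washington", "Jamal Robinson"]

job_vec = model.encode(job_posting, convert_to_tensor=True)
for name in names:
    resume_vec = model.encode(resume_body.format(name=name), convert_to_tensor=True)
    score = util.cos_sim(job_vec, resume_vec).item()
    print(f"{name}: similarity = {score:.4f}")

# If the scores diverge across names on otherwise identical text, the
# embedding model is folding name-based signals into the ranking.
```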
Why? AI is everywhere in the hiring process now. An estimated 98% of Fortune 500 companies use AI in their hiring processes, so this finding should alarm business leaders. Using AI does not absolve companies of their obligations under the anti-discrimination laws enforced by the Equal Employment Opportunity Commission, and a separate study recently found that many of the most popular AI screening programs had very little built into their systems to prevent discrimination. While nearly all of them claimed their systems would reduce bias, most provided little to no evidence of how, exactly, they would do so.
What it means: While President Donald Trump has sought to end DEI practices, including in the hiring process, discrimination remains illegal in the U.S. According to the UW study, intersectionality poses the biggest challenge for job seekers: when gender and race were considered together, Black men fared worst, followed by Black women, and white male names were selected 100% of the time when put up against Black male or female names.
What happens now? The study's authors suggest that simply training the models to be less biased isn't the answer. Names, of course, do not always signal gender or racial identity. Sometimes there are clues in a person's word choice or background that can result in the model ranking them lower. For example, in 2018 it was reported that an AI recruiting tool Amazon had built discriminated against graduates of all-women's colleges, inferring that an applicant was female and thus not a match for its predominantly male developer workforce. More studies need to be done, but hiring managers should be extra vigilant when using AI tools in the hiring process and understand that they are imperfect at best and potentially discriminatory at worst.