Artificial intelligence is linked to discrimination against older women. | Unsplash photo
What you probably already know: AI is not always kind or accurate, especially toward older women. Researchers at Stanford University, UC Berkeley and the University of Oxford found that popular AI systems (think ChatGPT or Copilot), as well as content from Google, Wikipedia, IMDb and YouTube, reflect and amplify age- and gender-related stereotypes against older women. The study analyzed 1.4 million online images and videos, plus nine large language models, and found that women are systematically labeled as younger than men. When ChatGPT, for instance, created resumes for hypothetical women, it portrayed them as less experienced, in line with the younger ages it assumed for them. The distortions emerged in both visual content and the language patterns underlying AI tools.
Why it matters: “This kind of age-related gender bias has been seen in other studies of specific industries and, anecdotally, such as in reports of women who are referred to as girls,” says Berkeley Haas Assistant Professor Solène Delecourt, a co-author. “But no one has previously been able to examine this at such scale.” The bias is especially pronounced in higher-status and better-paid occupations, a depiction that doesn’t align with real-world demographics. The team used ChatGPT to generate more than 34,500 resumes across 54 occupations. When the same AI tool evaluated those resumes, it rated older men higher than older women despite no significant difference in their skills or experience.
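For readers curious how such an audit works in practice, here is a minimal, hypothetical sketch in the spirit of the study, not the researchers’ actual code: the `query_model` function, the candidate names and the occupation list are all placeholder assumptions, and a real audit would need far larger samples plus explicit age cues alongside gender cues.

```python
# Hypothetical bias-audit sketch: generate resumes for otherwise-identical
# candidates who differ only in gender cues, have the same model rate each
# resume, and compare the average ratings across groups.
# `query_model` is a stand-in for whatever LLM API a team actually uses.

from statistics import mean

def query_model(prompt: str) -> str:
    """Placeholder: send `prompt` to a large language model and return its reply."""
    raise NotImplementedError("Wire this to a real LLM client before running.")

OCCUPATIONS = ["software engineer", "financial analyst", "registered nurse"]
PERSONAS = {"man": "James Walker", "woman": "Janet Walker"}  # hypothetical names

def generate_resume(name: str, occupation: str) -> str:
    return query_model(
        f"Write a one-page resume for {name}, an experienced {occupation}."
    )

def rate_resume(resume: str) -> float:
    reply = query_model(
        "On a scale of 1 to 10, how qualified is this candidate? "
        "Answer with a single number.\n\n" + resume
    )
    return float(reply.strip().split()[0])

def audit() -> dict:
    """Average rating per persona group across all occupations."""
    scores = {group: [] for group in PERSONAS}
    for occupation in OCCUPATIONS:
        for group, name in PERSONAS.items():
            resume = generate_resume(name, occupation)
            scores[group].append(rate_resume(resume))
    return {group: mean(vals) for group, vals in scores.items()}

# A systematic gap between the group averages, with everything else held
# equal, is the kind of signal the researchers report at much larger scale.
```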
What it means: The World Economic Forum says more than 90% of employers already use some form of automation to filter or rank job applications, meaning flawed AI has enormous potential to influence workplace perceptions and hiring practices. Another Stanford University study found that women were more likely to be depicted in liberal arts professions, while men were far more often shown in science and technology. Other research shows that large language models and image-based AI systems can exhibit bias across language groups and cultural contexts, influencing perceptions of people of color and marginalized communities.
What happens next: The term “responsible AI” is common these days, but researchers warn that unchecked systemic biases could exacerbate inequalities already present in hiring and promotions. The study says developers, employers and policymakers must take a more critical and holistic approach to AI fairness, because bias is embedded deep in the statistical patterns AI systems learn from their training data. Douglas Guilbeault, an assistant professor of organizational behavior at Stanford and a study co-author, says anyone using generative AI must remain “deeply, deeply cautious,” adding: “There’s a widespread belief that the problem is basically solved. And it’s not.”

