Rozado: AIs are biased against male job applicants

Given two resumes identical except for male or female first names, large language models will pick the woman 57% of the time.

Steve Sailer
May 20, 2025
David Rozado, a professor in New Zealand, does wonderful Big Data analyses of current biases. He’s recently focused on prejudices built into artificial intelligence products.

In his latest study, he looks at 22 popular AI large language models (LLMs) to see how they evaluate job applicants with identical resumes. All of them, it turns out, were biased toward hiring the candidate who differed only in having a female first name:

Averaging across all 22 products, the AIs chose otherwise identical resumes sporting female first names 56.9% of the time.
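To make the setup concrete, here is a minimal Python sketch of how a paired-resume test like this could be scored. It is not Rozado's actual code: the `query_model` helper, the prompt wording, and the resume placeholders are all assumptions. The point is simply that the female-pick rate is the share of matched pairs in which the model selects the female-named resume, with presentation order randomized.

```python
# Minimal sketch of a paired-resume test (not Rozado's actual materials or code).
# Assumes a hypothetical query_model(prompt) helper that returns "A" or "B".
import random

def build_prompt(job, resume_a, resume_b):
    # Present the job description and two candidates, then ask for a single choice.
    return (
        f"Job description: {job}\n\n"
        f"Candidate A:\n{resume_a}\n\n"
        f"Candidate B:\n{resume_b}\n\n"
        "Which candidate is more qualified? Answer 'A' or 'B'."
    )

def female_pick_rate(pairs, query_model):
    """pairs: list of (job, male_resume, female_resume) tuples where the two
    resumes are identical except for the first name."""
    female_picks = 0
    for job, male_cv, female_cv in pairs:
        # Randomize which slot the female-named resume occupies.
        female_first = random.random() < 0.5
        a, b = (female_cv, male_cv) if female_first else (male_cv, female_cv)
        choice = query_model(build_prompt(job, a, b))
        # The model picked the female-named resume if its letter matches her slot.
        if (choice == "A") == female_first:
            female_picks += 1
    return female_picks / len(pairs)
```

On a design like this, an unbiased model would land near 50%; Rozado's reported average of 56.9% is the analogue of what `female_pick_rate` returns, averaged across professions and models.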

On average, the 22 AI products were biased toward women in all 70 professions tested, including jobs like roofer, landscaper, plumber, and mechanic that virtually no women want.

Other biases that Professor Rozado found:

AIs are slightly biased in favor of resumes with preferred pronouns, choosing resumes with pronouns 53% of the time.

AIs are highly biased toward whichever candidate of a matched pair is listed first: 63.5% of the time, the average LLM picks the first candidate in the prompt (see the order-swap sketch after this list).

The more advanced models that use more compute time and claim to reason more were just as biased in favor of women.
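That first-position bias is why order matters in this kind of test. As a hedged illustration (again, not Rozado's code), each matched pair can be shown twice, once in each order, so the rate at which a model favors whichever resume is listed first can be measured separately from the rate at which it favors the female-named resume. This reuses the hypothetical `build_prompt` and `query_model` helpers from the sketch above.

```python
# Order counterbalancing sketch: show each matched pair in both orders so
# position bias and name bias can be tallied separately.
# build_prompt and query_model are the hypothetical helpers defined earlier.
def counterbalanced_rates(pairs, query_model):
    first_slot_picks = 0   # how often the model picks whichever resume is listed first
    female_picks = 0       # how often the model picks the female-named resume
    trials = 0
    for job, male_cv, female_cv in pairs:
        for female_first in (True, False):
            a, b = (female_cv, male_cv) if female_first else (male_cv, female_cv)
            choice = query_model(build_prompt(job, a, b))
            picked_first = (choice == "A")
            first_slot_picks += picked_first
            # Female-named resume was chosen if the picked slot is her slot.
            female_picks += (picked_first == female_first)
            trials += 1
    return first_slot_picks / trials, female_picks / trials
```

Because every pair appears once in each order, a pure "pick the first one" habit washes out of the female-pick rate, which is what lets a study report both numbers (63.5% first-slot, 56.9% female) from the same runs.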

Rozado concludes:

The results presented above indicate that frontier LLMs, when asked to select the most qualified candidate based on a job description and two profession-matched resumes/CVs (one from a male candidate and one from a female candidate), exhibit behavior that diverges from standard notions of fairness. In this context, LLMs do not appear to act rationally. Instead, they generate articulate responses that may superficially seem logically sound but ultimately lack grounding in principled reasoning. Whether this behavior arises from pretraining data, post-training or other unknown factors remains uncertain, underscoring the need for further investigation. But the consistent presence of such biases across all models tested raises broader concerns: In the race to develop ever-more capable AI systems, subtle yet consequential misalignments may go unnoticed prior to LLM deployment.

How come?

Paywall here.
