
AI Was Supposed to Fix Hiring. Instead, It’s Putting Your Talent Pipeline at Risk

  • Writer: Anoushka Bold
  • Nov 19, 2025
  • 3 min read

AI was supposed to make hiring smarter, faster, and fairer. Instead, the biggest AI discrimination lawsuit in recruitment history is now moving forward and HR leaders everywhere should be paying attention.


The Mobley v. Workday case isn’t just about legal exposure. It’s a wake-up call about what happens when AI is used in hiring without guardrails and human oversight. But the real risk is that your organisation’s future talent pipeline is being quietly narrowed, shutting out the diversity that drives innovation, problem‑solving, and business performance.


  • Companies in the top quartile for gender diversity are 39% more likely to financially outperform their peers.

  • Companies in the top quartile for ethnic diversity are 39% more likely to outperform the bottom quartile.

  • Organisations with diverse leadership teams show stronger innovation, resilience, and decision‑making quality.


If AI tools screen out diverse candidates, even unintentionally, you’re not just facing potential discrimination claims. You’re losing the very diversity that fuels performance.



This blog breaks down:

  • What the Workday AI lawsuit is actually about

  • Why this has significant implications for HR and talent strategy

  • What actions HR leaders must take now to protect fairness and business performance



The Case That Is Shaking the HR Tech World

Workday’s AI screening tools, 'Candidate Skills Match' and 'Assessment Connector', were designed to help employers match candidates to roles more efficiently.


The lead plaintiff, a disabled African American job seeker over 40, says he was repeatedly rejected by companies using Workday’s AI tools. He claims the algorithm caused unintentional but systemic discrimination.


The court hasn’t just allowed the case to proceed. It has granted conditional certification for a collective action. That means:

  • Anyone aged 40+ rejected by Workday’s AI tools since September 2020 can join.

  • 1.1 billion applications were rejected by Workday’s tools in that period.

  • That’s potentially hundreds of millions of applicants.


Whether the allegations are proven or not, the message is clear:

Courts are treating AI discrimination seriously, and both HR tech vendors and employers have skin in the game.

The Real Issue: AI Isn’t Removing Bias, It’s Scaling It

HR teams hoped AI would reduce bias. But when algorithms learn from biased historical data, they don’t erase bias, they automate it.


HR and talent acquisition leaders are feeling the AI impact:

  • An explosion in applicant numbers

  • Pressure to use AI automation or drown in applications

  • CVs polished by ChatGPT that look identical

  • Interview answers that sound scripted or AI-generated

AI was meant to help. However, it’s creating new problems at scale.


Three Critical Lessons for HR Leaders


1. You’re Still Responsible. Even If the AI Made the Decision

Many organisations assume that if a vendor’s algorithm does the screening, the liability sits with the vendor. It doesn’t.

Employers remain accountable for discriminatory outcomes, even when using third‑party tools. AI may do the scoring, but HR owns the result.


2. Bias Audits Are No Longer Optional

Whether required by law or not, regular AI bias audits are becoming a must-have.

If your tools are disproportionately screening out protected groups, you're compromising your talent pipeline, building legal risk, and negatively impacting business performance.
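One common starting point for a bias audit is the EEOC’s “four-fifths rule”: a group’s selection rate should be at least 80% of the highest group’s rate, or the screen may show adverse impact. Here’s a minimal sketch of that check in Python; the group names and pass counts are illustrative, not real hiring data.

```python
# Minimal disparate-impact check using the four-fifths rule.
# All numbers below are made up for illustration.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who passed the screen."""
    return selected / applicants

def four_fifths_check(rates: dict) -> dict:
    """Flag groups whose selection rate is below 80% of the highest rate.

    Returns a dict mapping each group to True (passes the rule)
    or False (potential adverse impact worth investigating).
    """
    highest = max(rates.values())
    return {group: rate / highest >= 0.8 for group, rate in rates.items()}

# Illustrative screening outcomes by age band
rates = {
    "under_40": selection_rate(300, 1000),  # 30% pass rate
    "40_plus": selection_rate(180, 1000),   # 18% pass rate
}

print(four_fifths_check(rates))
# 0.18 / 0.30 = 0.6, which is below 0.8, so the 40+ group is flagged
```

A failed four-fifths check doesn’t prove discrimination on its own, but it is exactly the kind of signal that should trigger a deeper audit before the tool keeps screening at scale.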


3. AI Should Be a Co‑Pilot, Not the Pilot

AI can support your recruitment process. But it should never replace human judgement. A human reviewer can:

  • Reassess candidates flagged as “low match” due to unconventional CVs

  • Identify false negatives from algorithmic assumptions

  • Add critical context that AI can’t interpret

AI accelerates decisions. Humans make them fair.


Five Questions Every HR Leader Should Be Asking Their Vendors

  1. How do you audit your AI for bias?

  2. What data trains your models?

  3. Can you explain how scores are generated?

  4. How often are models tested, updated, or retrained?

  5. What human oversight is built into the system?

If a vendor can’t answer these clearly, that’s a red flag.


What HR Teams Should Do Now

Here’s your immediate action list:

  • Review your hiring diversity outcomes

  • Run internal or third‑party bias audits on your existing tools

  • Train your TA and HR teams on AI bias and disparate impact

  • Add governance checkpoints before adopting new tech

  • Notify candidates when AI is used in the process and give candidates the ability to opt out of AI processing

  • Refresh your hiring policies to document AI usage and oversight

Proactive is always better than reactive.


The Bottom Line

The Workday case isn’t about one company. It’s about the future of AI in hiring. AI is here to stay, but so is your responsibility to ensure candidate fairness and protect business performance.


HR leaders who get ahead of AI governance now will:

  • Avoid legal risk

  • Maximise diverse talent

  • Build fairer processes

AI doesn’t replace good HR. It depends on it.


Need more context?




Bold Consulting is a boutique advisory supporting CEOs and HR leaders across strategy, organisation design, and leadership performance. We help organisations create clarity, accelerate transformation, and unlock meaningful, measurable impact.


