AI Matching in Recruitment: Why Generic AI Isn’t Enough

AI is transforming hiring. But not all AI is created equal. The Workday lawsuit shows what can go wrong when generic AI is used to screen candidates. This blog explores the difference between traditional hiring, generic large language models (LLMs), and specialized AI for talent matching – and what HR should do next.


In the age of AI, hiring is no longer about simply reading résumés or posting jobs. From chatbots to automated matching systems, AI is now deeply embedded in how companies find and select talent. But as we learned from the recent Workday lawsuit, the way AI is trained and used matters – a lot.

In this blog, we’ll explore three levels of matching in recruitment:

  1. Human-based and keyword-driven matching
  2. Generic AI-based matching with LLMs like ChatGPT
  3. Specialized matching with trained AI models built for the labor market

And we’ll explain why only one of these is fit for responsible, fair, and future-ready hiring.


1. Traditional Matching: Biased but Explainable

Before AI, recruiters relied on manual CV screening and gut feeling. Early digital tools helped filter candidates using simple rules: for example, “show me all who mention ‘Excel’.”
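For illustration, here is a minimal sketch of that rule-based approach – the candidate data and the ‘Excel’ rule are hypothetical:

    # Rule-based CV filtering: a hard keyword match, nothing more.
    candidates = [
        {"name": "A", "cv_text": "Advanced Excel and reporting experience"},
        {"name": "B", "cv_text": "Built spreadsheet models in Google Sheets"},
    ]

    # "Show me all who mention 'Excel'": candidate B is silently dropped,
    # even though spreadsheet modelling may be the same underlying skill.
    matches = [c for c in candidates if "excel" in c["cv_text"].lower()]
    print([c["name"] for c in matches])  # ['A']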

Pros

  • Transparent and intuitive
  • Recruiters can consider personality and context

Cons

  • Limited scalability
  • High risk of unconscious bias
  • Easily overlooks non-standard candidates

2. LLM-Based Matching: Impressive but Misleading

With the rise of generative AI like ChatGPT or Claude, many platforms now offer instant job matching or résumé rewriting. Sounds smart, right?

But here’s the catch: LLMs are not matching engines. They’re trained to predict words – not to evaluate skills, competencies, or job-market relevance. Ask one to match someone to a role and you’ll likely get something plausible-sounding but statistically shallow. Ask an LLM for a candidate’s skills and you will get skills; ask again and you will get a different list. How do you calculate a match score, or explain a skills gap, on output that changes from run to run? The sketch below makes the problem concrete.
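A minimal sketch of that instability, assuming the OpenAI Python client (openai>=1.0); the model name, prompt, and CV snippet are placeholders, not a recommendation:

    # Ask the same model for the same CV's skills twice. Sampling is
    # stochastic, so the two lists routinely differ in content and naming.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    prompt = "List the skills in this CV as a comma-separated line: ..."

    for run in (1, 2):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,      # typical default: sampling, not determinism
        )
        print(run, resp.choices[0].message.content)

Unstable extraction means every downstream calculation – match scores, gap analyses – is built on sand.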

LLMs don’t understand the labor market. They don’t know that “logistics coordinator” and “supply chain analyst” share 70% of required skills. They haven’t learned which skills typically lead to success in a specific job. And crucially: they have no ground truth.
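To make that 70% figure concrete: skill overlap is computed over structured skill sets, a representation LLMs simply don’t carry. A toy sketch, with skill lists invented for illustration (real systems draw on curated taxonomies such as ESCO or O*NET):

    # Toy overlap between two roles. Skill sets are invented; production
    # systems derive them from labor-market taxonomies and vacancy data.
    coordinator = {"planning", "inventory", "erp", "scheduling", "sap",
                   "excel", "reporting", "negotiation", "communication",
                   "transport law"}
    analyst = {"planning", "inventory", "erp", "forecasting", "sap",
               "excel", "reporting", "sql", "communication",
               "data analysis"}

    shared = coordinator & analyst
    carryover = len(shared) / len(coordinator)
    print(f"{carryover:.0%} of coordinator skills carry over")  # 70%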

That’s not just a technical problem – it’s a compliance issue.


3. Specialized AI Matching: Purpose-Built, Explainable, and Compliant

The third category is where real progress happens. These are AI systems trained on millions of job descriptions and CVs, designed specifically for the labor market. They don’t generate text; they detect skills, predict missing competencies, and show how someone matches a job – or how they can close the gap.

These models do more than keyword matching. As the sketch after this list illustrates, they can:

  • Interpret context (e.g., “led workshops” implies training skills)
  • Predict hidden skills based on job history
  • Identify missing but trainable skills
  • Suggest lateral moves or career steps
  • Operate in line with GDPR and the upcoming EU AI Act, for example by explaining matching scores and mitigating bias
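A minimal, hypothetical sketch of what skill-based matching with gap reporting can look like under the hood – skills and weights are invented here, where a trained model would infer them from labor-market data:

    # Hypothetical explainable matching: score a candidate against a job's
    # weighted skill requirements and report the trainable gap. Skills and
    # weights are invented; specialized models learn them from data.
    job_requirements = {"sql": 0.30, "forecasting": 0.25, "erp": 0.20,
                        "excel": 0.15, "communication": 0.10}
    candidate_skills = {"sql", "excel", "communication", "planning"}

    score = sum(weight for skill, weight in job_requirements.items()
                if skill in candidate_skills)
    gap = {skill: weight for skill, weight in job_requirements.items()
           if skill not in candidate_skills}

    print(f"match score: {score:.0%}")      # 55%, and we can say why
    print(f"trainable gap: {sorted(gap)}")  # ['erp', 'forecasting']

The arithmetic is trivial; the point is that every score decomposes into named skills, which is exactly the kind of explanation regulators expect.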

That’s the kind of explainable AI recruiters – and regulators – are calling for. (I would love to tell you more about these systems or give a demo. Just leave me a DM.)


The Workday Lawsuit: A Wake-Up Call

In the United States, Workday is now facing a major collective action lawsuit. The claim? That their AI-based screening software discriminated against applicants over 40 and possibly other groups, leading to automatic rejections without explanation.

A federal judge allowed the case to proceed under the Age Discrimination in Employment Act. The scope? Potentially millions of candidates who applied through Workday since 2020.

Even if Workday didn’t intend to discriminate, the structure of its algorithm may have amplified existing bias in hiring data – and failed to detect it.

This is exactly what the EU AI Act wants to prevent: black-box systems that can’t explain their decisions, that reproduce bias, and that affect people’s lives at scale.


What Should HR Do?

HR professionals are under pressure to modernize – but also to protect. The solution is not to stop using AI. It’s to use the right kind of AI.

Here’s how to do it responsibly:

  • Use specialized AI built for job-market data, not general-purpose LLMs
  • Audit your matching process for bias and explainability (a simple starting point is sketched after this list)
  • Combine AI suggestions with human oversight
  • Make sure candidates understand how and why decisions are made
  • Prepare for EU AI Act compliance now – don’t wait for 2026
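For the audit step, one widely used starting point is the four-fifths (80%) rule from US employment-selection guidelines: compare selection rates between groups and flag large disparities. A minimal sketch with made-up counts – real audits are broader, but this is the shape of the check:

    # Four-fifths rule sketch: compare advancement rates across age groups.
    # Counts are made up for illustration.
    outcomes = {
        "under_40": {"applied": 1000, "advanced": 250},
        "over_40":  {"applied": 800,  "advanced": 120},
    }

    rates = {g: v["advanced"] / v["applied"] for g, v in outcomes.items()}
    impact_ratio = min(rates.values()) / max(rates.values())

    print(rates)  # {'under_40': 0.25, 'over_40': 0.15}
    print(f"impact ratio: {impact_ratio:.2f}")  # 0.60
    if impact_ratio < 0.8:
        print("Disparity exceeds the four-fifths threshold: investigate.")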

In Summary

AI can match people to jobs better than humans ever could. But only if it’s trained on the right data, designed for the task, and transparent enough to be trusted.

Generic AI can assist with writing. Specialized AI can predict success in a role.

If you want to match talent to opportunity – fairly, fast, and at scale – don’t just ask an LLM.

Ask an expert.