The promise and the problem

The pitch for AI in recruiting was straightforward: companies are drowning in applications, human reviewers are inconsistent, and automation can bring speed and objectivity to a process that has historically been prone to both bias and inefficiency. That case is not wrong. The volume problem in modern recruiting is real. A single job posting at a major employer can generate thousands of applications. Something has to filter that pile.

But the way most organizations have implemented AI in their hiring processes has produced a system optimized for a goal that was never stated aloud: reduce the pile as fast as possible, using whatever criteria are easiest to measure. That's not the same as finding the best candidates. And the gap between those two objectives is where top talent disappears.

"The most sophisticated applicant tracking systems in the world are solving a problem that isn't actually the one companies have. They're solving for volume reduction. The actual problem is quality selection. These are not the same thing, and optimizing hard for the first one actively undermines the second."


How ATS systems actually work — and what they're really doing

Applicant Tracking Systems were originally designed as databases — tools for organizing applications, tracking candidate status, and maintaining records. They were not designed as screening intelligence. But over time, layers of automated filtering got bolted on: keyword matching, minimum threshold requirements, scoring algorithms, and increasingly, machine learning models trained on historical hiring data.

The result is a system that, in most organizations, nobody fully understands. The HR team manages the platform. The IT team may have configured the initial filters years ago. The hiring managers write job descriptions that feed the keyword requirements. And the AI model — where one exists — was trained on a dataset of past hires whose composition reflects whoever the company hired in the past, not whoever the best future candidates might be.

The keyword trap

Most ATS filtering still relies heavily on keyword matching — does the résumé contain the specific words and phrases present in the job description? This sounds reasonable until you examine what it actually selects for. A candidate who describes their experience in plain, accurate language and a candidate who has carefully reverse-engineered the job posting to mirror its exact phrasing will score very differently, despite potentially having identical qualifications. The system rewards gaming. It penalizes authenticity.

More damaging: a candidate who does the same work under a different job title at a company that uses different terminology — common across industries, company sizes, and geographies — may score near zero against a posting that uses a specific industry's vocabulary. The ATS has no mechanism for recognizing that "fleet operations supervisor" and "transportation assets manager" may describe the same role.
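
A minimal sketch of that failure mode is below; the scoring logic is a stand-in for naive keyword filtering rather than any specific vendor's implementation, and the phrases are illustrative.

```python
# Naive keyword scoring: count how many phrases from the job posting
# appear verbatim in the résumé. A stand-in for ATS keyword filters,
# not any specific product's logic.
posting_keywords = [
    "fleet operations supervisor",
    "route optimization",
    "driver scheduling",
    "dot compliance",
]

def keyword_score(resume_text: str) -> int:
    """Return the number of posting phrases found verbatim in the résumé."""
    text = resume_text.lower()
    return sum(1 for phrase in posting_keywords if phrase in text)

# Two candidates with equivalent experience, described in different vocabulary.
mirrored = ("Fleet operations supervisor responsible for route optimization, "
            "driver scheduling, and DOT compliance across a 200-vehicle fleet.")
plain = ("Transportation assets manager; planned delivery routes, built driver "
         "rosters, and kept a 200-vehicle operation audit-ready.")

print(keyword_score(mirrored))  # 4 -> passes the filter
print(keyword_score(plain))     # 0 -> screened out before a human ever reads it
```

Nothing in the second résumé is weaker; it describes the same work in a different industry's vocabulary, and exact-match scoring cannot see past that.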

The historical bias loop

When AI screening models are trained on historical hiring data — who was hired, who performed well, who was promoted — they learn to replicate the patterns in that data. If a company has historically hired predominantly from certain universities, certain companies, or certain demographic backgrounds, the model will weight those signals positively, not because they're predictive of job performance, but because they're correlated with past hiring decisions. The model doesn't know the difference. It finds patterns and amplifies them.

This is not a theoretical concern. Amazon famously scrapped an AI recruiting tool in 2018 after discovering it had learned to penalize résumés that included the word "women's" — as in "women's chess club" or "women's university" — because its training data reflected a decade of male-dominated hiring in technical roles. The model did exactly what it was designed to do. The design was the problem.
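
As a toy illustration of that loop, consider synthetic data fed to a generic classifier, not any vendor's actual model: a feature with no causal link to performance, but a strong correlation with past hiring decisions, ends up with a large positive learned weight.

```python
# Toy illustration of the historical bias loop (synthetic data, generic model):
# a classifier trained on past hiring decisions learns to reward whatever
# correlated with those decisions, causal or not.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# A genuine skill signal (e.g. a scored work sample).
skill = rng.normal(size=n)
# A proxy with no causal effect on performance, e.g. "attended one of the
# schools we historically hired from".
target_school = rng.integers(0, 2, size=n)

# Past hiring decisions were driven partly by skill and partly by the proxy,
# because past reviewers favored those schools.
hired = (0.8 * skill + 1.5 * target_school + rng.normal(size=n)) > 1.0

X = np.column_stack([skill, target_school])
model = LogisticRegression().fit(X, hired)

print(f"learned weight on skill:         {model.coef_[0][0]:.2f}")
print(f"learned weight on target_school: {model.coef_[0][1]:.2f}")
# The model assigns a large positive weight to target_school. It has learned
# the historical preference, not anything about future job performance.
```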


The job description is usually where things go wrong first

Before a single application is submitted, most companies have already significantly narrowed their candidate pool through how they write job descriptions. And increasingly, those job descriptions are written with AI assistance — which, used without care, tends to produce bloated, keyword-heavy postings that specify far more requirements than the role actually demands.

Credential inflation

A widely cited Harvard Business School study found that degree requirements had been added to millions of job postings for roles that had historically been filled without them, despite no evidence that the degree improved performance. This "degree inflation" is partly cultural inertia, partly legal risk aversion, and partly the result of ATS systems that make it trivially easy to set a degree requirement as a hard filter. The practical effect is to eliminate a substantial population of qualified workers — disproportionately from lower-income and minority backgrounds — before they ever have a chance to demonstrate their capabilities.

Experience window mismatches

Job postings routinely require specific years of experience in technologies or methodologies that haven't existed long enough for anyone to have that experience — a phenomenon that became meme-worthy when a 2014 posting required five years of experience in a framework released in 2012. But the subtler version of this problem is more pervasive: experience windows that are set not based on what the role requires, but based on what the last person in the role happened to have. The requirement becomes descriptive of the past rather than predictive of the future.

The real cost

Who gets filtered out by over-specified job descriptions:

- Career changers with directly transferable skills but different job titles.
- Candidates from smaller companies where one person wears multiple hats and no single title captures their full scope.
- High performers who moved quickly through roles and have fewer total years than slower-moving peers with equivalent experience.
- Candidates who took time away from work for caregiving, health, or other reasons.
- Candidates from industries that use different terminology for identical work.

These are often exactly the profiles that outperform in interviews, if they ever get there.


Top candidates leave. Everyone else stays.

There is an asymmetry in how different candidates respond to a poorly designed pre-screening experience, and most recruiting teams have never fully reckoned with it.

Candidates with strong profiles and multiple options, the candidates companies most want to hire, will abandon a process that feels dehumanizing, time-consuming, or arbitrary. They have alternatives. They know it. A 45-minute chatbot pre-screen built on rigid yes/no questions that leave no room for nuance, or an automated rejection that arrives within minutes of applying (signaling that no human ever looked at their application), is a meaningful signal to this population that the company's culture may not be worth pursuing.

Candidates with fewer options are more likely to persist through a bad process because they feel they have to. The result is a self-selection mechanism that systematically filters toward less competitive candidates at the top of the funnel — the exact opposite of the outcome that recruiting AI is supposed to produce.

"Every time a highly qualified candidate abandons your pre-screening process, you don't see it. You have no idea it happened. Your ATS dashboard shows a smaller pile and that looks like efficiency. What it actually is, in many cases, is the best applications leaving before you could read them."


The legal dimension is getting real

For years, concerns about AI bias in hiring were largely ethical and reputational. That is changing. Regulators are beginning to catch up to the technology, and the legal risk of unchecked AI screening is becoming concrete.

New York City's Local Law 144, which took effect in 2023, requires employers using "automated employment decision tools" to conduct annual bias audits and disclose their use to candidates. Illinois, Maryland, and California have passed or are advancing similar legislation covering AI-driven hiring practices, particularly around video interview analysis. The Equal Employment Opportunity Commission has issued guidance clarifying that employers remain liable for discriminatory outcomes produced by AI tools; a "the vendor did it" defense does not transfer legal responsibility.

Most organizations currently using AI in their hiring processes have not conducted a formal audit of the demographic outcomes their screening produces. Many couldn't do so without significant effort to extract and analyze data that lives in systems not designed for that kind of reporting. That gap — between what the law is beginning to require and what most companies can currently produce — represents a significant and growing compliance exposure.
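
The core arithmetic of such an audit is not exotic. Below is a minimal sketch of the selection-rate and impact-ratio calculation that Local Law 144 style audits center on, using hypothetical counts; a real audit follows the law's category definitions and reporting requirements.

```python
# Selection rates and impact ratios per group, the core of an automated
# employment decision tool bias audit. The counts below are hypothetical.
from collections import namedtuple

Group = namedtuple("Group", ["name", "applicants", "advanced"])

groups = [
    Group("Group A", applicants=1200, advanced=180),
    Group("Group B", applicants=900, advanced=90),
    Group("Group C", applicants=400, advanced=28),
]

rates = {g.name: g.advanced / g.applicants for g in groups}
highest_rate = max(rates.values())

for name, rate in rates.items():
    impact_ratio = rate / highest_rate
    # The four-fifths rule is a common screening heuristic, not a legal bright line.
    flag = "  <- review" if impact_ratio < 0.8 else ""
    print(f"{name}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f}{flag}")
```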


The video interview problem deserves its own conversation

AI-powered video interview analysis — tools that assess candidates based on facial expressions, vocal patterns, word choice, and eye contact — represents perhaps the most ethically and legally fraught application of AI in recruiting. Several major vendors have marketed these tools aggressively, claiming their models can predict job performance from a short video clip.

The evidence base for these claims is thin. Independent researchers have found that the features these systems measure — facial symmetry, vocal tone, speaking pace — have no validated relationship to job performance, but do correlate with factors including race, disability status, and neurological difference. Candidates with certain accents, those who are neurodivergent, those who are Deaf and use sign language, and those whose cultural backgrounds involve different norms around eye contact and facial expression are systematically disadvantaged by these tools in ways that are both discriminatory and invisible to the employers using them.

Several major employers have quietly discontinued AI video analysis tools following internal reviews. Illinois has required notice and consent before AI analysis of video interviews since 2020. The technology has not improved to the point where its use can be reliably defended, and the candidates it screens out are rarely the ones companies would choose to eliminate if they understood what was actually happening.


What skills-based hiring actually means in practice

The response to credential inflation and keyword filtering that has gained the most traction among progressive talent organizations is skills-based hiring — the idea that hiring decisions should be grounded in what a candidate can demonstrably do, not in what credentials they hold or what keywords appear on their résumé.

This sounds obvious. In practice, it requires a level of rigor in job definition that most organizations have never applied. Skills-based hiring starts with a clear, specific answer to a question that most job postings never actually address: what will success look like in this role in the first six months, and what capabilities are necessary to achieve it? That question, answered honestly, often produces a much shorter and more specific list of actual requirements than the 15-bullet job description that currently goes to market.

The practical implementation varies: structured work samples, competency-based interview frameworks, portfolio review for applicable roles, or brief paid assessments. What these approaches share is that they measure candidates against a defined standard of what the work actually requires, rather than filtering by proxies that may or may not be related to performance.
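
As one minimal sketch of what "a defined standard" can look like, with hypothetical competencies, weights, and scale: every candidate is rated against the same rubric, and the score reflects demonstrated work rather than résumé keywords.

```python
# Illustrative competency rubric: weighted ratings against a defined standard.
# The competencies, weights, and 1-5 scale are hypothetical, not a specific framework.
competencies = {
    "builds accurate demand forecasts": 0.4,
    "explains tradeoffs to non-specialists": 0.3,
    "works effectively with incomplete data": 0.3,
}

def score_candidate(ratings: dict[str, int]) -> float:
    """Weighted average of 1-5 interviewer ratings, one per competency."""
    return sum(weight * ratings[name] for name, weight in competencies.items())

# Ratings from a structured work-sample review of one candidate.
print(score_candidate({
    "builds accurate demand forecasts": 4,
    "explains tradeoffs to non-specialists": 5,
    "works effectively with incomplete data": 3,
}))  # 4.0
```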


The path forward isn't less AI. It's better-deployed AI.

The answer to badly deployed AI in recruiting is not to abandon AI. The volume problem that prompted its adoption is real and not going away. The answer is to be honest about what the current systems are actually optimizing for, audit whether that objective is producing the outcomes you want, and redesign the places where the gap is largest.

That means treating ATS filter logic, job description standards, pre-screening design, and screening outcome data as things that require active management — not configurations that get set once and forgotten. It means recognizing that the efficiency metrics most recruiting teams report on (time-to-fill, cost-per-hire, funnel conversion) don't capture what matters: whether the person hired was actually the best available candidate for the role.

It also means acknowledging a harder truth: in many organizations, the people closest to the ATS — the platform administrators, the junior HR generalists managing day-to-day operations — do not have the standing or the visibility to challenge filter logic that a VP of HR set up three years ago. Fixing AI recruiting is partly a technology problem. It's mostly a governance problem.

The organizations that get this right will have a meaningful and durable advantage in the talent market. Access to the best candidates — including the ones everyone else's system is filtering out — is a compounding competitive advantage. The gap between companies that take that seriously and companies that don't is likely to widen as the labor market tightens and the marginal quality of hires becomes more consequential.

The résumé that never got read might have been exactly the one you needed.