On August 2, 2026, the EU AI Act starts treating any system that screens, scores, or filters job candidates as high-risk. Annex III, point 4. The text is short, the obligations are not: human oversight before a rejection, audit logs, bias testing, transparency to the candidate. Get it wrong and the fine can reach €15M or 3% of global annual turnover, whichever is higher.
That number is what travels in headlines. It is not the part I keep thinking about.
What I keep thinking about is the shape of the workflow on both sides of that hiring funnel, and how strangely the regulation sits on top of it.
Two sides of the same conversation
Candidates apply with AI-generated resumes, tailored per job, sent in bulk. Companies receive them and run them through an AI-powered ATS that ranks, filters, and in many cases auto-rejects. Somewhere in the middle, two language models are doing most of the actual reading. The humans on either end mostly review summaries.
It is not obvious to me what we are even calling “hiring” at this point. The original loop, where one person wrote a letter and another person read it, has been replaced by something that looks more like two automated trading desks exchanging quotes, with a compliance officer expected to inspect every closed trade.
That is the workflow the Act is trying to regulate. And it only regulates one half of it.
The asymmetry
Employers must put a human in the loop before any rejection. They must keep audit logs. They must be able to explain, in plain language, why a candidate was filtered out.
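To make those obligations concrete, here is a minimal sketch of what a per-rejection audit record might look like. The field names are my own assumptions about what a regulator would want to reconstruct after a complaint; the Act does not prescribe a schema, and a real implementation would be driven by legal counsel, not a blog post.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical per-rejection audit record. Every field name here is an
# assumption for illustration, not a schema the Act mandates.
@dataclass
class RejectionAuditRecord:
    candidate_id: str
    job_id: str
    model_version: str      # which ranking model scored this candidate
    model_score: float
    rejection_reason: str   # the plain-language explanation owed to the candidate
    reviewed_by: str        # the human in the loop who confirmed the rejection
    reviewed_at: str        # ISO 8601 timestamp, UTC

record = RejectionAuditRecord(
    candidate_id="c-1042",
    job_id="eng-backend-7",
    model_version="ranker-2026.07",
    model_score=0.41,
    rejection_reason="Required Kubernetes experience not evidenced",
    reviewed_by="recruiter-17",
    reviewed_at=datetime.now(timezone.utc).isoformat(),
)
print(asdict(record))
```

The point of the sketch is that every rejection carries three distinct burdens: a model attribution, a human attribution, and an explanation a candidate could actually read. Each one costs time, which is what the capacity problem below is about.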
Applicants face no such restriction. Nothing in the Act limits how many applications a single person, armed with one AI agent, can submit. Nothing requires those applications to be authored by a human, or labeled as machine-generated, or rate-limited.
The asymmetry is not a loophole. It is a deliberate choice. The Act regulates AI systems that affect people’s lives, and an applicant’s tool affects only the applicant’s own life. The employer’s tool decides someone else’s. That distinction is principled. I understand why the line was drawn there.
The consequence of drawing the line there is what I find interesting.
The capacity problem
Human review capacity, on the employer side, is fixed. You can hire more recruiters, but not at the speed AI tooling improves on the candidate side. So what happens, in practice, when ten thousand AI-generated applications hit a mid-sized company’s ATS in an afternoon?
There are only a few options. You staff up the recruiting team, which is expensive and slow. You raise the bar of what gets through to a human, which may itself be a discriminatory filter. Or you quietly relax the “human reviews every rejection” requirement, because the alternative is hiring nothing for six months while the queue clears.
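The "six months while the queue clears" line is easy to check with back-of-envelope arithmetic. All the numbers below are illustrative assumptions, not figures from the Act or from any real ATS; the only real claim is the shape of the math: when arrivals exceed review capacity, the backlog never clears at all.

```python
# Toy queue model. Inputs are assumed for illustration only.
def days_to_clear(backlog, arrivals_per_day, reviewers, reviews_per_reviewer_per_day):
    """Days until the human-review queue empties, or None if it grows forever."""
    capacity = reviewers * reviews_per_reviewer_per_day
    net_drain = capacity - arrivals_per_day
    if net_drain <= 0:
        return None  # the queue grows without bound
    return backlog / net_drain

# One burst of 10,000 AI-generated applications, then a steady 500/day,
# against 10 recruiters each clearing 80 reviewed rejections a day.
print(days_to_clear(10_000, 500, 10, 80))   # roughly a month of pure catch-up

# Same burst, but steady arrivals at 900/day against the same team of 800/day:
print(days_to_clear(10_000, 900, 10, 80))   # None: capacity never catches up
```

The uncomfortable property is the discontinuity: the queue either clears in finite time or never does, and the candidate side can move arrivals past that threshold at near-zero marginal cost.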
The third option is the one that worries me. Not because companies are reckless, but because compliance under volume pressure tends to drift. Then a candidate files a complaint. The regulator pulls the audit logs. The logs show what they show.
If you squint, the failure mode looks less like negligence and more like a denial-of-service attack. Not malicious, just emergent: a regulation that requires per-rejection human attention, applied to a system where the attacker side has no such cost. Weaponized paperwork, by accident.
I am not predicting this happens. I am saying the incentive is there, and I do not yet see what absorbs it.
What “affecting human life” actually means
The Act calls a system high-risk when it has “significant impact on people’s lives.” For an ATS rejection, sure. The framing is intuitive.
But the framing is also doing a lot of work. A red light affected my life the day it stopped me next to someone I later married. A missed flight pushed me into a different career. A rejection from company A is the reason I ended up at company B, where I met the person I built things with for the next decade. Almost any consequential system affects lives in this loose sense. The interesting question is which ones we want the law to follow into.
The drafters know this. The definition is intentionally broad to be future-proof. The scope is then narrowed by the explicit lists in Annex III. So in practice, what counts as “affecting life” is whatever made it onto the list, plus whatever the courts will eventually add by analogy. That is fine as legal craft. It also means the philosophical phrase at the top of the Act is not really doing the work it appears to do. The list is.
I do not think this is a flaw. I think it is just how regulation that wants to last more than five years has to be written. But it is worth naming, because the slogan and the mechanism are not the same thing.
What I am actually watching for
Nobody is going to ban applicants from using AI. That ship sailed before the regulation was even printed. So one side of the conversation gets compliance officers and audit logs, and the other side gets a $20 ChatGPT subscription. The shape of hiring on the other end of this is not obvious to me.
A few questions I do not have answers to:
- Will employers respond by tightening before-the-funnel filters that the Act does not regulate, like requiring a referral or a portfolio, just to keep volume manageable?
- Will an “AI-assisted application” disclosure norm emerge on the candidate side, even without a legal requirement, the way “sent from my phone” once did?
- Will the first big enforcement case be about a real bias harm, or about a paperwork failure under volume? The two failure modes have very different policy implications.
- And the deeper one: when regulation only fits one side of an automated interaction, does it slowly push that interaction off the regulated surface entirely, into informal channels?
I do not have a clever closing line for this. Honestly, I would just like to see how these laws work once they are in contact with the actual workflow they describe. Some of what I have written above might look obvious in two years and some might look completely wrong, and either outcome would teach me something.
If you work on the employer side and have started preparing for August, I would be interested to hear what your team is actually changing, and what you have decided to leave alone.