If your business uses AI to screen, rank, or match candidates, the EU now regulates those tools as high-risk systems. Here is what changed, what it means for your operating model, and what you should be doing about it.
Summary:
- The EU AI Act covers AI systems used in employment decisions, including recruitment, selection, targeted job advertising, candidate evaluation, performance monitoring, and certain decisions about compliance, contract terms or termination.
- Both providers and deployers of such AI systems are subject to obligations under the Act.
- Obligations include mandatory risk assessments, technical documentation, bias testing, human oversight, transparency disclosures, and continuous monitoring.
- The AI Act may apply to deployers even if they are not based in the European Union.
- Staffing businesses must comply with the Act's high-risk obligations by 2 August 2026.
- The Act provides for national authorities’ fining powers, as well as other enforcement powers, such as the power to withdraw or recall AI systems from the market.
- Certain, but not all, AI systems deployed in the employment context may be exempt from obligations under the Act.
GDPR required you to rethink how you handle personal data. The EU AI Act requires you to rethink how you use the tools that process it.
Under the EU AI Act (Regulation 2024/1689), AI systems used in employment decisions fall into the high-risk category. That covers recruitment, selection, targeted job advertising, candidate evaluation, performance monitoring, and certain decisions about compliance, contract terms or termination. Starting 2 August 2026, each of those tools will need mandatory risk assessments, technical documentation, bias testing, human oversight, transparency disclosures, and continuous monitoring.
For staffing businesses, Employers of Record (EORs), and workforce platforms, which sit between employers, technology, and workers, the obligations are more demanding than they are for a single corporate HR department. And it does not matter whether you built the technology. If your business deploys it, compliance is your responsibility, even if the platform vendor says otherwise.
Why the staffing supply chain makes this harder
Most commentary on the AI Act’s employment provisions is written for in-house HR teams at single employers. That misses the reality for staffing businesses.
Think about the typical staffing supply chain. A Vendor Management System (VMS) platform uses algorithmic matching to surface candidates. A Recruitment Process Outsourcing (RPO) provider runs AI-powered screening across thousands of applicants. A staffing agency deploys chatbot pre-qualification. An EOR uses AI to manage onboarding, compliance, termination and performance across multiple jurisdictions. At every stage, AI systems are making or influencing decisions about people’s livelihoods, and the Act’s obligations for deployers do not distinguish between entities in the chain based on who owns the technology.
Under Article 3 of the Act, a “deployer” is any natural or legal person using an AI system under its authority. If your staffing firm selects, configures, or relies on an AI tool to inform workforce decisions, you are a deployer, even if you did not build the technology and even if the platform vendor tells you compliance is their responsibility. The Act assigns obligations to both providers (the vendors who build the systems) and deployers (the businesses that use them). You cannot pass your compliance obligations to a technology partner any more than you can under the GDPR.
Extraterritorial reach
The Act has extraterritorial reach. If the output of your AI system is used in the EU or it affects persons located in the Union (a candidate screened for a role in Berlin, a contractor evaluated in Dublin, a temp worker matched to an assignment in Amsterdam), the regulation applies regardless of where your company is headquartered or where the technology is hosted.
Human oversight is not optional
Every high-risk AI system must be used in a way that allows effective human oversight. No AI tool should make final placement, rejection, or evaluation decisions without a qualified human in the loop. Your recruiters and account managers need to understand how the system works, what its limitations are, and when to override its outputs. Article 14 requires high-risk systems to be designed so that the people exercising oversight can detect anomalies and intervene, including where outputs show discriminatory patterns. A policy document alone does not satisfy this.
Candidates and workers must be told
Before deploying a high-risk AI system, Article 26(7) requires you to inform workers’ representatives and affected workers. In staffing, this extends to candidates and contingent workers. They have a right to know that AI is being used, how it functions, and what role it plays in decisions that affect them. Under Article 86, individuals subject to decisions made by high-risk AI systems can request an explanation of the main factors behind those decisions. For high-volume recruitment, your disclosure process needs to be operational and visible, not buried in a terms-of-service document.
Data quality and bias monitoring need real attention
If you exercise control over input data fed into high-risk AI systems, you must ensure that data is relevant and representative. Staffing agencies, where candidate pools are often skewed by geography, language, or existing network effects, face a particularly substantive version of this obligation. You need to know what data your AI tools are trained on, how they handle protected characteristics, and whether they produce equitable outcomes across demographics.
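The Act does not prescribe a testing methodology, but the basic outcome check is straightforward to operationalise. The sketch below is illustrative only: the group labels are placeholders, and the 0.8 disparity threshold is borrowed from the US "four-fifths rule" as a common heuristic, not from the AI Act itself.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, was_shortlisted) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, shortlisted in outcomes:
        totals[group] += 1
        if shortlisted:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparity_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold`
    times the best-performing group's rate."""
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

# Hypothetical screening outcomes for two candidate groups
outcomes = [("A", True)] * 40 + [("A", False)] * 60 \
         + [("B", True)] * 20 + [("B", False)] * 80
rates = selection_rates(outcomes)   # {"A": 0.4, "B": 0.2}
flags = disparity_flags(rates)      # group B falls below the threshold
```

A check like this is a starting point, not a bias audit: it says nothing about why the rates diverge, and a real programme would track it continuously across roles, regions, and time.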
Logs and documentation are an infrastructure requirement
Deployers must keep logs generated by high-risk AI systems for at least six months. Combined with the requirement to monitor system performance on an ongoing basis, this creates an operational infrastructure need that many staffing businesses have not yet scoped.
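To make the scoping concrete, a minimal retention routine for deployer-side logs might look like the following sketch. The record format and field names are assumptions; since the Act requires retention for at least six months, a real system would err on the longer side rather than prune at the minimum.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=183)  # at least six months under the Act

def prune_logs(records, now=None):
    """Return only records still inside the retention window.
    records: list of dicts with an ISO-8601 'timestamp' field."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - RETENTION
    return [r for r in records
            if datetime.fromisoformat(r["timestamp"]) >= cutoff]

now = datetime(2026, 9, 1, tzinfo=timezone.utc)
records = [
    {"timestamp": "2026-08-15T10:00:00+00:00", "event": "match_scored"},
    {"timestamp": "2026-01-01T10:00:00+00:00", "event": "match_scored"},
]
kept = prune_logs(records, now=now)  # only the August record remains
```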
The timeline
The Act entered into force on 1 August 2024, and certain provisions already apply. Since 2 February 2025, a set of AI practices has been prohibited, including biometric categorization and emotion recognition in the workplace (subject to limited exceptions), and the Act's AI literacy obligations have applied since the same date.
The main date for staffing businesses is 2 August 2026, when the full suite of high-risk system obligations becomes enforceable for Annex III systems, including all employment-related AI.
Some industry commentary has suggested that the European Commission's Digital Omnibus package, proposed in November 2025, will push this deadline back. For now, though, that package is a proposal, not enacted law: any delay would still have to pass through the European Parliament and the Council.
The penalty framework
The Act’s enforcement structure follows a tiered model. For deployers who fail to meet their high-risk system obligations, fines can reach up to EUR 15 million or 3% of global annual turnover, whichever is higher. For use of prohibited AI practices, the ceiling rises to EUR 35 million or 7% of turnover. For providing incorrect or misleading information to regulators, up to EUR 7.5 million or 1%.
National market surveillance authorities, not a single EU-wide regulator, handle enforcement of the Act. In January 2026, Finland became the first member state to confer enforcement powers on its market surveillance authority pursuant to Article 99, making those powers fully operational there. Other member states are following. This decentralised model means enforcement priorities and interpretive approaches may differ across member states. For multi-country staffing operations, that variation creates planning challenges and, for those who engage early with national regulators, potential advantages.
The fine itself, though, is often the wrong thing to focus on. Regulators also have the power to withdraw or recall non-compliant AI systems from the market. For a staffing business whose operating model depends on technology-enabled matching and screening, that is the more commercially significant risk: a core tool pulled mid-contract, with immediate operational disruption.
Some of your tools might be exempt. Most of them probably aren’t.
One provision of the Act, Article 6(3), gets very little attention in industry commentary, and staffing operators should know about it. It sets out four conditions under which an AI system used in an employment context might fall outside the high-risk classification, even though it operates in a high-risk area.
- The system performs a narrow procedural task, such as sorting incoming documents into categories or flagging duplicates in a batch of applications.
- It improves the result of a previously completed human activity, like cleaning up language in a drafted contract.
- It detects patterns in prior human decisions, such as flagging inconsistencies in a manager's past performance ratings.
- It performs a purely preparatory task: indexing, searching, or translating source material before a human makes a decision.
However, Article 6(3), read together with Recital 53, explicitly states that none of those exemptions apply if the AI system involves profiling within the meaning of Article 4(4) GDPR. Profiling means any automated processing of personal data that evaluates personal aspects of an individual, including analysing or predicting work performance, reliability, behaviour, or location.
Most candidate matching tools, ranking algorithms, and workforce allocation systems do exactly that. They take personal data, apply automated logic, and produce predictions about suitability or fit. That is profiling, and it renders the exemption unavailable in these cases.
The practical implication: when you run your AI inventory and start classifying systems, you may look at the Article 6(3) exemptions and assume several of your tools are outside scope. For the tools that handle procedural or preparatory tasks, that may be correct. For anything that matches, ranks, evaluates, or allocates workers based on personal characteristics, it almost certainly is not.
This is a competitive question, not just a compliance one
The most useful way to think about the EU AI Act is as a market-shaping event that will separate operationally mature staffing businesses from those running on unexamined technology.
Enterprise clients with EU operations, particularly in regulated industries, are already building AI governance into their vendor selection criteria. Staffing businesses that can demonstrate compliant AI practices, transparent candidate processes, and documented oversight frameworks will have a measurable advantage in RFPs and preferred supplier negotiations.
There is a broader pattern here too. The EU AI Act is the first major AI regulation of its kind, but it will not be the last. Similar frameworks are taking shape in the UK, Canada, South Korea, Brazil, Taiwan, and at state level in the US. Investing in AI governance infrastructure now builds capacity that transfers across jurisdictions. The businesses treating compliance as a one-off project are solving for today. The ones building governance into their operating model are solving for the next decade.
What to do now to prepare your business
1. Map every AI system your business uses that touches candidate or worker decisions: screening, matching, ranking, chatbots, performance analytics, scheduling algorithms. Include tools embedded in third-party platforms you deploy. You cannot comply with obligations you do not know you have.
2. For each tool, determine whether you are the deployer, whether the system falls within Annex III’s employment category, and who the provider is. Document this in a way your compliance and operations teams can work from.
3. Contact every AI vendor in your stack and ask specific questions: Are they aware of the EU AI Act? Are they pursuing conformity assessment? Can they provide technical documentation, bias audit results, and usage logs? Will they contractually commit to supporting your deployer obligations?
4. Identify who in your organisation will oversee each high-risk AI system. Make sure those people have the training and authority to understand, monitor, and override AI outputs. Document the process. AI literacy obligations are already in effect, so this is a current requirement.
5. Draft the notifications, disclosures, and explanation frameworks you will need to inform candidates and workers about AI use. In a labour market where candidate experience drives placement volume, being clear and upfront about how you use technology is a differentiator, not a burden.
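The triage in steps 1 and 2, including the profiling carve-out from Article 6(3), can be captured in a simple inventory record. The sketch below is a simplification for illustration: the field names and the three-way classification are assumptions, not the Act's text, and a real assessment would be documented per tool.

```python
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    employment_related: bool     # within Annex III's employment category?
    profiling: bool              # profiles individuals (GDPR Art. 4(4))?
    narrow_or_preparatory: bool  # an Article 6(3)-style task?

def classify(tool):
    """Simplified triage: profiling defeats the Art. 6(3) carve-out."""
    if not tool.employment_related:
        return "out of scope"
    if tool.narrow_or_preparatory and not tool.profiling:
        return "possibly exempt (document the Art. 6(3) assessment)"
    return "high-risk"

inventory = [
    AITool("CV de-duplicator", True, False, True),
    AITool("candidate ranking engine", True, True, False),
]
labels = {t.name: classify(t) for t in inventory}
```

Even in this toy form, the logic makes the key point visible: a ranking engine that profiles candidates is high-risk regardless of how "assistive" the vendor describes it as being.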
The staffing businesses that will handle this transition best are the ones that start now, move methodically, and treat AI governance as an operating capability rather than a legal exercise. The EU AI Act is the clearest signal yet that the era of unexamined AI deployment in workforce decisions is ending. For operators willing to get ahead of it, the reward is not just compliance but the trust of clients and candidates who increasingly expect it.
This guest post was contributed by Nazareth & Partners. Contact us if you’re interested in submitting a guest post.
Nazareth & Partners advises staffing agencies, EORs, MSPs, RPOs, and VMS platforms on cross-border compliance, including AI governance and EU regulatory readiness. This article is for informational purposes and does not constitute legal advice.