15 March 2026 · 9 min read

5 months left: what you need to do now for the EU AI Act deadline of August 2026

On 2 August 2026, the heaviest obligations of the EU AI Act take effect for high-risk AI systems in HR. CV screening, performance assessment, and employee selection all fall under this category. Are you ready?

Team Qrio
AI governance & compliance

The clock is ticking. On 2 August 2026, one of the most impactful provisions of the EU AI Act takes effect: full compliance obligations for high-risk AI systems. And HR is at the top of the list.

If your organisation uses AI for screening applicants, assessing performance, scheduling rosters, or making decisions about employment contracts, you have approximately five months to get everything in order.

Note: the Digital Omnibus procedure at the European Commission may adjust deadlines. Plan for the original date of 2 August 2026. Consult legal advice for your specific situation.

Which HR systems fall under the high-risk category?

Annex III of the EU AI Act lists high-risk applications. For HR, this concerns AI systems used in:

  • Recruitment and selection: automated CV screening, ranking of candidates, rejection of applicants.
  • Performance management: automatic assessment of employee output, productivity scoring systems.
  • Promotion and termination: AI recommendations about career steps or contract termination.
  • Task allocation and roster planning: automated systems that distribute work schedules or tasks.
  • Employee monitoring: systems that track behaviour, attendance, or engagement and act on it.
Tip: Unsure whether your systems fall under Annex III? Have this assessed by a legal expert specialising in the EU AI Act.

What the law specifically requires of you

As a deployer (user) of a high-risk AI system, you have seven concrete obligations.

  • Risk management: a demonstrable process to identify, assess, and manage risks throughout the entire lifecycle.
  • Human oversight: there must always be a person who can understand, correct, and, if necessary, override AI decisions.
  • Data quality: the data the system works with must be accurate, relevant, and representative.
  • Technical documentation: detailed records of how the system works and what its limitations are.
  • Logging and registration: automatic logging of relevant events for traceability.
  • Transparency towards employees: those affected must know that an AI system plays a role in decisions about them.
  • AI literacy (Article 4): employees who work with these systems must be demonstrably trained and assessed.

The fines: why this is serious

The EU AI Act uses a layered fine system. For violations of high-risk system obligations, you risk fines of up to €15 million or 3% of global annual turnover, whichever is higher.

But financial risks aren't the only concern. Reputational damage, loss of customer trust, and legal proceedings from affected employees pose at least as great a risk as the direct fine.

Your action plan for the coming 5 months

Five months sounds like a lot, but compliance trajectories typically take 6 to 12 months. So you really have no time to lose.

  • Month 1 (now): inventory all AI systems in HR processes. Deliver a list with name, vendor, application, and data involved.
  • Month 1–2: classify each application. Is it high-risk? What is the risk to employees? Who is the system owner?
  • Month 2–3: assess your vendors' documentation. High-risk systems must provide technical documentation. Actively request this.
  • Month 2–4: build human checkpoints. Define which AI decisions always require human approval.
  • Month 3–4: train and assess your HR staff on AI literacy (Article 4). Document who was trained, when, and with what result.
  • Month 4–5: set up logging and monitoring. Ensure you can demonstrate in an audit what the system did, when, and on the basis of which data.
  • Month 5: conduct an internal audit check. Can you demonstrate all obligations? Fill the gaps.
Tip: Start with the systems that have the most impact on individual employees. These pose the greatest risk and require the most attention.
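To make the logging and monitoring step more concrete: the goal is an audit trail showing what the system did, when, and on the basis of which data, plus who exercised human oversight. The sketch below is one minimal way to structure such a record; the field names (system, decision, human_reviewer, input_summary) are illustrative assumptions, not terms from the Act itself.

```python
import json
import datetime

def audit_record(system_name, decision, human_reviewer, input_summary):
    """Build one audit-log entry capturing what the AI system did, when,
    on which data, and who oversaw it. All field names are illustrative."""
    return {
        # When the decision was made (timezone-aware, ISO 8601)
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system_name,            # which AI system acted
        "decision": decision,             # what the system decided
        "human_reviewer": human_reviewer, # who exercised human oversight
        "input_summary": input_summary,   # which data the decision was based on
    }

# Example: a CV-screening tool shortlists a candidate, reviewed by an HR employee
entry = audit_record("cv-screener", "shortlisted", "j.doe", "CV + cover letter")
print(json.dumps(entry))
```

In practice such entries would be written append-only to a log store your auditors can query; the point is that every automated decision is traceable to its inputs and to a named human reviewer.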

The common mistake: relying on the vendor

Many HR managers assume the AI vendor handles compliance. This is a misconception. The EU AI Act makes a clear distinction between the provider (the vendor that builds the system) and the deployer (your organisation that uses it).

As a deployer, you are responsible for how you use the system, which data you enter, how you organise oversight, and how you inform those affected.

  • Ask your vendor for technical documentation and the conformity declaration.
  • Contractually verify who is responsible for which obligations.
  • Document what you do to fulfil your own deployer obligations.

What AI literacy has to do with this

HR employees who work daily with AI-supported tools must understand what the system does, what its limitations are, and when they need to intervene. This is not a vague expectation, but an explicit requirement in Article 4.

An employee who blindly adopts the output of an AI system does not constitute effective human oversight. Oversight requires understanding.

  • What exactly does the AI system do and how does it make decisions?
  • What errors can it make (bias, hallucinations, blind spots)?
  • How do I recognise a situation where I need to intervene?
  • What are my rights and obligations as a user of this system?
  • How do I report an incident or case of doubt?
Tip: Training alone is not sufficient. You must be able to demonstrate that employees understood the material: assessment and registration are mandatory parts of your compliance dossier.

Checklist: are you ready for 2 August?

  • Do you have an inventory of all AI systems in HR processes?
  • Have you determined which systems fall under Annex III (high-risk)?
  • Do you have technical documentation and conformity declarations from your vendors?
  • Are human checkpoints in place for all high-risk decisions?
  • Are logging and monitoring operational?
  • Have affected employees been informed about AI use in processes that concern them?
  • Have HR employees been trained and assessed on AI literacy (with evidence)?
  • Is there an internal compliance dossier with all of the above?

Conclusion: five months is enough, but only if you start now

The August deadline is the moment when abstract legislation becomes concrete for many HR departments. Systems you use daily require demonstrable governance, training, and transparency.

Organisations that start now will meet the deadline. Organisations that wait for more clarity risk being too late — with all the consequences that entails.

Qrio helps HR and compliance teams with the AI literacy component: training, assessment, and reporting ready for your audit dossier. This is how you systematically address one of the seven obligations.

Ready to start with AI literacy?

Discover how Qrio helps your organisation use AI safely and effectively.

View our plans
