
AI tools that answer a question when you ask one: everyone knows those by now. But the next phase has already begun: AI agents that act autonomously. They look up information, send emails, schedule meetings, call APIs, and execute processes, without a human confirming every step.
For HR and compliance professionals, this is a new challenge: who is responsible when an agent makes a mistake? What is an agent allowed to do with personal data? And what does the EU AI Act expect of you?
What exactly is an AI agent?
An AI agent is a system that receives a goal and autonomously takes steps to achieve it. The difference from a regular chatbot: the agent acts, not just answers.
- A recruitment agent screens CVs, sends rejections, and schedules interviews.
- A finance agent collects invoices, checks for discrepancies, and sends reminders.
- A customer service agent resolves complaints, adjusts orders, and escalates complex cases.
Why this is urgent now
Until recently, AI agents were a lab concept. That's over. Platforms like Microsoft Copilot, Salesforce Agentforce, and dozens of SaaS tools already offer agents as standard. Employees sometimes activate them without IT or compliance knowing.
- Gartner predicts that by 2028, more than 33% of business software will contain agents.
- Research by Salesforce shows that 82% of IT leaders are already running pilot agents or actively experimenting with them.
- The EU AI Act (with most obligations applying from 2026) explicitly places responsibility on organisations that deploy AI systems that make autonomous decisions.
The three biggest risks of unmanaged agents
Without governance of your agents, you face three concrete risks.
- Data risk: agents often have broad access rights. An agent permitted to read emails reads everything — including confidential HR files or financial data.
- Liability risk: if an agent makes an incorrect decision (rejecting an applicant based on bias, sending a customer the wrong information), who is responsible? Without logging and human oversight, you cannot demonstrate what happened or why.
- Compliance risk: the EU AI Act classifies some agents as high-risk systems. Think of agents involved in recruitment, assessment, or credit decisions. For these systems, strict requirements on transparency, human oversight, and documentation apply.
What the EU AI Act says about autonomous systems
The EU AI Act requires a robust risk management system for high-risk AI systems (Article 9) and places obligations on both providers and deployers of those systems. For agents involved in HR processes, credit assessment, or essential services, this concretely means:
- Human oversight: there must always be a human who can intervene or reverse a decision.
- Logging: every action by the agent must be traceable (see the sketch after this list).
- Transparency: those affected (such as applicants) have the right to know that an AI system plays a role in decisions about them.
- Accuracy and robustness: the system must demonstrably function correctly and minimise errors.
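To make the logging requirement tangible, below is a minimal, hypothetical sketch of an audit record written for every agent action. The function and field names (agent_id, approved_by, and so on) and the recruitment example are illustrative assumptions, not terms prescribed by the AI Act; the point is that who did what, to whom, and with whose approval is recorded and retrievable.

```python
# Hypothetical illustration: one way to make every agent action traceable.
# Field names are assumptions, not terminology from the EU AI Act.
import json
from datetime import datetime, timezone

def log_agent_action(agent_id: str, action: str, target: str,
                     outcome: str, approved_by: str | None) -> dict:
    """Append one audit record for a single agent action."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,        # which agent acted
        "action": action,            # what it did
        "target": target,            # what or whom it acted on
        "outcome": outcome,          # result of the action
        "approved_by": approved_by,  # human approver, if any
    }
    # In practice this would go to tamper-evident, centrally retained storage.
    with open("agent_audit.log", "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: a recruitment agent schedules an interview after human approval.
log_agent_action(
    agent_id="recruitment-agent-01",
    action="schedule_interview",
    target="candidate-4821",
    outcome="invitation_sent",
    approved_by="recruiter@example.com",
)
```

A log like this is also what lets you answer the liability question above: it shows which decision was made, by which agent, and whether a human was in the loop.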
Practical: how to set up agentic AI responsibly
You don't need to ban agents. But you do need to manage them. Here are five concrete steps.
- 1. Inventory which agents are already active. Ask IT for an overview of activated AI features in existing tools (Microsoft, Salesforce, HubSpot, etc.).
- 2. Classify per agent: what does the system do and which data does it access? High-risk applications (HR, financial, legal) need extra attention.
- 3. Define access rights. Give agents minimal rights: only access to what is needed for the specific task.
- 4. Build human checkpoints. Set thresholds: which decisions may an agent make independently and which must be approved by a human? (A sketch of such a policy follows these steps.)
- 5. Document everything. Record which agents you use, for what purpose, with which rights, and how oversight is arranged. This is your evidence for regulators.
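To make steps 3 and 4 concrete, here is a minimal, hypothetical sketch of an agent policy that combines least-privilege access with a human checkpoint. The class, data sources, and action names are illustrative assumptions, not features of any specific platform; most agent platforms expose their own permission and approval settings that express the same idea.

```python
# Hypothetical sketch: minimal access rights plus a human-approval checkpoint.
# All names (data sources, actions) are illustrative.
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    name: str
    allowed_data: set[str]       # data sources the agent may read
    allowed_actions: set[str]    # actions it may take on its own
    requires_approval: set[str] = field(default_factory=set)  # human checkpoint

    def can_execute(self, action: str) -> str:
        if action in self.allowed_actions:
            return "allowed"
        if action in self.requires_approval:
            return "needs_human_approval"
        return "blocked"

# A recruitment agent: may read CVs and the interview calendar and schedule
# interviews on its own, but every rejection passes through a human.
recruitment_agent = AgentPolicy(
    name="recruitment-agent",
    allowed_data={"cv_inbox", "interview_calendar"},
    allowed_actions={"schedule_interview", "send_confirmation"},
    requires_approval={"send_rejection"},
)

print(recruitment_agent.can_execute("send_rejection"))    # needs_human_approval
print(recruitment_agent.can_execute("delete_candidate"))  # blocked
```

Written down like this, the policy doubles as documentation (step 5): it records which agent has which rights and where the human checkpoints sit.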
The role of AI literacy in agentic AI
Employees who deploy agents without training are the biggest risk. They give agents overly broad instructions, connect the wrong data sources, or blindly trust the output without checking it.
Training must specifically address agents: how do they work, what can they get wrong, and when should you manually intervene?
- Understand the instructions (prompts) you give an agent: vague instructions produce unpredictable results.
- Always check critical outputs: an agent that drafts a contract needs a human final review.
- Know when to escalate: if an agent behaves unexpectedly, stop the process and report it to IT or compliance.
Conclusion: agent governance is not a luxury, but a requirement
Agentic AI is powerful and productive, but only if you have the basics in place. That means: knowing which agents you use, understanding what they can do, limiting access rights, and ensuring human oversight.
The EU AI Act makes this non-optional for high-risk applications. And even outside that category: an agent that makes a mistake is your responsibility, not the vendor's.
Qrio helps your teams understand how AI agents work, what risks they bring, and how to set up governance aligned with the EU AI Act. So you benefit from the speed of agents without losing control.