The New UK Data Act: A Guide for HR Leaders
The Data Use and Access Act is here. I break down what it really means for HR, from AI governance to your daily data handling.

Hello,
You have likely seen the headlines about the new Data Use and Access Act 2025 (DUAA). When new data laws appear, the immediate reaction is often to brace for a new wave of compliance burdens.
However, the view from the Information Commissioner's Office (ICO) is clear: the DUAA is designed to be an opportunity. It amends, but does not replace, the UK GDPR and DPA 2018. Its goal is to promote innovation and make things easier for organisations, while still protecting people's rights.
So, let's look at this pragmatically. How can you, as an HR leader, leverage these changes? And what are the few new requirements you genuinely need to act on?
From a Restricted Model to an Open Framework
Previously, making a significant decision about someone based solely on automated processing was highly restricted. You could only do it if it was necessary for a contract, authorised by law, or if you had the person's explicit consent.
The DUAA removes these specific restrictions. This is a significant shift. It means you can now potentially make solely automated decisions in a wider range of situations, relying on other lawful bases such as 'legitimate interests'. However, this new flexibility comes with an important trade-off: you must have appropriate safeguards in place.
The Two Questions Every HR Leader Must Ask
The entire framework hinges on two key definitions from the Act:
Is it a 'significant decision'? A decision is considered significant if it has a legal effect, or a similarly significant effect, on the person. In the world of HR, this covers most of our critical work:
The decision to shortlist a candidate for an interview (or not).
Placing an employee on a formal performance improvement plan.
Decisions affecting pay, promotion, or contract termination.
Is the decision based 'solely on automated processing'? A decision is based solely on automated processing if there is no meaningful human involvement in making it.
This concept of "meaningful human involvement" is the absolute crux of the new law. The ICO's guidance implies this is not a low bar. It is not a junior recruiter glancing at a list of 500 rejections generated by an AI and simply clicking 'approve'. Meaningful involvement requires a person with the appropriate authority and training to properly consider the automated recommendation and to be able to override it.
Your Legal Safeguards: The Non-Negotiables
If your process makes a significant decision based solely on automated processing, you are now legally required to have very specific safeguards in place. You must provide a simple way for the individual to:
Be informed that a decision was made about them via an automated process.
Make representations about the decision (i.e., give their side of the story).
Obtain human intervention to have the decision reviewed.
Contest the decision itself.
This is your new compliance checklist for any AI-driven decision-making in your HR lifecycle.
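For those of you working with HR tech or data teams, the logic of this framework can be sketched in a few lines of code. This is purely illustrative: the class, field, and function names below are my own shorthand for the Act's concepts, not anything defined in the legislation or ICO guidance.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """An automated (or partly automated) decision about a person."""
    has_legal_or_similar_effect: bool   # e.g. shortlisting, a PIP, pay, termination
    meaningful_human_involvement: bool  # a trained reviewer who can genuinely override
    uses_special_category_data: bool    # e.g. health data, racial or ethnic origin

# The four safeguards required when a significant decision is solely automated
REQUIRED_SAFEGUARDS = [
    "inform the individual that an automated decision was made",
    "allow them to make representations about the decision",
    "provide human intervention to review the decision on request",
    "allow them to contest the decision",
]

def safeguards_required(d: Decision) -> list[str]:
    """Apply the two-question test and return the safeguards that must be in place."""
    significant = d.has_legal_or_similar_effect
    solely_automated = not d.meaningful_human_involvement
    if d.uses_special_category_data:
        # Stricter pre-DUAA rules still apply here (see the warning below):
        # explicit consent or substantial public interest laid down in law.
        raise ValueError("special category data: stricter conditions apply")
    if significant and solely_automated:
        return REQUIRED_SAFEGUARDS
    return []

# Example: AI shortlisting where a recruiter merely rubber-stamps the output
shortlisting = Decision(
    has_legal_or_similar_effect=True,
    meaningful_human_involvement=False,
    uses_special_category_data=False,
)
print(safeguards_required(shortlisting))  # all four safeguards are required
```

The point of the sketch is the gate: both questions must be answered ("significant" and "solely automated") before the four safeguards become mandatory, and special category data routes you to a stricter regime entirely.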
A Critical Warning on Special Category Data
It is vital to note that the old, stricter rules still apply to special categories of personal information (e.g., health data, racial or ethnic origin). The DUAA keeps the restriction on using this type of information in automated decision-making. You can only do so with explicit consent or where it's necessary for reasons of substantial public interest laid down in law.
A Pragmatic Action Plan for Recruitment Leaders
My standing view on regulation like this is that it is not here to slow innovation down, but to allow us to go faster, safely. To that end, here is a checklist for you to follow:
Audit Your Recruitment Tech Stack: Map every tool you use that involves automation, from sourcing to screening. For each one, ask: is it making a 'significant decision'? And is there 'meaningful human involvement'?
Question Your Vendors Relentlessly: Do not take marketing claims about "AI-powered fairness" at face value. Ask them directly: How does your system ensure meaningful human involvement is possible? What features are built in to help us meet our obligation to inform candidates and allow them to contest decisions?
Update Your Policies and Privacy Notices: Your privacy notices for candidates must be updated to reflect this new framework and transparently explain their rights regarding automated decisions.
Train Your Hiring Managers and Recruiters: Your people are your most important guardrail. They need to be trained on what meaningful human involvement looks like in practice. They must feel empowered to question and override an AI's recommendation.
The direction from the regulator is clear. The law is catching up with the technology, and the focus is squarely on ensuring that AI is used in a way that is fair, transparent, and ultimately, human-centric.
A First for H.A.I.R.: Public AI Masterclasses
Something I don't normally do...
My three-hour AI workshops are usually reserved for private corporate teams. But after continuous requests, I'm opening up my calendar for a limited number of public sessions this August for the very first time.
These aren't one-hour overview webinars. They are comprehensive, capability-building sessions designed for individual HR and Talent Acquisition professionals. To ensure a high-quality, interactive experience, seats are strictly limited to just 20 per workshop.
Choose the track that's right for you:
Track 1: For Recruiters & TA Professionals: The AI-Powered Recruiter Workshop This is a practical deep dive into the "how". We'll move beyond basic prompting to build the skills you need to work faster, smarter, and more strategically. You will leave having mastered the PRIME framework in a hands-on session. (Dates: 5th & 26th August)
Track 2: For HR Directors & People Leaders: The AI Readiness Workshop This is a strategic session focused on the "why" and "what". We'll cover your role as the "Ethical Guardian", build "Guardrails" for your organisation, and develop a responsible AI roadmap. (Dates: 7th & 28th August)
If you're ready to move beyond the hype and build real, practical AI skills, this is your chance. Places are offered on an approval basis to ensure the right mix of professionals.
If this raises questions about your own AI governance, that is a good thing. It is the first step towards building a responsible and defensible recruitment strategy.
All the best,
H.A.I.R. (AI in HR)
Putting the AI in HR. Safely.