AI, HR and governance in 2026: what has changed and what HR leaders must do next
Are organisations prepared to govern AI in a world where regulation, politics and workforce expectations are moving in different directions at the same time?

Hello H.A.I.R. Community,
Welcome to this final 2025 edition of the H.A.I.R. newsletter.
As we move into 2026, the challenge facing HR leaders is no longer whether AI should be used at work. That question has been settled. The real issue is whether organisations are prepared to govern AI in a world where regulation, politics and workforce expectations are moving in different directions at the same time.
What many HR teams are experiencing now is not confusion, but contradiction. The same system, used in the same way, can be seen as responsible practice in one country and unacceptable risk in another. This is becoming normal, not exceptional.
This edition focuses on what that means for HR leaders in practice, not in theory.
One HR technology stack, multiple regulatory expectations
The idea of a single global approach to “ethical AI” is fading fast. Instead, HR leaders are being asked to manage overlapping and sometimes conflicting expectations.
In parts of Europe, the emphasis is on privacy, explainability and demonstrable control of bias
In the UK, the focus is increasingly on fairness at work, worker protections and enforceability
In the US, state and city requirements may point in one direction while federal posture points in another
In several growth markets, data sovereignty, localisation and individual accountability are becoming more prominent
For HR, this means that governance can no longer be treated as a policy document that applies everywhere in the same way. It has to be operational, adaptable and locally defensible.
Why this is now a strategic HR issue
AI governance is no longer confined to compliance or risk teams. It directly affects three areas that sit squarely in the HR remit.
Operational resilience
If your HR systems cannot cope with different rules on data use, retention and transfer, they will become fragile. Manual workarounds and informal practices are now a visible risk.
Talent acquisition and management
Recruitment, scheduling, performance assessment and dismissal are increasingly supported by automated or semi-automated systems. These decisions attract scrutiny because they affect livelihoods. The bar for justification is rising.
Trust with the workforce
Employees and candidates expect to understand how decisions are made and how they can be challenged. When AI is involved, opacity erodes confidence quickly, even if the system is technically compliant.
In short, AI governance is becoming part of how employees judge whether an organisation is fair and credible. AI governance will become a core part of your employee value proposition (EVP).
What is shifting in key regions
Europe: making AI governance workable
European regulation is evolving to address a problem HR teams have faced for years: strong protections that are difficult to implement in real-world HR operations.
There is growing recognition that organisations need clearer pathways to:
analyse workforce data responsibly
test for bias using appropriate controls
build internal AI capability without relying on legally fragile consent models
For HR leaders, this does not remove responsibility. It increases it. Greater flexibility comes with an expectation of stronger internal discipline around data separation, access control and accountability.
United Kingdom: AI meets employment law
The UK is embedding expectations around AI use directly into employment rights and enforcement.
If technology influences decisions about pay, hours, workload, discipline or dismissal, it is increasingly treated as part of management practice rather than a neutral tool. This raises the standard HR teams are expected to meet.
Key implications include:
automated performance signals must be explainable
work scheduling systems must be defensible in terms of fairness and stability
human review must be real, not symbolic
If a decision could end up in front of a tribunal, the role of AI in that decision needs to be clear and justifiable.
United States: compliance depends on context
In the US, HR leaders are operating in a fragmented environment. Some jurisdictions expect evidence of bias mitigation. Others place emphasis on job-relatedness and business necessity.
The practical challenge here is not whether to audit systems, but how to document and explain what you are doing. The same technical work can be interpreted very differently depending on how it is framed.
For HR, alignment between legal, compliance and people teams on language and documentation is becoming as important as the controls themselves.
Beyond Europe and North America
In several countries, AI governance is developing in ways that do not mirror Western models.
Some are emphasising:
data localisation and national infrastructure
strict liability for harms caused by AI systems
personal accountability for senior leaders
For global HR teams, this changes decisions about system architecture, vendor choice and leadership responsibility.
What this means for HR operations
Recruitment and selection
AI-supported hiring is now widely accepted, but expectations are changing.
Organisations are expected to:
show that selection criteria are job-related
understand how tools affect different groups
provide explanations that make sense to candidates
offer credible routes to challenge or review decisions
Recruitment processes that rely on opaque scoring or unreviewable filters are becoming harder to defend.
Scheduling and workforce management
Workforce optimisation tools are increasingly being assessed not only on efficiency, but on fairness and predictability.
Where systems are designed purely for flexibility, they may conflict with emerging expectations around stable hours and transparent treatment. HR teams may need to revisit what success looks like in scheduling algorithms.
Performance management and monitoring
Employee monitoring, productivity scoring and behavioural analytics sit in a high-risk area.
Even where legal frameworks are still developing, workforce tolerance for intrusive or poorly explained monitoring is low. HR leaders should assume these practices will attract scrutiny from regulators, unions and employees alike.
A useful test is simple: would you be comfortable explaining this system to your strongest performers, not just using it to justify action on your weakest?
What HR leaders should prioritise in the first 90 days of 2026
1) Know what you are using
Create and maintain a clear register of systems that influence people decisions. For each system, be able to answer:
what it does
what decisions it influences
what data it uses
who is accountable for it
how outcomes are monitored
how individuals can challenge decisions
If you cannot describe it, you cannot govern it.
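As one illustrative way to make such a register operational (the field names and example entry below are assumptions for the sketch, not a prescribed schema), each system could be captured as a structured record whose completeness can be checked automatically:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in a register of systems that influence people decisions.
    Field names are illustrative, not a prescribed schema."""
    name: str                        # the system itself
    purpose: str                     # what it does
    decisions_influenced: list[str]  # which people decisions it feeds into
    data_used: list[str]             # categories of data it consumes
    accountable_owner: str           # who is accountable for it
    monitoring: str                  # how outcomes are monitored
    challenge_route: str             # how individuals can challenge decisions

# A hypothetical entry, invented for illustration.
register = [
    AISystemRecord(
        name="CV screening tool",
        purpose="Ranks applications against role criteria",
        decisions_influenced=["shortlisting"],
        data_used=["CV text", "application answers"],
        accountable_owner="Head of Talent Acquisition",
        monitoring="Quarterly outcome review",
        challenge_route="Candidate appeal via recruitment team",
    ),
]

# The register is only governable if every field can actually be answered.
incomplete = [r.name for r in register if not all(vars(r).values())]
```

A structured record like this makes the "can you describe it?" test mechanical: any entry with a blank field surfaces in `incomplete` and flags a governance gap.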
2) Get serious about bias analysis
If you intend to assess fairness, design a secure and disciplined way to do so. This means:
clear purpose limitation
controlled access to sensitive attributes
separation between analysis and operational use
proper documentation and audit trails
Ad hoc analysis is no longer sufficient.
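As a minimal sketch of one widely used disparity check, the US "four-fifths" rule of thumb compares selection rates across groups; the numbers below are invented for illustration, and a ratio under 0.8 is a flag for further review, not proof of bias:

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Proportion of applicants from a group who were selected."""
    return selected / applicants

def adverse_impact_ratio(rates: dict[str, float]) -> float:
    """Ratio of the lowest group selection rate to the highest.
    Values below 0.8 are commonly treated as a trigger for deeper
    analysis (the 'four-fifths' rule of thumb)."""
    return min(rates.values()) / max(rates.values())

# Invented illustrative numbers, not real data.
rates = {
    "group_a": selection_rate(45, 100),  # 0.45
    "group_b": selection_rate(30, 100),  # 0.30
}

ratio = adverse_impact_ratio(rates)
flagged = ratio < 0.8  # True here: 0.30 / 0.45 ≈ 0.67
```

Even a simple check like this only satisfies the disciplines listed above if it runs in a controlled environment: sensitive attributes accessed under purpose limitation, results documented, and the analysis kept separate from the operational system that makes selections.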
3) Treat human oversight as a capability
Human review only works if reviewers understand the system well enough to question it.
Training should cover:
what the system is designed to optimise
its limitations and blind spots
when to override outputs
how to record decisions consistently
how to respond to challenges
Without this, oversight is performative rather than meaningful.
4) Raise the bar in procurement
HR technology procurement now carries governance consequences.
Expect vendors to provide:
clear explanations of how decisions are supported
evidence of job-relatedness and validity
support for audits and documentation
flexibility to meet local data and regulatory requirements
Procurement decisions shape your risk profile long after implementation.
5) Be explicit about what you will not automate
Regulation may differ, but organisational values do not have to.
HR leaders should be clear about boundaries, such as:
decisions that will always involve human judgement
practices that are off-limits even if legally permissible
monitoring approaches that are considered unacceptable
Clarity here supports consistency and trust.
Questions to reflect on this quarter
Which HR decisions rely most heavily on automated signals today?
Could you explain those decisions clearly to an employee who disagrees with the outcome?
Do your systems reflect how you want to be perceived as an employer, not just what is technically allowed?
Listen in: Does LinkedIn Hate Women, with The Chad & Cheese Podcast

Chad and Joel were kind enough to welcome me onto their podcast to discuss some of my recent work with a team shedding light on experiments relating to plausible gender proxy bias on LinkedIn. Listen in here.
Closing note
The defining feature of AI in HR is not speed or scale. It is impact. These systems shape careers, income and dignity at work.
The organisations that navigate 2026 well will not be those with the most advanced tools, but those with the strongest judgement about when and how to use them.
As always, thank you for reading H.A.I.R. Have a great New Year and 2026!
Until next time,
H.A.I.R. (AI in HR)
Putting the AI in HR. Safely.
Here's how H.A.I.R. can help you put the AI in HR:
H.A.I.R. Newsletter: get authoritative, pragmatic, and highly valuable insights on AI in HR directly to your inbox. Subscribe now.
AI Governance QuickScore Assessment: understand your organisation's AI governance maturity in minutes and identify key areas for improvement. Take your QuickScore here.
Advisory Services: implement robust AI Governance, Risk, and Compliance (GRC) with programmes designed for HR and Talent Acquisition leaders. Contact us for a consultation.
Measure Your Team's AI Readiness with genAssess: stop guessing and start measuring your team's practical AI application skills. Discover genAssess.