Why HR Needs a 'Head of Preparedness' in 2026
If you are an HR leader entering 2026 with a "wait and see" approach to AI governance, you are already behind.

Hello to the H.A.I.R. community, and welcome to 2026.
As we look at the year ahead, the primary challenge for HR is no longer about "adopting" AI. We have adopted it. The challenge for this year is governing a workforce where human and machine capabilities are increasingly indistinguishable.
Last year, the conversation shifted. The turning point wasn't a new model release but a change in safety architecture. In April 2025, OpenAI updated its Preparedness Framework, and at the end of the year they began hiring for a new kind of leader: the Head of Preparedness.
While these developments occurred in the technical domain, they set the standard for how we must manage people and systems this year. The era of reactive policy - writing a rule after something breaks - is over. 2026 requires anticipatory governance.
Here are four principles from that framework that should define your HR strategy for the next twelve months.
1. Filter for 'Net New' Risk
In 2025, many HR teams exhausted themselves trying to police every interaction with generative AI. This year, we need a better filter.
OpenAI’s framework introduced a critical distinction: they only track risks that are "Net New" and "Irremediable."
This is the standard we should apply to workforce governance in 2026.
Net New: If an employee uses an AI agent to draft a poor email, that is not a new risk; we have managed poor communication for decades. However, if an autonomous agent is filtering CVs based on inferred behavioural data without oversight, that is a Net New risk.
Irremediable: Can the error be fixed? A bad draft is fixable. A bias encoded into a layoff algorithm causes permanent damage.
The 2026 Priority: Stop trying to govern everything. Focus your resources exclusively on the risks that existing performance management cannot catch.
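To make that filter concrete, here is a minimal sketch of the triage logic in Python. The class, field names, and example risks are my own illustration, not part of OpenAI's framework:

```python
from dataclasses import dataclass

@dataclass
class WorkforceRisk:
    name: str
    net_new: bool        # not something existing performance management already catches
    irremediable: bool   # the damage would be permanent if it occurred

def needs_dedicated_governance(risk: WorkforceRisk) -> bool:
    # Only risks that are BOTH net new and irremediable earn dedicated
    # governance resources; everything else stays with existing controls.
    return risk.net_new and risk.irremediable

risks = [
    WorkforceRisk("Agent drafts a poor email", net_new=False, irremediable=False),
    WorkforceRisk("Autonomous agent filters CVs on inferred behavioural data",
                  net_new=True, irremediable=True),
]
for risk in risks:
    verdict = "govern" if needs_dedicated_governance(risk) else "existing controls"
    print(f"{risk.name} -> {verdict}")
```

Anything that fails either test stays with the controls you already have.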
2. Capabilities vs. Safeguards
We need to change how we buy and deploy technology. For the last few years, the focus has been on Capabilities: what the tools can do. "It can predict attrition," or "It can automate sourcing."
The new standard is Safeguards. The updated framework makes a clear separation between a "Capabilities Report" and a "Safeguards Report." It asserts that high capability without proven defence is a liability, not an asset.
The 2026 Priority: When you evaluate a new tool this year, do not ask for a feature list. Ask for the Safeguards Report. Ask for the "stress test" that proves the system will not hallucinate policy or discriminate against protected groups. If the vendor cannot prove that the defence works, do not deploy the capability.
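If you want to hard-wire that rule into procurement, a simple deployment gate captures it. A sketch, with field names that are assumptions for illustration rather than any standard evaluation schema:

```python
from dataclasses import dataclass

@dataclass
class VendorEvidence:
    capabilities_report: bool   # the feature list: what the tool can do
    safeguards_report: bool     # documented defences and their limits
    stress_test_passed: bool    # e.g. proof it won't hallucinate policy or
                                # discriminate against protected groups

def approve_for_deployment(evidence: VendorEvidence) -> bool:
    # Capability alone never approves a tool; deployment is gated
    # entirely on proven defences.
    return evidence.safeguards_report and evidence.stress_test_passed

demo = VendorEvidence(capabilities_report=True,
                      safeguards_report=False,
                      stress_test_passed=False)
print(approve_for_deployment(demo))  # False: impressive features, unproven defence
```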
3. The 'Sandbagging' Reality
One of the emerging risks identified in the framework is "Sandbagging". This is where a system intentionally underperforms to hide its true capabilities.
We are seeing a parallel in the workforce. As AI agents handle more execution work, the correlation between time spent and value created has broken. This creates a visibility crisis. We risk "Human Sandbagging," where genuine productivity gains stay hidden, and "Skill Atrophy," where reliance on tools masks a decline in critical thinking, even as the removal of mundane work creates its own burnout risk.
The 2026 Priority: We need to audit our performance metrics. If you are still measuring success by hours logged or volume of output, you are measuring a metric that AI has rendered meaningless. This year, shift assessment toward judgment, oversight, and the ability to direct complex systems.
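As a starting point, that audit can be as blunt as tagging each metric by what it actually measures. A rough sketch, with illustrative metric names and categories of my own:

```python
# Metric names and the two categories are illustrative, not a standard taxonomy.
INPUT_BASED = {"hours_logged", "output_volume", "messages_sent"}
JUDGMENT_BASED = {"decision_quality", "agent_oversight", "system_direction"}

def audit_metric(metric: str) -> str:
    if metric in INPUT_BASED:
        return "retire: measures effort that AI has rendered meaningless"
    if metric in JUDGMENT_BASED:
        return "keep: measures judgment and oversight"
    return "review manually"

for metric in ["hours_logged", "agent_oversight", "engagement_score"]:
    print(f"{metric}: {audit_metric(metric)}")
```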
4. Governance as Engineering, Not Documentation
The role of "Head of Preparedness" is not a policy-writing role; it is a technical one. The mandate is to build "scalable safety pipelines."
In HR, we still rely heavily on documents - handbooks and codes of conduct - to govern behaviour. These rely on voluntary compliance. But in a world of autonomous agents, "trust but verify" is too slow.
The 2026 Priority: Move toward Governance-by-Design.
Don't just write a policy on work-life balance; configure your systems to queue non-urgent messages after hours.
Don't just write a policy on inclusive language; deploy local checks that flag bias in real time before a message is sent.
Make the safe way of working the path of least resistance.
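To show how small this shift can be in practice, here is a minimal sketch of the first example above, the after-hours queue. The working-hours window and the urgency flag are illustrative assumptions:

```python
from datetime import datetime, time

# Illustrative working-hours window; set this from your own policy.
WORK_START, WORK_END = time(8, 0), time(18, 0)

def route_message(sent_at: datetime, is_urgent: bool) -> str:
    """Deliver urgent or in-hours messages now; queue everything else."""
    in_hours = WORK_START <= sent_at.time() <= WORK_END
    return "deliver now" if is_urgent or in_hours else "queue until next working morning"

print(route_message(datetime(2026, 1, 5, 21, 30), is_urgent=False))
# -> queue until next working morning
```

The point is not the dozen lines of code; it is that the policy now executes itself instead of waiting for voluntary compliance.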
The Year of Preparedness
The defining characteristic of the "Head of Preparedness" role is the ability to make "high-stakes technical judgments under uncertainty."
That is the job description for every CHRO in 2026. We cannot wait for perfect regulatory clarity or total certainty, especially with so much regulatory change still in motion. We must assess the probability of risk, build the best defences we can, and move forward.
This year, let’s stop acting as administrators of the past and start acting as architects of the future.
Your First Step for 2026: Review your top three strategic risks for Q1. Ask yourself:
Do we have a "Safeguards Report" for these?
Do we know exactly what defences stand between us and a critical failure?
If not, that is where the work begins.
Listen in: Does LinkedIn Hate Women, with The Chad & Cheese Podcast

Chad and Joel were kind enough to welcome me onto their podcast to discuss recent work I did with a team shedding light on experiments into plausible gender proxy bias on LinkedIn. Listen in here.
Until next time,
H.A.I.R. (AI in HR)
Putting the AI in HR. Safely.
Here's how H.A.I.R. can help you put the AI in HR:
H.A.I.R. Newsletter: get authoritative, pragmatic, and highly valuable insights on AI in HR directly to your inbox. Subscribe now.
AI Governance QuickScore Assessment: understand your organisation's AI governance maturity in minutes and identify key areas for improvement. Take your QuickScore here.
Advisory Services: implement robust AI Governance, Risk, and Compliance (GRC) with programmes designed for HR and Talent Acquisition leaders. Contact us for a consultation.
Measure Your Team's AI Readiness with genAssess: stop guessing and start measuring your team's practical AI application skills. Discover genAssess.