The Trust Deficit: New UK Data Reveals the Real Barrier to Your AI Strategy
While new data reveals a deep public mistrust of AI, a sweeping set of AI hiring regulations that HR leaders cannot afford to ignore has just come into force.

Hello H.A.I.R Community,
HR and Talent leaders are under immense pressure to integrate AI into their organisations. The promise of efficiency and enhanced capability is compelling. Yet a significant, often overlooked, obstacle stands in our way. It isn't budget, technology or even skills. It is trust.
A new in-depth paper from the Tony Blair Institute and Ipsos, surveying over 3,700 UK adults, lays this bare. The findings confirm what many of us have suspected: there is a profound gap between the government's and vendors' opportunity-focused agenda and the cautious reality of public opinion.
For leaders responsible for the human side of transformation, this data is a critical warning.
The Foundation is Fragile: Mistrust is the Default
The report's headline statistic is a stark one. When asked about barriers to using generative AI, the single biggest obstacle cited by the UK public is a lack of trust in the content AI produces (38%). This is followed closely by concerns over privacy (32%) and ethics (28%). This is not a vague feeling of unease. It is a specific and widespread scepticism that has real consequences.
This apprehension is reflected in how people view AI's broader impact. Almost twice as many UK adults view AI as a risk to the economy (39%) as see it as an opportunity (20%). Perceptions of AI in public services are similarly risk-oriented. With nearly half the public reporting they have not used any generative AI tools in the past year, the default stance is one of caution, not enthusiasm.
So what does this mean for your workplace?
This national sentiment does not stop at the office door. Your employees are a reflection of this public landscape. If they walk in with a baseline of mistrust, any AI tool you introduce, from a new HRIS feature to a generative AI assistant, is already at a disadvantage. Without a foundation of trust, adoption will stall, engagement will be superficial, and the return on your investment will never materialise. You cannot build a successful AI strategy on a foundation of sand.
From Risk to Readiness: A Practical Path Forward
The report is not all bad news. It provides a clear, pragmatic roadmap for building the trust required for successful AI adoption. The data shows that familiarity, confidence and governance are powerful antidotes to fear. Here is how you can apply these insights.
1. Govern Before You Generate
The public is clear: they expect robust governance and oversight from a trusted authority. Research highlighted in the report shows a clear public desire for specific safeguards, including:
Explanations on how AI decisions are made.
Monitoring to check for discrimination.
Meaningful human involvement.
For HR leaders, this is our core remit. Building an AI Governance Framework is not about creating red tape. It is about creating the guardrails that make people feel safe. It is about demonstrating that you have considered the risks of bias, privacy and fairness before a single employee uses a new tool. This proactive governance is the most powerful trust-building exercise you can undertake.
Governance in Action: California's New AI Hiring Rules Are Here
This call for governance is no longer just a finding in a survey. It is now concrete law, and new regulations in California offer a clear preview of what HR and Talent leaders across the UK and elsewhere can expect.
On 1 October 2025, sweeping new amendments to California’s Fair Employment and Housing Act (FEHA) took effect, creating one of the most comprehensive efforts yet to regulate AI in the workplace. The rules define an Automated-Decision System (ADS) so broadly that it covers almost every piece of modern HR technology, including resume screeners, video interview analytics, and personality assessments.
Crucially, the new rules don't just prohibit discrimination; they shift the burden of proof. Employers may now need to demonstrate their "proactive efforts to avoid unlawful discrimination". This means conducting and documenting anti-bias testing becomes a potential legal defence.
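The rules do not prescribe a specific test, but one widely used starting point in US employment analytics is the "four-fifths" (adverse impact) check: compare each group's selection rate against the best-performing group's rate, and treat any ratio below 0.8 as a warning sign. Here is a minimal Python sketch of that check, assuming nothing more than a list of (group, selected) outcomes from a screening tool:

```python
from collections import defaultdict

def adverse_impact_ratios(outcomes):
    """Selection rate per group and its ratio to the highest group's rate.

    outcomes: iterable of (group, selected) pairs, e.g. one pair per
    applicant run through an AI screening tool.
    """
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in outcomes:
        total[group] += 1
        selected[group] += int(was_selected)

    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values()) or 1.0  # guard against all-zero rates
    # Conventional warning threshold: a ratio below 0.8 suggests possible
    # adverse impact and warrants closer statistical review.
    return {g: (r, r / best, r / best < 0.8) for g, r in rates.items()}

# Hypothetical screener output: (group label, shortlisted?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
for group, (rate, ratio, flagged) in adverse_impact_ratios(sample).items():
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f}, flagged={flagged}")
```

A low ratio is not proof of discrimination, but running, documenting and acting on checks like this is precisely the kind of evidence the new burden-of-proof language rewards.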
The regulations also introduce two other critical changes:
Vendor Liability: For the first time, vendors providing recruitment and screening tools can also face exposure for violations, meaning your choice of technology partner is more critical than ever.
Recordkeeping: Employers must now retain all ADS data, including inputs, outputs and settings, for four years (a minimal record sketch follows this list).
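What might "inputs, outputs and settings" look like as a stored record? Here is a purely illustrative Python sketch; the field names are assumptions, not anything the regulations prescribe:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ADSAuditRecord:
    """Hypothetical minimal record for one automated-decision event.

    The field names are illustrative assumptions; the FEHA rules require
    retaining inputs, outputs and settings, not this specific schema.
    """
    candidate_ref: str   # pseudonymous applicant identifier
    tool_name: str       # e.g. a resume screener or video analytics tool
    tool_version: str    # settings can change between versions
    inputs: dict         # what the tool received
    outputs: dict        # scores, rankings or decisions it produced
    settings: dict       # thresholds and configuration at run time
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    retention_years: int = 4  # the FEHA retention period

record = ADSAuditRecord(
    candidate_ref="cand-0042",
    tool_name="cv-screener",
    tool_version="2.3.1",
    inputs={"cv_text_sha256": "..."},
    outputs={"score": 0.71, "shortlisted": True},
    settings={"shortlist_threshold": 0.65},
)
```

The practical point is less the schema than the habit: if a vendor's tool cannot export this information for each decision, the four-year retention duty will be hard to meet.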
Why does this matter to us? California has a long record of setting the pace for technology regulation, and these rules signal a definitive move away from self-policing towards mandatory accountability. For UK leaders, this is the time to get ahead. The questions these regulations raise, about bias auditing, vendor due diligence and data retention, are the exact questions we should be asking of our own processes right now.
2. Bridge the AI Confidence Gap
There is a direct correlation between an employee's confidence in their AI skills and their optimism about the technology. The report found that 66% of people confident in their AI skills expect AI to help with their job, viewing it as a supportive tool. In stark contrast, only 45% of people with lower confidence feel the same way. Confident workers see a collaborator, not a competitor; only 1% of this group believes AI will eliminate their role entirely in the next 12 months.
This confidence is not evenly distributed. The report identifies a significant "confidence gap" in some highly exposed sectors like health, social care and education, where readiness for AI is low despite the high potential for impact.
Training is therefore not just about technical upskilling. It is about building trust. By investing in AI literacy programmes co-developed with employees, you are not just teaching prompts; you are turning anxiety into agency.
3. Context is Everything: Start with Supportive Use Cases
Public acceptance of AI is not universal; it is highly dependent on the use case. The data shows a massive 56-point swing between the highest and lowest acceptance rates depending on the application. This principle is critical for the workplace.
The survey data illustrates this perfectly:
40% of UK adults are comfortable with using AI to personalise training programmes to support employees.
An overwhelming 57% are uncomfortable with using AI to monitor employee performance.
The lesson is clear. To build trust, begin with applications that demonstrably support and develop your people. Use AI to augment their skills, not to scrutinise their work. Early experiences with AI are decisive in shaping long-term attitudes. A positive first impression with a helpful, low-risk tool can build the confidence needed for more ambitious projects later.
Ultimately, this report reinforces the core mission of H.A.I.R. To successfully put the 'AI' in HR, we must first focus on the 'H'. The primary challenge is not technical; it is human. By leading with governance, building confidence through skills, and choosing supportive applications, you can build an AI strategy that is not only effective but also trusted.
Is your recruitment team using AI in a compliant and responsible way? Many teams are adopting tools without guidance, creating risks such as bias and 'black box' decisions.

On Tuesday 8th October (12:30-1:30pm BST), I'll be joining Scede for an exclusive webinar titled 'Responsible AI in Recruitment: AI Compliance for Talent Acquisition Teams' to help you understand the importance of compliance in AI 🚀
Hosted by my good friend Joe Atkinson, Founder of PURPL (part of the Scede family), the session will cover:
How to implement governance as a 'guardrail' for innovation vs a roadblock (remember: the EU AI Act is coming, and it's not just for Europe!)
Why you need to address the 'shadow use' of AI in your organisation
How you can ensure transparency around your AI tooling and avoid costly compliance risks
Here's how H.A.I.R. can help you put the AI in HR:
H.A.I.R. Newsletter: get authoritative, pragmatic, and highly valuable insights on AI in HR directly to your inbox. Subscribe now.
AI Governance QuickScore Assessment: understand your organisation's AI governance maturity in minutes and identify key areas for improvement. Take your QuickScore here.
Advisory Services: implement robust AI Governance, Risk, and Compliance (GRC) with programmes designed for HR and Talent Acquisition leaders. Contact us for a consultation.
Measure Your Team's AI Readiness with genAssess: stop guessing and start measuring your team's practical AI application skills. Discover genAssess.
Thank you for being part of H.A.I.R. I hope this deep dive helps you navigate the complexities of AI in HR with greater confidence and control.
Until next time,
H.A.I.R. (AI in HR)
Putting the AI in HR. Safely.