A New Home for H.A.I.R.

A new home, new training, and the hard truth about AI transcription

Welcome to the very first edition of the H.A.I.R. newsletter on our new home, Beehiiv.

Some of you may know that we've moved from our previous community on nas.io. I've made this switch to bring you a more robust, focused, and valuable experience directly in your inbox. Beehiiv will allow me to deliver more of what you need - practical, no-nonsense advice for navigating the complex world of AI in HR - without the noise.

So, what isn't changing? The mission. H.A.I.R. will continue to be your guide to putting AI into HR safely, fairly, and legally. We'll keep cutting through the hype to focus on what really matters: empowering the human side of this transformation.

To kick things off, I have something special.

A First for H.A.I.R.: Public AI Masterclasses

Something I don't normally do...

My three-hour AI workshops are usually reserved for private corporate teams. But after repeated requests, I'm opening up my calendar for the very first time, with a limited number of public sessions this August.

These aren't one-hour overview webinars. They are comprehensive, capability-building sessions designed for individual HR and Talent Acquisition professionals. To ensure a high-quality, interactive experience, seats are strictly limited to just 20 per workshop.

Choose the track that's right for you:

Track 1: For Recruiters & TA Professionals: The AI-Powered Recruiter Workshop
This is a practical deep dive into the "how". We'll move beyond basic prompting to build the skills you need to work faster, smarter, and more strategically. You will leave having mastered the PRIME framework in a hands-on session. (Dates: 5th & 26th August)

Track 2: For HR Directors & People Leaders: The AI Readiness Workshop
This is a strategic session focused on the "why" and "what". We'll cover your role as the "Ethical Guardian", build "Guardrails" for your organisation, and develop a responsible AI roadmap. (Dates: 7th & 28th August)

If you're ready to move beyond the hype and build real, practical AI skills, this is your chance. Places are offered on an approval basis to ensure the right mix of professionals.

AI Governance & Risk: The Hidden Dangers in Your Interview Notes

Let's talk about interview transcription tools.

On the surface, they're brilliant. They promise to free up your team from hours of note-taking, creating perfect, searchable records of every candidate conversation. But as with any powerful AI tool, what you see isn't always what you get. The convenience can mask significant risks to fairness, compliance, and even your organisation's reputation.

The core of the problem is something called AI Hallucination. This is where the model doesn't just mishear a word; it fabricates entire phrases or sentences that were never said.

Think of it as an over-eager assistant who, instead of writing "I don't know", decides to invent an answer to make their notes look complete. The consequences, however, are far more serious.
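
If you have a technical colleague handy, this failure mode is easy to demonstrate. The snippet below is a minimal sketch, assuming the open-source whisper package from OpenAI: it transcribes five seconds of pure silence. A well-behaved configuration returns an empty transcript; a hallucination-prone one will cheerfully invent text to fill the gap.

```python
# Demo: what does a speech-to-text model "hear" in pure silence?
# Minimal sketch using the open-source openai-whisper package
# (pip install openai-whisper).
import numpy as np
import whisper

model = whisper.load_model("base")
silence = np.zeros(16000 * 5, dtype=np.float32)  # 5 seconds of silence at 16 kHz
result = model.transcribe(silence)
print(repr(result["text"]))  # empty string = honest; invented phrases = hallucination
```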

Risk 1: When Efficiency Creates Defamation

And the scale of the problem is startling. In one widely reported academic study of OpenAI's Whisper speech-to-text model, a staggering 38% of its fabrications included explicit harms.

Imagine an innocuous conversation being transcribed with false, shocking phrases like "blood-soaked stroller" or "terror knife". In a hiring context, this "algorithmic malpractice" could pollute a candidate's file with false, defamatory information, leading to an unfair and legally indefensible rejection.

Risk 2: A System Biased by Silence

The same research revealed another critical issue: the tool was more likely to hallucinate when a person spoke with longer pauses. In practice, this meant that speakers with aphasia, a language disorder that often produces longer pauses in speech, had a significantly higher hallucination rate than the control group.

The implications are clear. Such a tool could systematically disadvantage candidates with certain speech patterns, including those with disabilities, non-native English speakers, or even just individuals who pause to think. This isn't just unfair; it could be a direct violation of disability discrimination laws.

Risk 3: Not All AI is Created Equal

Perhaps the most crucial takeaway is that this is not a universal AI problem. The same study found no comparable hallucinations in competing systems from Google, Amazon, or Microsoft in its tests.

This highlights the ultimate GRC blind spot: accepting a vendor's marketing claims at face value. Without asking the right questions, you are flying blind. In fact, there are now public leaderboards that track and compare the hallucination rates of various large language models, making it easier to see which ones are more factually consistent. One of the leading examples is the Vectara Hallucination Leaderboard, which is freely available on GitHub.

How to Protect Your Organisation: A 3-Step Mitigation Framework

This isn't about abandoning the technology. It's about implementing the right guardrails.

  1. Mandate a "Human-in-the-Loop". The golden rule. Never, ever use a fully automated transcript or summary for a high-stakes decision. A human must review and verify the output before it's used to evaluate a candidate (see the sketch after this list for one way to make that review manageable).

  2. Scrutinise Your Vendor. Go beyond the sales pitch. Ask vendors which specific AI models they use. Demand to see independent audit data on their performance, particularly concerning fairness and hallucination rates. If they can't provide it, that's a major red flag.

  3. Establish Candidate Review Rights. Foster transparency and create a simple process for candidates to review their transcribed data and request corrections. This not only builds trust but also serves as a powerful, real-time error-checking mechanism.
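
For teams with technical support on hand, that human-in-the-loop rule can be partially automated as a triage step. Here's a minimal sketch, again assuming the open-source whisper package; the filename is hypothetical, and the thresholds are borrowed from Whisper's own default decoding heuristics rather than being validated cut-offs, so treat them as a starting point, not a standard.

```python
# A minimal human-in-the-loop triage sketch using the open-source
# openai-whisper package (pip install openai-whisper).
import whisper

model = whisper.load_model("base")
result = model.transcribe("interview_audio.mp3")  # hypothetical file

for seg in result["segments"]:
    needs_review = (
        seg["avg_logprob"] < -1.0          # low model confidence
        or seg["no_speech_prob"] > 0.6     # likely silence: prime hallucination territory
        or seg["compression_ratio"] > 2.4  # repetitive output: another warning sign
    )
    flag = "REVIEW" if needs_review else "ok"
    print(f"[{flag}] {seg['start']:6.1f}-{seg['end']:6.1f}s {seg['text']}")
```

The point of a script like this isn't to replace the human reviewer; it's to direct their attention to the segments most likely to contain fabrications, so the mandatory review in step 1 is fast enough that people actually do it.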

Automated tools offer a compelling future, but only if we manage them responsibly. By embedding this kind of proactive governance into your process, you can ensure that your use of AI is fair, compliant, and genuinely effective.

That's all for this first edition. I'm excited to continue this journey with you from our new home. As always, feel free to reply to this email with your thoughts or questions.

All the best,

H.A.I.R. (AI in HR) 

Putting the AI in HR. Safely.
