
The AI Transparency Trap: Why Disclosing AI Use Could Be Hurting Your Team

We tell our people to be transparent about using AI. New research suggests this common-sense advice might be backfiring.

Hello H.A.I.R. Community,

Most AI governance policies being written today share a common, sensible-sounding theme: transparency. The popular advice, from ethics committees to the press, is straightforward: if you use AI for your work, you should disclose it.

It feels right. It feels ethical.

Here's the problem: new research suggests this well-intentioned advice could be actively eroding the one thing we need to manage this transformation: trust.

A comprehensive new paper, "The transparency dilemma: How AI disclosure erodes trust", presents a stark challenge to our current thinking. Across 13 different experiments, researchers consistently found that individuals who disclose their AI usage are trusted less than those who do not.

This isn't a minor dip. The "trust penalty" was found across a huge range of professional tasks that HR leaders oversee every single day:

  • Writing job applications

  • Creating performance reviews

  • Conducting data analysis

  • Composing routine work emails

In each scenario, the actor who was transparent about using AI was perceived as less trustworthy. This creates a serious dilemma for leaders and organisations: we are pushing for transparency, but we may be penalising the very people who follow our guidance.

Why Does This Happen? It's Not Just 'Algorithm Aversion'

The researchers' first instinct was to check whether this was simply "algorithm aversion", the well-documented bias people hold against judgments and outputs produced by machines rather than humans.

But the data shows it's something deeper. The paper argues the root cause is a reduction in perceived legitimacy.

In simple terms, we all have "taken-for-granted expectations" about how professional work gets done. We expect human effort, judgment and reasoning. When someone discloses they used AI, they are signalling that they may have "diminished or even replaced human agency".

This disclosure shatters the evaluator's assumptions. The work is suddenly perceived as less "proper, or appropriate", which in turn causes trust to erode.

Three Nuances Every HR Leader Must Understand

This is where the findings get incredibly practical for anyone writing an AI playbook.

1. Framing the Disclosure Doesn't Help. This is the most critical finding. The researchers tested different ways of disclosing AI use. The trust penalty remained, even when the person added qualifying phrases like:

  • "A human has reviewed and revised the work"

  • "AI was used only for proofreading"

  • "This disclosure is in the spirit of transparency"

The takeaway: Your policy can't just be "it's fine if you disclose it." The very act of disclosure, regardless of framing, is what triggers the trust penalty.

2. Disclosure is Bad. Exposure is Worse. The study confirms what many employees instinctively fear. While self-disclosing AI use reduces trust, being exposed by a third party for using it is far more damaging. This puts employees between a rock and a hard place and creates a strong incentive to hide their AI use, making governance impossible.

3. The Human-AI Blend Creates Ambiguity. In a fascinating twist, the study found that a human disclosing AI use was trusted less than an autonomous AI agent performing the same task. Why? The paper suggests that blending human and AI roles creates "role ambiguity". It's unclear who is responsible. This ambiguity makes us uneasy and further erodes trust.

Martyn's Take: How to Build a Defensible Policy

This research is a perfect example of why we must be pragmatic, not just idealistic, in our AI governance. These findings show that a simple, blanket "thou shalt disclose" policy is naive and likely to backfire.

So, what do we do?

  1. Don't Panic or Ban. This research does not mean we should encourage people to hide their AI use. The 'exposure' finding shows that's a much bigger risk.

  2. Focus on Accountability, Not Just Disclosure. The real issue here is the "ambiguity of responsibility". Our policies should reflect this. The focus must shift from what tool was used to who is accountable for the final output. A manager who uses AI to help draft a performance review is still 100% accountable for its fairness, accuracy and delivery. This aligns with the principles of human oversight in the EU AI Act.

  3. Be Specific. Where Does Disclosure Actually Matter? A one-size-fits-all policy is lazy. Does a developer need to disclose using AI to debug code internally? Probably not. Does a marketing team need to disclose that an image in a public advert is AI-generated? Almost certainly. HR's role is to help the organisation define these "red line" scenarios.

  4. Lead the Normalisation. The study suggests this trust penalty exists because AI use violates current social norms. The way to fix this is to change the norms. HR can lead this by normalising AI as a tool for productivity and augmentation. The paper itself notes that in environments where AI is seen as "collectively valid," the trust-related consequences are minimised.

Ultimately, this paper is a warning against performative transparency. A good AI policy isn't about forcing people to fill out a "did you use AI?" checkbox. It's about building a new set of expectations where human accountability is the primary focus, regardless of the tools used to get the work done.

Are you an HR or TA leader based in the Nordics? Have we got the webinar for you!

The pressure to adopt AI in HR is immense, but how do you separate real innovation from high-risk hype?

Many Nordic organisations are buying tools without asking the right questions, mistakenly assuming compliance is solely the vendor's problem. With the EU AI Act's obligations now phasing in, this approach is no longer defensible.

Join Alexandra M. Davis and me for this pragmatic webinar that cuts through the noise to give HR and TA leaders a playbook for safe AI adoption.

The Crisis of Authenticity in Hiring

We're facing a genuine crisis of authenticity in hiring. Candidates are using AI to fake their way through interviews. Companies are posting ghost jobs they never intend to fill. And TA teams are trying to find real talent inside a flood of AI-generated applications.

When AI can write the CV and also screen it, what's real anymore?

I'm sitting down for a Recruiting Real Talk with Steve Bartel (CEO of Gem) and Melissa Grabiner (Job Search Coach) to tackle this breakdown of trust. We'll be discussing real fraud stories, how employer practices are damaging the market, and what hiring looks like when my AI talks to your AI.

Join us on November 5th at 1 p.m. ET / 6 p.m. GMT for what will be a practical conversation, not a presentation.

Want to de-risk your AI strategy?

This kind of complex, socio-technical challenge is exactly what I help HR leaders navigate. If you're building your AI governance framework and want to avoid the hidden traps, let's talk.

You can learn more about my fractional AI Governance Advisory services here.

Until next time,

H.A.I.R. (AI in HR)

Putting the AI in HR. Safely.

Here's how H.A.I.R. can help you put the AI in HR:

  1. H.A.I.R. Newsletter: get authoritative, pragmatic, and highly valuable insights on AI in HR directly to your inbox. Subscribe now.

  2. AI Governance QuickScore Assessment: understand your organisation's AI governance maturity in minutes and identify key areas for improvement. Take your QuickScore here.

  3. Advisory Services: implement robust AI Governance, Risk, and Compliance (GRC) with programmes designed for HR and Talent Acquisition leaders. Contact us for a consultation.

  4. Measure Your Team's AI Readiness with genAssess: stop guessing and start measuring your team's practical AI application skills. Discover genAssess.
