Algorithmic Suppression: Is LinkedIn's Algorithm Biased Against Women? A Legal and Technical Analysis
A recent experiment on LinkedIn showed men getting more reach than women on identical posts. I was asked to look into it and found that bias is highly plausible.

Hello H.A.I.R. Community,
A few weeks ago, a simple experiment on LinkedIn sparked a critical conversation.
It began with a post by Dorothy Dalton, building on an experiment by Jane Evans and Cindy Gallop. They tested what would happen if men and women posted identical content at the same time. The results were jarring: in one test, two male participants with a combined following of ~9,400 saw their posts achieve significantly more reach than two female participants with a combined following of over 154,000.
This raised a question that goes far beyond just "culture or code." Is the platform algorithmically suppressing women's voices?
The answer is that the claim is highly plausible. The cause is likely not direct, intentional discrimination, but a more insidious problem: proxy bias. And this isn't just a technical flaw; it's a significant liability risk under new EU regulations and existing UK law.
The "How": From "Gender Bias" to "Proxy Bias"
The algorithm isn't coded to IF (gender == 'female') THEN (demote_post). That's not how this works.
Instead, the algorithm is coded to IF (content == 'high-quality professional') THEN (promote_post).
The problem is how the machine learned to define "high-quality professional." It learned from historical data, and that data reflects a world of existing, systemic, and often unconscious biases. The algorithm has, in effect, learned a narrow, historically male-centric model of what "professional" looks like.
This is proxy bias: the algorithm isn't penalizing gender directly; it's penalizing neutral characteristics that are correlated with gender.
My research identified three likely proxies for this bias:
Topic Bias: The algorithm may be trained to favor "hard" business topics (e.g., tech, finance, sales) over "soft" topics (e.g., Diversity, Equity, and Inclusion, workplace culture, harassment, or burnout) that, while critical, are more frequently discussed by women.
Language Bias: Research shows that professional language is heavily gendered. Men are often described with "agentic" words ("driven," "strategic," "leader"), while women are described with "communal" words ("collaborative," "supportive," "helpful"). If the algorithm has learned that "agentic" language equals "authority," it will systematically down-rank content that uses a "communal" style.
Data Bias: The algorithm may penalize career patterns that don't fit a traditional, linear model. This includes career breaks, which are taken far more frequently by women, often for caregiving.
The outcome is discriminatory, even if the intent is neutral.
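To make the mechanism concrete, here is a minimal Python sketch. The feature names, weights, and data are invented for illustration and are not LinkedIn's actual model; the point is only that a scorer which never receives gender as an input can still produce a gendered gap when its inputs correlate with gender.

from statistics import mean

# Hypothetical weights a model might have learned from historically biased engagement data.
LEARNED_WEIGHTS = {
    "hard_business_topic": 0.6,    # tech, finance, sales
    "agentic_language": 0.3,       # "driven", "strategic", "leader"
    "linear_career_history": 0.1,  # no career breaks
}

def rank_score(post):
    # The scorer uses only "neutral" features; gender is never an input.
    return sum(weight * post[feature] for feature, weight in LEARNED_WEIGHTS.items())

# Synthetic posts of identical quality, with feature values that track the gendered
# patterns described above (purely illustrative data).
posts = [
    {"author_gender": "M", "hard_business_topic": 1, "agentic_language": 1, "linear_career_history": 1},
    {"author_gender": "M", "hard_business_topic": 1, "agentic_language": 1, "linear_career_history": 1},
    {"author_gender": "F", "hard_business_topic": 0, "agentic_language": 0, "linear_career_history": 0},
    {"author_gender": "F", "hard_business_topic": 1, "agentic_language": 0, "linear_career_history": 1},
]

for gender in ("M", "F"):
    scores = [rank_score(p) for p in posts if p["author_gender"] == gender]
    print(gender, round(mean(scores), 2))  # M 1.0 vs F 0.35: a disparate outcome with no explicit gender rule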
The Law: Where "Bias" Becomes a Legal Liability
The original conversation correctly identified that this has legal implications. Our research found that the legal frameworks in the UK and EU are robust and applicable.
1. The UK's Equality Act 2010
One commenter, Susannah Walker, was spot on. The UK Equality Act 2010 is immediately applicable. The law is "technology-neutral"—it doesn't care if a human or an algorithm made the decision.
The relevant concept is "indirect discrimination." This is when a seemingly neutral policy (the algorithm's ranking logic) is applied to everyone but puts a group with a protected characteristic (sex) at a particular disadvantage.
A claimant wouldn't need to prove how the algorithm works, only that it produces a discriminatory outcome. The burden of proof would then shift to the platform to prove its algorithm is a "proportionate means of achieving a legitimate aim."
2. The EU's Dual Approach: The AI Act vs. The Digital Services Act
This is the most critical finding. Dorothy's question about the EU AI Act was exactly right in spirit, but it led to a more nuanced discovery. The EU is tackling this from two different angles.
The EU AI Act (For Hiring): The AI Act applies to "high-risk" systems. As Dorothy suspected, this absolutely covers LinkedIn's hiring tools—the "ATS" functions like LinkedIn Recruiter that sort, rank, and recommend candidates. For these tools, LinkedIn must show that its training data is non-discriminatory and that robust human oversight is in place.
The Digital Services Act (DSA) (For the Content Feed): This is the actual law that governs the newsfeed. The "suppression" problem falls directly under the DSA. LinkedIn is a "Very Large Online Platform" (VLOP) and is legally required to assess and mitigate "systemic risks to fundamental rights." Gender discrimination is a textbook example of such a risk.
This means LinkedIn is facing legal requirements for fairness on two fronts: the AI Act for its paid recruiting products and the DSA for its public content feed.
The "Smoking Gun": A Problem of Priority, Not Capability
This may not even be an unsolved problem for LinkedIn. The most powerful finding from our research is the stark contrast in the company's own actions.
For its Recruiter Tool (the "AI Act" problem): Our research found LinkedIn has published detailed, peer-reviewed, academic-level papers on how it measures and fixes gender bias. It has developed "fairness-aware re-ranking" frameworks. This proves it has the technical capability to solve this.
For its Content Feed (the "DSA" problem): In contrast, LinkedIn's public statements about the feed (like its "Mythbusting" series) are high-level, non-technical, and offer no such evidence of a fairness framework being applied.
This disparity is the story. The issue doesn't appear to be a lack of capability on LinkedIn's part—they are an industry leader in this. It appears to be a lack of priority and transparency in applying those same fairness principles to the public content feed.
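For readers unfamiliar with the technique, "fairness-aware re-ranking" generally means post-processing a ranked list so that, at every cut-off point, each group's share of results stays close to a target distribution. The Python below is a simplified greedy sketch of that general idea, using hypothetical data; it is not LinkedIn's published implementation.

import math
from collections import defaultdict

def rerank(items, target_shares):
    # items: list of (item_id, group, relevance), highest relevance ranked first.
    # target_shares: dict mapping group -> minimum desired share of results.
    remaining = sorted(items, key=lambda x: x[2], reverse=True)
    result, counts = [], defaultdict(int)
    while remaining:
        k = len(result) + 1
        # Groups that would fall below their floor at this position if not picked now.
        needed = [g for g, share in target_shares.items() if counts[g] < math.floor(share * k)]
        pick = next((it for it in remaining if it[1] in needed), None) if needed else None
        if pick is None:
            pick = remaining[0]  # no constraint is binding, so take the most relevant item
        remaining.remove(pick)
        counts[pick[1]] += 1
        result.append(pick)
    return result

candidates = [("a", "M", 0.95), ("b", "M", 0.93), ("c", "M", 0.91),
              ("d", "F", 0.90), ("e", "F", 0.88), ("f", "M", 0.85)]
print(rerank(candidates, {"M": 0.5, "F": 0.5}))  # interleaves the groups instead of listing all "M" items first

A greedy, floor-based correction like this keeps items in relevance order within each group while preventing any prefix of the list from being dominated by a single group.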
From Conversation to Accountability
The anecdotes that started this conversation are not just "feelings"; they are plausible indicators of a real, systemic problem.
The original question was "culture or code." The answer, it seems, is that the code has learned our biased culture.
Now, new laws like the DSA in the EU and established laws like the UK's Equality Act provide the legal power to demand better. This algorithmic suppression is no longer just a community concern. It is a core compliance issue.
The question for LinkedIn is no longer if they can address this, but when they will—and whether regulators will use their new powers to make them.
Are you an HR or TA leader based in the Nordics? Have we got the webinar for you!
The pressure to adopt AI in HR is immense, but how do you separate real innovation from high-risk hype?
Many Nordic organisations are buying tools without asking the right questions, mistakenly thinking compliance is solely the vendor's problem. With the EU AI Act on the horizon, this approach is no longer defensible.
Join Alexandra M. Davis and me for this pragmatic webinar that cuts through the noise to give HR and TA leaders a playbook for safe AI adoption.
As you know, my core focus is on AI governance, risk and compliance. It’s the single biggest hurdle for most HR and TA leaders. That's why I'm speaking on an upcoming JobSync Roundtable: AI & Compliance for 2026. I'll be joining Bennett Sung and Tyler Lawrence this week on October 29th at 1pm EDT / 5pm GMT to provide a clear, practical roadmap for navigating the complex web of emerging hiring laws, bias audits and vendor management. If you are responsible for making your organisation's hiring strategy not just effective but also fair and legally defensible, this is an essential session.
The Crisis of Authenticity in Hiring
We're facing a genuine crisis of authenticity in hiring. Candidates are using AI to fake their way through interviews. Companies are posting ghost jobs they never intend to fill. And TA teams are trying to find real talent inside a flood of AI-generated applications.
When AI can write the CV and also screen it, what's real anymore?
I'm sitting down for a Recruiting real talk with Steve Bartel (CEO of Gem) and Melissa Grabiner (Job Search Coach) to tackle this breakdown of trust. We'll be discussing real fraud stories, how employer practices are damaging the market and what hiring looks like when my AI talks to your AI.
Join us on November 5th at 1 p.m. ET / 6 p.m. GMT for what will be a practical conversation, not a presentation.
Here's how H.A.I.R. can help you put the AI in HR:
H.A.I.R. Newsletter: get authoritative, pragmatic, and highly valuable insights on AI in HR directly to your inbox. Subscribe now.
AI Governance QuickScore Assessment: understand your organisation's AI governance maturity in minutes and identify key areas for improvement. Take your QuickScore here.
Advisory Services: implement robust AI Governance, Risk, and Compliance (GRC) with programmes designed for HR and Talent Acquisition leaders. Contact us for a consultation.
Measure Your Team's AI Readiness with genAssess: stop guessing and start measuring your team's practical AI application skills. Discover genAssess.
Until next time,
H.A.I.R. (AI in HR)
Putting the AI in HR. Safely.
