The Harper v. Sirius XM Lawsuit: What the US Legal Case Means for UK and EU Organisations

Hello H.A.I.R. Community,

This week, a new lawsuit filed in the U.S. District Court for the Eastern District of Michigan has sent a clear message to HR leaders globally: AI bias in hiring is no longer a theoretical risk; it's a litigious reality.

Plaintiff Arshon Harper, an African-American applicant, has filed a class action lawsuit against Sirius XM Radio, alleging that the company's use of AI/ML tools within its iCIMS Applicant Tracking System (ATS) systematically and intentionally discriminated against African-American applicants. This is a landmark case that directly challenges the use of AI in recruitment and serves as a critical wake-up call for organisations operating in the UK and Europe.

Let’s dive into it.

The Allegations: A Breakdown

The lawsuit claims that Sirius XM's AI-driven tools, including candidate matching and shortlisting features, evaluated applicants on data points that act as proxies for race. The plaintiff alleges that these data points, such as educational institutions, employment history, and even zip codes, led to a "disproportionate" rejection rate for qualified African-American candidates. Mr. Harper, a highly qualified IT professional, claims he was rejected for 149 of the 150 positions he applied for, despite meeting or exceeding the qualifications for the roles.

This case is being brought under Title VII of the Civil Rights Act of 1964 and Section 1981 of the Civil Rights Act of 1866, alleging both "disparate treatment" (intentional discrimination) and "disparate impact" (a facially neutral policy that disproportionately harms a protected group).
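For readers less familiar with the US concept, the usual starting heuristic for spotting disparate impact is the EEOC's "four-fifths rule": if one group's selection rate falls below 80% of the highest group's rate, the screening step warrants scrutiny. Here is a minimal sketch with made-up numbers (the complaint itself does not publish selection rates):

```python
# Illustrative four-fifths (80%) rule check with hypothetical numbers.
# Selection rate = candidates passed through / candidates screened, per group.
applicants = {"group_a": 400, "group_b": 400}   # hypothetical applicant counts
selected   = {"group_a": 120, "group_b": 48}    # hypothetical pass-through counts

rates = {g: selected[g] / applicants[g] for g in applicants}
# group_a: 0.30, group_b: 0.12

impact_ratio = min(rates.values()) / max(rates.values())
print(f"Impact ratio: {impact_ratio:.2f}")  # 0.40, well below the 0.80 threshold

if impact_ratio < 0.8:
    print("Potential adverse impact: investigate before relying on this screen.")
```

The four-fifths rule is a red-flag heuristic rather than a legal test, but it is a sensible first number to compute for any automated screening stage.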

Why This Matters to UK & EU HR Leaders

While this is a U.S. lawsuit, the principles and legal risks are directly applicable to HR leaders in the UK and EU. The legal frameworks may differ, but the core issue—algorithmic bias leading to discrimination—is a universal concern, particularly in the context of emerging regulations. The case serves as a stark reminder that as the deployer of a technology, an organisation cannot simply pass the buck to its vendor.

  • The UK's Equality Act 2010: The UK has robust discrimination laws. The Equality Act 2010 prohibits both direct and indirect discrimination. The Harper case's "disparate impact" claim is analogous to indirect discrimination in the UK. If a seemingly neutral hiring algorithm systematically disadvantages candidates based on protected characteristics (such as race), an organisation could face legal challenges. The onus would be on the employer to prove the tool is a "proportionate means of achieving a legitimate aim." Given the Sirius XM allegations, that would be a very difficult case to make.

  • The EU AI Act & GDPR: The EU AI Act, set to come into force, classifies AI systems used in recruitment as "high-risk." This imposes strict obligations on developers and deployers of such tools. These obligations include:

    • Governance & Risk Management: Organisations must implement a robust risk management system to identify, analyse, and mitigate risks, including bias.

    • Technical Documentation: Companies must maintain detailed technical documentation and a log of system activities to demonstrate compliance and transparency.

    • Human Oversight: The Act mandates that AI systems must be designed in a way that allows for meaningful human oversight.

    • Data Governance: The data used to train, validate, and test the AI system must meet quality criteria and be examined for possible biases, precisely the issue at the heart of the Harper allegations.

The Harper lawsuit is a real-world example of the risks the EU AI Act is designed to prevent. Penalties under the Act are severe: fines of up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations (prohibited practices), and up to €15 million or 3% for breaches of the high-risk obligations described above.
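The Act does not prescribe a log format, but a useful mental model for the documentation, logging, and human-oversight duties is a per-decision audit record. A minimal, hypothetical sketch (the schema and field names are illustrative, not taken from the Act):

```python
# Hypothetical decision-log record for AI-assisted screening.
# The EU AI Act requires logging and human oversight for high-risk systems,
# but does not mandate this schema; it is an illustrative sketch only.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ScreeningDecision:
    candidate_id: str        # pseudonymised reference, not raw PII
    requisition_id: str
    model_version: str       # which model/configuration produced the score
    ai_recommendation: str   # e.g. "advance" / "reject"
    ai_score: float
    human_reviewer: str      # who exercised oversight
    final_decision: str      # human-confirmed outcome
    override: bool           # did the human depart from the AI output?
    rationale: str           # free-text justification, auditable later
    timestamp: str

record = ScreeningDecision(
    candidate_id="cand-0042",
    requisition_id="req-2025-117",
    model_version="match-engine-3.1",
    ai_recommendation="reject",
    ai_score=0.31,
    human_reviewer="recruiter-7",
    final_decision="advance",
    override=True,
    rationale="Score penalised a career gap; experience meets the role spec.",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record), indent=2))
```

A record like this gives you two things at once: evidence that a human meaningfully reviewed each rejection, and a dataset you can later audit for bias.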

What You Can Do Now: A Pragmatic Checklist

The key takeaway is that an organisation cannot simply outsource the risk to a vendor. The responsibility for fair and non-discriminatory hiring practices remains with the employer.

  1. Demand Transparency: When evaluating a new HR tech vendor, ask for evidence of bias audits. Do not accept vague assurances. Request a detailed breakdown of how the AI tool works and what data points it uses. Ask them if they've conducted an Algorithmic Impact Assessment.

  2. Conduct Due Diligence: Go beyond the vendor's marketing material. Check for any legal challenges or public complaints related to their tools. You are responsible for who you partner with.

  3. Implement Human Oversight: Do not allow a hiring algorithm to make final, automated decisions. Ensure there is a human in the loop to review and validate outcomes, especially at the point of candidate rejection. This is a key requirement of the EU AI Act.

  4. Audit Your AI: Regularly audit the performance of your AI tools to ensure they are not creating a "disparate impact". Analyse your application and hiring data for significant differences in outcomes across demographic groups (see the sketch after this list). This is a proactive measure to detect and correct bias before it becomes a legal liability.
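As a concrete starting point for item 4, here is a minimal sketch of such an audit over an ATS export, reusing the four-fifths heuristic from earlier. The file name, column names, and stage values are hypothetical; adapt them to your own system, and remember that demographic data should be self-reported, optional, and handled in line with GDPR:

```python
# Sketch of a periodic adverse-impact audit over ATS export data.
# Assumes a CSV with one row per application; column names are hypothetical.
import pandas as pd

df = pd.read_csv("ats_export.csv")  # columns: candidate_id, ethnicity, stage

# Selection rate per group at the shortlisting stage.
summary = (
    df.assign(shortlisted=df["stage"].isin(["shortlist", "interview", "offer"]))
      .groupby("ethnicity")["shortlisted"]
      .agg(applicants="size", selected="sum")
)
summary["rate"] = summary["selected"] / summary["applicants"]

# Four-fifths rule: compare each group's rate to the highest group's rate.
summary["impact_ratio"] = summary["rate"] / summary["rate"].max()
flagged = summary[summary["impact_ratio"] < 0.8]

print(summary.round(3))
if not flagged.empty:
    print("\nGroups below the 0.8 threshold; review this screening step:")
    print(flagged.round(3))
```

Run a check like this at every automated stage of the funnel, not just the final hire decision, since bias introduced at shortlisting never becomes visible in offer data.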

The Sirius XM lawsuit is a major moment for AI governance in HR. It's a clear signal that the time for cautious adoption is over. The time for responsible, compliant, and legally defensible adoption is here.

Upcoming AI Training for Recruiters & HR Leaders
I’m running two practical 3-hour workshops this month designed to move you beyond “playing with ChatGPT” into using AI responsibly, effectively, and with confidence:

  • The AI-Ready Recruiter – 26 Aug

  • AI Readiness for HR Leaders – 28 Aug

You’ll learn the PRIME Prompting Framework, how to safeguard against bias and data risks, and how to embed AI into your processes the right way.

👉 Seats are limited — secure your spot here (group booking discounts available - reply to this email to enquire).

Here's how H.A.I.R. can help you put the AI in HR:

  1. H.A.I.R. Newsletter: get authoritative, pragmatic, and highly valuable insights on AI in HR directly to your inbox. Subscribe now.

  2. EU AI Act QuickScore Assessment: understand your organisation's EU AI Act Readiness in minutes and identify key areas for improvement. Take your QuickScore here.

  3. Advisory Services: implement robust AI Governance, Risk, and Compliance (GRC) with our 12-month programme designed for HR and Talent Acquisition leaders. Contact us for a consultation.

  4. H.A.I.R. Training Courses: enhance your team's AI literacy and readiness with our practical training programmes. Explore courses.

  5. Measure Your Team's AI Readiness with genAssess: stop guessing and start measuring your team's practical AI application skills. Discover genAssess.

Thank you for being part of H.A.I.R.

Until next time,

H.A.I.R. (AI in HR) 

Putting the AI in HR. Safely.
