The AI Metric You've Never Heard Of (But Need to Understand)
A technical term called 'FLOPs' is about to become crucial for your HR AI compliance. Here’s what you need to know and what to ask your vendors.

Hello H.A.I.R. Community,
As HR leaders, we are increasingly reliant on AI to help find, assess, and manage our people. From recruitment screening tools to performance analytics, these systems promise greater efficiency and insight. But with new capabilities come new responsibilities, and the EU AI Act is formalising them.
Many of the AI systems used in HR are already classified as ‘high-risk’ under the Act. If those systems are built on powerful general-purpose AI (GPAI) models like GPT-4/5 or Claude, the compliance chain gets more complex.
There’s a technical term at the heart of this that you need to get comfortable with: FLOPs. It sounds complex, but understanding it is key to de-risking your AI strategy and holding your technology vendors to account.
Let’s break it down.
What on Earth are FLOPs and Why Do They Matter?
FLOPs, or Floating-Point Operations, are simply a count of the total number of calculations performed to train an AI model. Think of it like the engine size of a car or the tonnage rating of a cargo ship. It’s a proxy for the model's scale, capability, and potential for risk.
Under the EU AI Act, the bigger the FLOP count, the more stringent the rules. Regulators work on the assumption that models trained with immense computational power have a greater capacity to create systemic risks, whether through sophisticated bias, disinformation, or other unintended consequences. For HR, this "tonnage rating" directly impacts your compliance burden.
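If you want a feel for the arithmetic, a widely used rule of thumb from the AI research literature estimates training compute as roughly six times the number of model parameters times the number of training tokens. The short sketch below applies that rule to a hypothetical model; the figures are illustrative, not taken from any vendor's documentation.

```python
# Back-of-envelope estimate of training compute using the common
# "6 x parameters x training tokens" rule of thumb for dense models.
# All figures are illustrative, not vendor-reported numbers.

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total FLOPs used to train a model of a given size."""
    return 6 * parameters * training_tokens

# Hypothetical example: a 20-billion-parameter model trained on 2 trillion tokens.
flops = estimated_training_flops(parameters=20e9, training_tokens=2e12)
print(f"Estimated training compute: {flops:.1e} FLOPs")  # ~2.4e+23 FLOPs
```

Even that modest example lands above the first threshold discussed below, which is why so many commercial models are caught by these rules.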
The Two FLOP Thresholds You Need to Know
The AI Act sets two critical thresholds for GPAI models. Understanding them tells you what obligations your vendors have and what documentation you should be demanding.
The Transparency Tier (≥10²³ FLOPs): Once a GPAI model is trained using compute at or above this level, its providers (like OpenAI or Meta) must become more transparent. They are required to provide clear technical documentation covering the training data, the model's capabilities and limitations, and its risks. For you, this means your vendor should be able to supply this documentation to prove the underlying model is compliant.
To be clear, this is not a high bar. Even a relatively modest open-source model (like a 20-billion-parameter model) triggers this transparency threshold. Other models expected to fall in this category include OpenAI's GPT-OSS 120b, Kimi K2, and DeepSeek R1.
The Systemic-Risk Tier (≥10²⁵ FLOPs): This is the heavyweight category. Models trained with this immense level of compute are presumed to pose systemic risks. Their providers face much stricter obligations including mandatory adversarial testing, comprehensive risk assessments, cybersecurity safeguards, and incident reporting.
To put this into perspective, this tier includes virtually all current and near-future state-of-the-art models. Any HR tool built on models such as OpenAI's GPT-4o (and likely GPT-5 models), Google's larger Gemini models, Meta's Llama 3.1 405b, Anthropic's recent Claude models, and xAI's Grok models will fall into this systemic-risk category.
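To see the two tiers side by side, here is a minimal sketch that compares a model's reported training compute against the thresholds above. The threshold values come from the Act and its guidance; the model figures are invented for illustration, and none of this is legal advice.

```python
# Illustrative comparison against the two EU AI Act compute thresholds.
# Threshold values are from the Act and its guidance; model figures are invented.

TRANSPARENCY_THRESHOLD = 1e23   # >= 10^23 FLOPs: transparency documentation tier
SYSTEMIC_RISK_THRESHOLD = 1e25  # >= 10^25 FLOPs: presumed systemic risk

def gpai_tier(training_flops: float) -> str:
    if training_flops >= SYSTEMIC_RISK_THRESHOLD:
        return "Systemic-risk tier: adversarial testing, risk assessment, incident reporting"
    if training_flops >= TRANSPARENCY_THRESHOLD:
        return "Transparency tier: documentation of training data, capabilities, limitations"
    return "Below the GPAI compute thresholds (other AI Act duties may still apply)"

print(gpai_tier(2.4e23))  # the modest model from the earlier estimate
print(gpai_tier(5e25))    # an invented figure for a frontier-scale model
```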

If your HR software uses a model in this top tier, the knock-on effect is significant. Your organisation is now part of a compliance chain linked to a systemic-risk AI system. This elevates the level of due diligence required from you.
Your Action Plan: From Procurement to Governance
So how do you translate this technical detail into a practical action plan? It comes down to three core activities.
1. Enhance Your Vendor Due Diligence
You must work with your procurement and legal teams to update your vendor questioning. Start by asking:
Which specific GPAI model is embedded in your system?
What is its training compute scale in FLOPs?
Can you provide the technical documentation required under the EU AI Act to demonstrate compliance, including bias testing results and data sources?
Your contracts must legally require vendors to comply with their AI Act obligations and notify you of any incidents related to the model they use.
2. Bolster Your Internal Risk Management
Remember that most AI use cases in HR (like recruitment and selection) are classified as high-risk by default, regardless of the underlying model's size. This means your team must:
Maintain a dedicated risk management system for your use of AI.
Ensure meaningful human oversight is built into every process. Relying on fully automated hiring or promotion decisions is not a defensible position.
Continuously monitor the system's outcomes for any evidence of bias or discrimination across protected characteristics.
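As an illustration of what "continuously monitor" can look like in practice, here is a minimal sketch of a selection-rate comparison across groups. The four-fifths (80%) ratio used to flag gaps is a common heuristic from employment-testing practice rather than an EU AI Act requirement, and every figure below is invented.

```python
# Minimal sketch: compare selection rates across monitored groups.
# The 0.8 ("four-fifths") flag is a common heuristic, not an EU AI Act rule.
# All figures are invented for illustration.

selected = {"group_a": 45, "group_b": 28}
applicants = {"group_a": 100, "group_b": 90}

rates = {group: selected[group] / applicants[group] for group in applicants}
highest_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest_rate
    status = "review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio vs. highest {ratio:.2f} -> {status}")
```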
3. Establish Clear Governance and Compliance
Accountability for AI risk must be clear. This is not just an IT or procurement issue; it’s a core HR responsibility.
Board-Level Reporting: The use of high-risk or systemic-risk AI systems in HR should be reported through your organisation's formal risk governance channels, much like you would for sensitive personal data under GDPR.
Audit Readiness: Keep meticulous records. Document which GPAI system and version was used for which process, the documentation your vendor provided, and the results of your internal bias and fairness assessments.
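For teams that want a starting point, here is a minimal sketch of the fields such an audit record might capture. The field names and values are my own illustration, not a prescribed format.

```python
# A minimal sketch of an audit record for one AI-assisted HR process.
# Field names and values are illustrative; adapt them to your governance framework.
from dataclasses import dataclass, field

@dataclass
class HrAiAuditRecord:
    process: str                    # e.g. "CV screening for graduate intake"
    vendor_tool: str                # the HR system and version in use
    gpai_model: str                 # underlying GPAI model and version, per the vendor
    vendor_documentation: list = field(default_factory=list)  # docs received from the vendor
    bias_assessment: str = ""       # date and outcome of your latest fairness check
    human_oversight: str = ""       # who reviews outputs, and at what point

record = HrAiAuditRecord(
    process="CV screening for graduate intake",
    vendor_tool="Example screening suite v3.2",         # hypothetical product name
    gpai_model="Example GPAI model, mid-2025 release",   # hypothetical model identifier
    vendor_documentation=["EU AI Act technical documentation pack received from vendor"],
    bias_assessment="Quarterly adverse-impact review; last completed Q1",
    human_oversight="Recruiter signs off every shortlist before candidates are contacted",
)
print(record)
```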
The Bottom Line
The term FLOPs may be technical, but its implications for HR are entirely practical. It dictates the questions you must ask your vendors, the contractual assurances you need, and the level of internal governance you must implement.
By treating vendors of large GPAI systems as critical suppliers, you can build a defensible and responsible AI strategy. This is no longer just about efficiency gains; it's about demonstrating fairness, ensuring compliance, and ultimately, building trust with your candidates and employees.
Put the “Responsible” in AI for HR
Stay informed with the H.A.I.R. Newsletter: Pragmatic insights and authoritative commentary on AI in HR, straight to your inbox. Subscribe now.
Check your EU AI Act readiness with QuickScore: A rapid assessment to show where you stand today, and what you need to strengthen. Take your QuickScore here.
Advisory & GRC Programmes: Partnering with HR and TA leaders to design and implement AI governance frameworks that stick. Book a consultation.
Build your team’s AI literacy: H.A.I.R. training courses that move your people from “AI curious” to “AI capable.” Explore courses.
Assess AI readiness at the talent layer: With genAssess, evaluate practical AI skills internally and in your hiring process. Discover genAssess.
Thank you for being part of H.A.I.R. I hope this deep dive helps you navigate the complexities of AI in HR with greater confidence and control.
Until next time,
H.A.I.R. (AI in HR)
Putting the AI in HR. Safely.