
Stop Hiding Behind “Human-in-the-Loop”


After two full days immersed in the critical conversations at KENX's AI in GxP conference, I found myself reflecting on those experiences during a quiet, chilly afternoon in a Dublin café. There is something remarkable about the environment here: I felt at home, surrounded by professionals unafraid to tackle the most demanding questions, engaging in the kind of candid and incisive inquiry our industry so urgently needs. Such rigor is uncommon and invaluable. My hope is that this mindset of genuine, challenging discourse spreads within their organizations, because the life sciences industry relies on this level of engagement to advance responsibly.

And yet I’m still left with a pang of worry, because I'm starting to get genuinely irked by the term “human‑in‑the‑loop”, and if I hear it one more time, I may actually flip my lid. 💥😱🤐😤 There may not be an emoji that captures my concern with the term. Why, you may ask? I've lived how this story ends before. I hope I'm wrong and that I'm not watching history repeat itself.

Why “human‑in‑the‑loop” is starting to sound like a cop‑out to me

For nearly 30 years I’ve fought hard to build risk‑based digital validation systems that actually protect patients and reduce real risk. We've had "human-in-the-loop" in a variety of ways for the last several decades, and yet resistance to change has been met with varying levels of understanding; to this day, the vast majority of validation lifecycle systems are at best 'paper on glass' with multiple layers of 'approval signatures'. My concern is that in many rooms today, "human-in-the-loop" is a defensive reflex: a way for quality leaders to limit AI in GxP, preserve old controls, avoid the hard regulatory conversations, and keep change at arm’s length.

Why Quality Teams Are Using the Phrase as a Shield and What It’s Costing Us

Walk into any meeting about AI in healthcare or life sciences, and you’ll hear the same phrase tossed around like a safety blanket: “Don’t worry, we’ll keep a human in the loop.”

It sounds responsible. It sounds safe. It sounds compliant.
But increasingly, it has become something else entirely:
A shield.

A way for quality teams to justify inaction.
A way to outsource the hard decisions.
A way to avoid learning new technology.
A way to preserve the comfort of outdated processes.

AI isn’t the problem.
Our dependence on slogans instead of systems is.

The Comfort of a Phrase and the Cost of Using It Wrong

When “human-in-the-loop” becomes the reflex answer to AI risk, two predictable failures follow:

1. Systemic risk remains baked in

Processes continue to rely on brittle human memory, reaction time, and cognitive load, all factors we already know break under pressure. That’s why true autonomy remains rare in medical devices: trust, integrity, reliability, and competence must be proven at the system level, not the human level.

2. Talent drains while quality teams stagnate

Engineering races ahead with modern architectures, adaptive models, and continuous monitoring.
Quality clings to old controls that were designed for the pre-AI era. The result? Delays, inefficiencies, and organizations that look “compliant” on paper but are dangerously exposed in reality.

“Human-in-the-loop” doesn’t fix this gap. It institutionalizes it.

Three Blunt Truths We Can’t Keep Ignoring

Truth 1: Humans are not a safety valve for bad design.

If your process only works when humans behave perfectly, then your process is broken.
We know this. We’ve lived it. And we’ve seen the catastrophic consequences when the AI works… but the human system around it doesn’t.

We have seen exactly how this leads to errors: AI systems correctly flag risk or disease days earlier, but humans ignore, miss, or misinterpret the information, often with life-threatening results.

Truth 2: “Human-in-the-loop” is meaningless unless it is operationalized.

Who is the human?
What is the decision point?
What authority do they have?
What training? What metrics?
What escalation path?
What audit trail?
What confidence thresholds?
What fallback modes?

If you can’t answer those questions, you don’t have “human-in-the-loop.”
You have human-in-the-way.
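
To make that concrete, here is a minimal sketch of what an operationalized checkpoint could look like. This is not a standard or anyone's actual system; the structure, field names, and values are illustrative assumptions. The point is only that every question above should map to a field somebody has to fill in.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HumanCheckpoint:
    """Illustrative definition of one human intervention point in an AI-assisted workflow.

    Each field answers one of the questions above; a blank field means the
    'human in the loop' is not yet operationalized at this decision point.
    """
    decision_point: str           # where in the workflow the human acts
    reviewer_role: str            # who the human is (a role, not a name)
    required_training: List[str]  # training that must be on file before they can review
    authority: str                # e.g. "may override model output" vs. "acknowledge only"
    confidence_threshold: float   # model confidence below which human review is mandatory
    escalation_path: str          # who is engaged when the reviewer disagrees or times out
    fallback_mode: str            # what the process does if no qualified human is available
    audit_fields: List[str]       # what is recorded for every decision
    performance_metrics: List[str] = field(default_factory=list)  # how human and model are measured

# Example with placeholder values (not recommendations):
release_review = HumanCheckpoint(
    decision_point="batch-record anomaly flagged by the model",
    reviewer_role="QA reviewer, level 2",
    required_training=["model limitations briefing", "escalation SOP"],
    authority="may override model output with documented rationale",
    confidence_threshold=0.80,
    escalation_path="QA manager within 4 hours",
    fallback_mode="hold the batch; no silent auto-release",
    audit_fields=["model version", "model score", "human decision", "rationale", "timestamp"],
    performance_metrics=["override rate", "time-to-decision", "agreement with later outcome"],
)
```

If your process cannot fill in every field of a record like this, the review step exists on a slide, not in the system.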

Truth 3: Quality teams must learn the technology.

You cannot validate what you do not understand.
If you can’t assess model drift, dataset lineage, confidence scoring, monitoring strategies, or bias controls…
then you cannot meaningfully participate in modern quality work.
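
As one small example of what "assess model drift" means in practice, here is a hedged sketch that compares a model's recent score distribution against the distribution it was validated on, using a two-sample Kolmogorov-Smirnov test. The data, threshold, and what you do with the flag are illustrative assumptions, not a prescribed method, and output drift is only one narrow slice of a real monitoring strategy.

```python
import numpy as np
from scipy.stats import ks_2samp

def check_score_drift(validation_scores, recent_scores, alpha=0.01):
    """Flag drift if recent model scores no longer resemble the validated baseline.

    Covers output drift only; input/feature drift, label drift, and
    performance decay each need their own checks.
    """
    stat, p_value = ks_2samp(validation_scores, recent_scores)
    return {
        "ks_statistic": float(stat),
        "p_value": float(p_value),
        "drift_flagged": bool(p_value < alpha),
    }

# Illustrative usage with synthetic numbers (not real data):
baseline = np.random.beta(2, 5, size=5000)      # score distribution at validation
production = np.random.beta(2.6, 5, size=2000)  # scores from the most recent review period
print(check_score_drift(baseline, production))
```

None of this is exotic, but a quality team that cannot read it cannot challenge it either.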

This is the same lesson behind the fact that only a handful, less than 1%, of FDA-approved AI medical devices are truly autonomous.

Quality isn’t trained for this world.
And instead of learning it, many hide behind the phrase that protects their familiar process landscape.

System of Systems

Keeping humans “in” is not inherently safer.
Sometimes, it’s exactly the opposite.

Because humans get fatigued.
Humans misunderstand probabilities.
Humans ignore alerts they don’t trust.
Humans skip steps under pressure.
Humans introduce bias the model would never dream of.

We’re in an era where AI systems can outperform humans in diagnosis, detection, and prediction.
Yet we build workflows that require the human to be the brake, the validator, the safety net, not because the AI needs it, but because the human wants comfort.

That’s not safety.
That’s avoidance.

What We Need Instead: Human-AI Systems That Are Designed, Not Assumed

“Human-in-the-loop” should not be the default.

It should be the outcome of a design choice rooted in context, risk, and intended use.

We need:

  • Defined human roles, not vague oversight

  • Structured intervention points, not ad hoc decision-making

  • Training as part of validation, not an afterthought

  • Metrics that measure both human and model performance

  • Clear authority lines for override vs. acceptance

  • A governance model that evolves as the model learns

And most importantly:

We need quality teams who understand AI deeply enough to make these decisions.

Not in theory.
Not in slogans.
In system design.

My Hope...

“Human-in-the-loop” is not a shield.
It is not an excuse.
It is not a substitute for understanding the technology we are validating.

AI is a tool.
Quality’s job is to shape how the tool is applied.
Not to hide behind a slogan that preserves old failures.

If quality teams want to remain relevant and trusted, we must stop using a phrase to mask our discomfort with a rapidly changing world.

The future of validation is not human OR AI.
It is human-shaped AI and AI-shaped human workflows.

And that future starts when we stop hiding behind the loop…
and start redesigning it.