Modernizing Validation: Why More Data Isn’t Always Better

Written by Dori Gonzalez-Acevedo | Dec 7, 2025

At KENX Puerto Rico, I sat with practitioners who are deep in the complexity of validation and wrestling with where and when AI intersects it. The pattern was loud and clear:

Our technology has moved beyond 'standard' validation practices.
Our validation and governance mindset is still stuck in 1997.

We’ve clung to a belief that more artifacts = more assurance. Lately, that’s evolved into an even more dangerous myth:

“Collect as much data as possible. Just in case.”

That’s not risk management.
That’s liability management, and not in a good way.

The Illusion of Control: Documents, Data, and False Safety

For years, our industry equated safety with volume:

  • More validation documents

  • More signatures

  • More attachments

Then came the next leap: if more documents are good, more data must be better.

It sounds reasonable. It feels thorough. It’s also flat-out wrong!

Every extra dataset you collect carries:

  • More security exposure

  • More governance overhead

  • More conflicting “sources of truth”

  • More potential misalignment with your own procedures and policies

If you can’t curate, interpret, govern, and protect that data, it’s not an asset; it’s a compliance risk.

The job of quality and validation is not to hoard artifacts or data. It’s to answer one core question:

Do we trust this system to do what we say it does, in the context we say it does it?

That requires clarity, not accumulation.

AI Governance: Five Assessments, Zero Alignment

AI has poured gasoline on this problem. Across many multinational life sciences companies, I’m seeing:

  • A vendor assessment

  • A standard risk assessment

  • An AI-specific assessment

  • A security assessment

  • An impact assessment

All for the same tool. Each written by a different group. None truly reconciled. Everyone exhausted.

AI isn’t a separate religion. It’s another technology with:

  • An intended use

  • A risk profile

  • A lifecycle that must be governed

When we bolt on separate AI committees, AI forms, and AI processes, we’re not “modernizing.” We’re rebuilding the same fragmented mess we created in 1997 with better or different buzzwords.

The opportunity is to integrate AI into existing risk and quality frameworks, not tack it on as a fifth or sixth lens.

The Human Stakes: Why This Is Bigger Than SOPs

This isn’t an abstract governance issue for me. This year, my girlfriend was nearly killed, not by lack of technology, but by a system failure of people and process. Her data existed. Her risk was knowable. Her status was in the electronic record.

The ER never read it. That’s what happens when systems are fragmented, when we treat data as a checkbox instead of a lifeline, and when we design processes for compliance optics instead of actual decision support.

AI can help. It can also hurt. The difference is whether we prioritize trust, context, and intended use over theater.

From Hype to Practice: What “Good” Looks Like

If you want to see credible AI validation in action, look at FDA’s De Novo pathway and systems like IDx-DR:

  • Clear, narrow intended use

  • Locked algorithms under change control

  • Real-world data in multi-site studies

  • Continuous monitoring for drift (a minimal sketch follows this list)

  • A defined bandwidth of acceptable outcomes, not a fantasy of identical results
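To make those last two bullets concrete, here is a minimal Python sketch of drift monitoring against a predefined acceptance band. Every name, metric, and threshold in it is an illustrative assumption, not IDx-DR’s actual monitoring logic; the point is that the band and the response are agreed up front, so monitoring produces a decision rather than just more data.

```python
# A minimal sketch of drift monitoring against a predefined acceptance band.
# All names, metrics, and thresholds are illustrative assumptions, not the
# actual IDx-DR monitoring implementation.

from statistics import mean

# Hypothetical acceptance band, agreed up front in the validation plan:
# monitored sensitivity must stay within these bounds.
SENSITIVITY_LOWER = 0.85
SENSITIVITY_UPPER = 1.00

def check_drift(weekly_sensitivities: list[float]) -> str:
    """Compare observed performance to the predefined acceptance band.

    The output is a disposition, not a pass/fail checkbox: results may
    vary inside the band, and only out-of-band drift triggers action.
    """
    observed = mean(weekly_sensitivities)
    if SENSITIVITY_LOWER <= observed <= SENSITIVITY_UPPER:
        return f"within band (observed={observed:.3f}); continue monitoring"
    return f"out of band (observed={observed:.3f}); open a change control review"

# Example: two hypothetical four-week monitoring windows
print(check_drift([0.91, 0.89, 0.88, 0.86]))  # within band
print(check_drift([0.84, 0.82, 0.83, 0.80]))  # out of band -> review
```

The design choice worth copying: the out-of-band branch routes to change control review, not to silent retraining, which is how a locked algorithm stays locked.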

You don’t need this level of rigor for every AI use case. But you do need a right-sized, risk-based approach anchored in:

  1. Fitness for purpose, not checkbox templates

  2. Evidence of trust, not evidence of activity

  3. Continuous learning, not one-and-done validations

The Conversation Continues: From Puerto Rico to Barcelona

What started at KENX Puerto Rico doesn’t stay there. At ISPE Pharma 4.0 in Barcelona this week, the real work is to connect the dots:

  • CSA and risk-based digital validation

  • AI and machine learning in real workflows

  • Pharma 4.0, data integrity, and human factors

  • Governance models that reduce noise instead of adding more

If you’re in Barcelona, this is your moment to challenge the “more data, more documents” reflex and start designing systems that actually serve patients, users, and regulators.

Want Help Reinventing at Your Site?

At ProcellaRX, we’re putting these principles into motion through:

🧭 The Courage to Reinvent

A body of work focused on rethinking validation, quality, and AI governance for the next decade.

🧪 The Reinvention Lab

On-site, immersive experiences that help your teams:

  • Identify where your validation and AI practices are stuck in old paradigms

  • Map out risk-based, modern approaches grounded in CSA and real-world data

  • Practice new ways of working across QA, IT, business, and regulatory

We’re now scheduling Reinvention Lab engagements for 2026.

👉 Email ProcellaRX today to bring The Reinvention Lab to your site in 2026 and give your teams the structure, and the courage, to move beyond “collect everything” and into trustworthy, modern, human-centered validation.