ISPE Europe Annual Conference | Copenhagen, Denmark | April 20–22, 2026
Last summer, I was in London at an industry conference when my phone lit up with a cascade of messages.
My partner had a 104-degree fever. I had just read her MRI results in our patient portal. As a biochemist, I knew immediately what the imaging meant. Stage four metastatic prostate cancer, spread throughout her lymph nodes and bones.
My partner is a transgender woman.
Across six time zones, I watched the emergency room try to take her off the estrogen she had been on for over a decade. The same estrogen that evidence now suggests had been extending her life by suppressing testosterone-driven PSA effects. The system that was supposed to protect her couldn't account for who she actually was. There was no dropdown in the EHR for a transgender patient. The AI read the imaging. The humans read the gender marker. Nobody reconciled the two.
That is not a regulatory gap. That is a design failure. A failure of the humans who built that system to ask, before they built it: who will actually be using this, and what will they need?
There have been more human failures in the year since, and her journey has been a struggle. It is a constant reminder of how we, as part of this ecosystem, can do better.
The ISPE Europe Annual Conference gathered almost 2,000 practitioners, regulators, and executives across three days of technical sessions, keynotes, and the kind of hallway conversations that rarely make it into slide decks. I attended the digital compliance track, led by Oliver, Josef, Niels, and Heather. It formed a cohesive thread across sessions covering AI governance, data integrity, validation architecture, regulatory frameworks, and the human conditions that determine whether any of it works.
What I heard, repeatedly and from every direction, was a single argument wearing many different clothes:
We already have what we need. We keep acting like we don't.
The frameworks exist. The methodology exists. Risk-based CSV has been the standard for nearly thirty years. The regulatory guidance is more flexible and enabling than most organizations assume. What the industry is missing is not better tools. It is the willingness to apply what we already know, to be honest about what isn't working, and to design systems for the humans who actually have to live inside them.
That willingness has a name in the work I do. It is Step One: Admitting the Challenge.
Everything that follows in this account is evidence of an industry circling that step from every angle, in every session, across all three days, and not quite landing on it.
On International Women's Day, I opened a panel by asking each participant to define a single word before we discussed anything else.
Define trust. Not as a value. As a precise term.
The hesitation was telling. Because trust is the word the pharmaceutical industry builds entire governance frameworks on, and almost never defines.
Mayer, Davis, and Schoorman (1995) give us the gold standard: trust is the willingness to be vulnerable to the actions of another party based on the expectation that they will perform as needed, irrespective of your ability to monitor or control them. Three conditions make trust rational: ability, benevolence, and integrity. All three. Not one. Not two. And they must be demonstrated, not assumed.
Apply this to an AI system in a GMP environment. Can it perform reliably within its intended use? Is it designed in service of patient safety rather than operational convenience? Does it operate transparently, according to validated, auditable principles?
If yes, with evidence, you have the foundation for trust.
If you are answering with principles, intentions, and commitments to further guidance: you have aspiration.
The EMA's Dr. Hilmar Hamann spent his morning session invoking trust as a foundational principle for AI in regulated environments, and then described, in the same session, exactly how EMA validated their own AI tool before deploying it. Scientific Explorer: generative AI used by over 400 regulatory assessors daily. They defined the intended use. They tested performance against a manually maintained reference database. They involved end users in validation. They governed data protection and AI risk in parallel. They monitored at scale.
That is not a vague principle. That is Computerized Systems Validation applied to a new category of system.
So I will ask the question plainly: if the regulator validates AI before deploying it, what exactly are we still waiting for permission to do?
The afternoon's most instructive session was also its most honest.
A joint presentation from Boehringer Ingelheim and their equipment partner described a real-world RAG-based AI co-pilot built on five gigabytes of equipment manuals. It was designed to support lab operators at 3AM during fermentation issues, when no expert is available and the cost of a wrong decision is measured in batch failures and lost weeks.
The concept worked. The execution ran into the wall that every AI implementation in this industry eventually runs into.
The data was everywhere. Unstructured. Conflicting. P&IDs the LLM couldn't read. Tables it couldn't interpret. Document versions with no clear authority. Five gigabytes of information that contained the answers and couldn't reliably produce them.
The presenter said something I want every quality professional to sit with:
"We still ask for documents when we deliver a project. Nobody yet is really asking for the data behind the documents."
Our entire quality infrastructure is built on document management. Documents that contain data but are not themselves data. Documents that capture decisions but do not preserve the reasoning architecture that produced them. We have been printing the output and throwing away the input. Now we are trying to train AI systems on the printouts.
This is not a new problem that AI created. AI just made it impossible to ignore.
Admitting that is Step One at the data layer.
Lars Peterson, President and CEO of Fujifilm Diosynth Biotechnologies, said the thing nobody else was willing to say: pharmaceutical manufacturing facilities built in the 1980s and those built today look essentially the same. We keep building capacity instead of capability.
His ambition, one operating system across all sites with one governance model and one update architecture, is the Waymo model applied to pharmaceutical manufacturing. When you encounter a situation you've never seen before, you simulate it ten thousand times and distribute the learning to every node in the network.
"We're never more than five years away from somebody eating our lunch."
He was not speaking hyperbolically.
Suzanna Olsen of Roche named the mechanism that keeps us stuck: the complexity tax. Siloed quality functions, fragmented IT landscapes, governance layers that add weight without adding judgment, platform standards that drift into a thousand local configurations. The changeover at some facilities now takes longer than production. The complexity tax is paid in speed, in quality, and in the cognitive overhead of every professional trying to do good work inside a system designed for a world that no longer exists.
Her prescription: brilliant basics. Lean operations, rock-solid data foundations, AI as deliberate infrastructure investment rather than headline. Her challenge, which I am still turning over: respond from every seat.
In quality and validation, we have a long history of responding from the back seat. Reviewing what others built. Validating what others designed. Documenting what others decided. That posture is no longer viable. With AI, it is an organizational liability.
UCB's Nadia provided the most honest case study of the day: a system in live operation that had never been validated. Inspection pressure revealed it. Fifty CAPAs discovered in quality management review. Draft documents never approved. Requirements never updated. Invisible problems because the data was scattered across Excel files, Word documents, and multiple databases with no single source of truth.
Her lesson: the problem was not that people were doing validation wrong. The problem was that nobody had visibility into the full picture. The data existed. It was just everywhere.
Frozen data is not safe data. It is invisible liability.
Her other lesson, which I have been giving organizations for years: stick to out-of-the-box configuration. They over-customized, went back to standard, and are now benefiting from vendor upgrades they would have missed. The impulse to make the tool fit every nuance of your existing process is exactly the impulse that perpetuates the complexity tax.
Robert Han has spent 25 years at a major global pharma company. He called it the dentist syndrome, and I think he named our entire industry's dysfunction in two words.
If the dentist comes, you're going to have a problem. So you act. Not because you care about your teeth. Because the dentist is coming.
He described sitting through risk assessments where the highest-ranked risk, the one driving the entire validation strategy, was not patient safety. Not product quality. Not data integrity.
It was compliance. The risk of not looking compliant. The risk that an inspector would come and not like what they saw.
I cannot count the number of organizations that have built entire validation programs around the imaginary inspector standing behind the door rather than around the real patient at the end of the process.
Tala Fakori, formerly of FDA, named the disease that keeps the dentist syndrome alive: pilotitis. Every organization has a list of hundreds of promising AI pilots. They die in quality review. They die in legal. They die when someone in middle management says the thing that kills more good ideas than any regulator ever has: we can't be the first.
Her answer was direct. The FDA's AI framework is proportionate, risk-based, and not prescriptive. Know your context of use. Understand your model risk. Show that a human being is genuinely responsible for what the AI outputs. That is not a new framework. That is CSV applied to a new category of system.
The EMA's Edokia Korakianti Niki said it from the regulatory side: innovation does not require more regulation. It requires more engagement. The regulatory frameworks we have are more flexible and more enabling than the industry assumes. They are built on principles rather than specific technologies, and those principles accommodate evolution.
The afternoon regulatory panel converged on a single word when asked what one action would move the dial in the next twelve to twenty-four months.
Engage.
Not as a platitude. As a specific commitment: to pilots, to comment periods, to early regulatory conversations before the dossier is written, to sharing what didn't work alongside what did. Marcel, chair of EMA's Quality Innovation Group, put it simply: "You are the expert. Come and tell us what's happening so we can understand it. If you demonstrate that you know your process and your product, we can work with anything."
There is a pattern I have been watching for years in the digital validation space, and Copenhagen crystallized it.
Organizations are not resistant to improvement because they don't understand their problems. They are resistant for the same reasons patients don't take their medication: present bias, complexity avoidance, authority resistance, and the conviction that feeling fine today means nothing is wrong.
The open-source fragmentation problem, where well-intentioned groups spin up subgroups that each solve the same problem in isolation while protecting proprietary gates, is a symptom of an industry that has not yet accepted that quality infrastructure is not a competitive advantage. It is shared terrain.
The analogy that keeps resonating with me: cellular roaming. Today you travel across countries with no data disruption because the industry accepted commodity infrastructure as the foundation and competed on the experience layer. The underlying protocols are not proprietary. The applications built on top of them are.
Pharmaceutical data infrastructure is not there yet. Our underlying data structures are siloed, proprietary, fragmented. Every vendor in the validation lifecycle management space wants standardization in principle and fights it in practice because their pricing power lives in the lock-in.
I said it in the hallway in Copenhagen, I say it often and I will say it again here: Quality is not proprietary. Progress is not optional.
Individually, the sessions across three days were technically strong. Collectively, they were a community circling the same center from different angles, in different languages, with different levels of willingness to say the quiet part out loud.
The center they were circling: we have the frameworks, the methodology, the regulatory guidance, and the technology. What we have not yet built, consistently and at scale, is the human infrastructure that makes all of it actually work. The culture that acknowledges problems rather than minimizing them. The leadership that designs for the person doing the work rather than the inspector reviewing it. The literacy that lets every professional understand what the system is doing and why.
In the book I am publishing later this year, I describe this moment, the one before genuine transformation is possible, as Step One: Admitting the Challenge. Not the plan. Not the technology. The willingness to give an honest diagnosis before reaching for a prescription. To look at the systems you have built, the SOPs nobody reads, the validation packages documenting states that no longer exist, the AI pilots dying in middle management, and say clearly: this is not working, and I am responsible for helping change it.
ProcellaRX built the Reinvention Lab because that step is the one most organizations cannot do alone. We brought together 25 practitioners from across the industry not to share best practices, but to share honest assessments. To sit in a room of peers and say what the industry keeps presenting as solved that isn't. To do Step One together, with the protection and the accountability of a cohort.
The 2030 white paper was the output. But the more important result was a group of people who had named the problem clearly, together, without the cover of a polished slide deck.
That is what "better together" means in this context. Not shared tools. Shared truth.
What emerged from that work, and what I saw evidence of across three days in Copenhagen, is the foundation of what I call human-centric design quality intelligence. Not a product category. A standard. The recognition that every framework, every agent, every validation strategy ultimately passes through a human being, and that human being deserves systems designed for their judgment, not against it.
We are building toward that standard collectively. Copenhagen was three days of evidence that the industry knows it. The question is whether it is ready to admit the challenge out loud.
Annex 22 is coming. When it lands, it will formalize a regulatory framework for AI in pharmaceutical manufacturing. Organizations treating AI governance as a future-state problem will find themselves years behind.
But the harder truth is that Annex 22 should not be necessary.
The organizations that will be ready are not waiting for it. They are applying CSV, the risk-based context-driven validation methodology this industry has had for nearly thirty years, to the AI systems they are deploying today. They are defining context of use. They are generating objective evidence. They are building continuous governance rather than point-in-time compliance. And they are designing their systems for the humans who have to live inside them.
We already have everything we need.
The question is not when regulators will tell us how to handle AI in validation.
The question is what exactly we have been waiting for.
Read the companion piece: The Medicine You Keep Refusing — the behavioral science behind why organizations resist what they already know they need to do, and what Step One looks like when an industry finally finds the courage to take it.
Dori Gonzalez-Acevedo is the Founder & CEO of ProcellaRX LLC, a women-owned, LGBTQ+, and minority-certified strategic consulting firm specializing in CSV, CSA, Digital Validation, and Quality Transformation. She co-leads the ISPE Digital Validation Subcommittee, co-authored the ISPE Good Practice Guide: Digital Validation, and hosts the podcasts Software Quality Today and Women Leading Validation. Her book, The Courage to Reinvent, is forthcoming September 2026.