AI in Pharmacovigilance: Key Takeaways from Boehringer Ingelheim’s Recent Rollout

An Interview With...

Claudia Lehmann
Head of Patient Safety and Pharmacovigilance Operations
Boehringer Ingelheim

Lucinda Smith
Chief Safety Product Officer
ArisGlobal

In conversation with Pharmaceutical Outsourcing, Claudia Lehmann, Head of Patient Safety and Pharmacovigilance Operations at Boehringer Ingelheim, and Lucinda Smith, ArisGlobal’s Chief Safety Product Officer, reflect on lessons learned from Boehringer’s deployment of AI to date in targeted use cases, starting with adverse event case intake and processing.

Why AI?

What prompted Boehringer Ingelheim to deploy AI in a patient safety context?

Claudia Lehmann, Boehringer Ingelheim (CL): A lot of our work over the last five years led up to this point. Rule-based automation had already produced considerable efficiency and quality gains, but we knew there was still untapped opportunity, and the rise of Generative AI (GenAI) was well timed.

We had always wanted to harness AI as part of our pharmacovigilance (PV) activities, and in time we expect to do this in aggregate document writing, analytics, and other tasks. The important thing was to get started though - to build experience and expertise in a relatively safe space and then broaden its application over time.

We identified case intake and case processing as good use cases for AI. Compared with other applications in patient safety, this one is easily controllable through human review. It also gave us the chance to apply what we’d learned about which stages of structuring information, and which approaches, work best.

Lucinda Smith, ArisGlobal (LS): This incremental approach is a good one. By delaying AI uptake until some more optimal moment, you forfeit the early wins, such as operational efficiencies, as well as the chance to build knowledge. With the pace of technology advancement as it is, the longer you wait, the further you fall behind.

Scoping the Deployment

Did you set up a project?

CL: Because we were starting small, we defined this as a technical initiative, with scope to control and keep monitoring it. We wanted to work in a quick and flexible way, keeping decision making close to the topic and enabling continuous interaction.

Our computer system validation group guided us too - on how to factor use of AI into our validation and testing plans, and on the risk assessment we would need to do (what we would have to document, how we would assess mitigation, and how we would outline quality control processes). You can’t just jump into AI without understanding and planning for all of this. We had to factor in our case processing vendor here as well, because they would be performing the quality control and would need to understand where the information is coming from, and how AI-enabled activity would differ from existing, validated, rule-based automation. Taking the time to work through this also ensured the vendor’s team didn’t see the technology as a threat.

Technical Hurdles and Change Management

Did you run into any technical issues along the way?

CL: When we started testing in the sandbox [a controlled, isolated environment used for testing software/code without affecting live systems or data], we experienced some issues arising from differences in the interfaces.

This is because the AI functionality was integrated into existing workflow automation, and it caused some frustration when it didn’t work. But it was a useful reminder of the need to consider and re-validate not just individual elements but also the overall process when introducing or enhancing automation. Process qualification means testing that everything that goes into the AI engine comes back, for instance; that the fields are extracted into the right data points in the system; and that the whole process still works within the system. Initially that wasn’t the case but, with adjustments in the system, we made it work.
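The round-trip check described above can be sketched in a few lines. This is a hypothetical illustration, not Boehringer’s actual qualification suite: the field names and the `round_trip_gaps` helper are assumptions made for the example.

```python
# Illustrative process-qualification check: verify that every field sent to
# the AI engine comes back in its output. Field names are hypothetical.

EXPECTED_FIELDS = {"patient_age", "event_term", "onset_date", "drug_name"}

def round_trip_gaps(sent: dict, returned: dict) -> list[str]:
    """Return the expected fields that were sent but are missing from the output."""
    return sorted(f for f in EXPECTED_FIELDS if f in sent and f not in returned)

sent = {"patient_age": "54", "event_term": "nausea", "drug_name": "DrugX"}
returned = {"patient_age": "54", "event_term": "nausea"}
print(round_trip_gaps(sent, returned))  # ['drug_name'] -> flag for review
```

A gap list like this would feed the human quality-control step rather than replace it.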

LS: Even at this scale of initiative, change management is critical – due to changes to the way people operate, to their mindsets, to processes, and to culture. There is a need to build trust, as well as skills.

CL: We had an advantage here, having started our automation journey at least five years earlier. We had experience of where and when reviews of data are needed, for instance, and when we might be able to let go of a particular anxiety as we move away from manual processes and change the associated controls.

We have adopted the “Four Eyes” principle (a second set of eyes), which has been helpful in building confidence. It ensures an optimum level of control over data as we defer increasingly to AI. We need to be mindful of the risks. We’re providing a wealth of guidance for users on everything from “What is AI?”, “What is an algorithm?”, and “What is inference?”, to “What are hallucinations?” and “What is the risk?” Although not everyone needs to become an AI developer, teams must understand safe use and the personal accountability that sits with each user.

There is a middle ground between blindly trusting AI and being so risk-averse that you reject the technology. When AI suggests something, we have to understand it and be able to rationalize it, using our logical minds. People need to learn to discern where the technology genuinely adds value.

Critiquing Processes: Reviewing the Relative Roles of Tech and Human Teams

How do you see people’s roles evolving, as AI becomes more embedded in PV?

CL: AI presents an opportunity to think about the way we work, and to suggest ideas if there is a process we don’t like. Even now there are practices we have carried forward from the times when they were manual and paper-based. Introducing AI presents a chance to review whether there is scope to reinvent a process in an electronic/digital context. The end goal, though, is always good-quality data and a robust PV system.

It’s also in this context that we need to think about the evolution of PV roles. If we look across a whole process, where do our experts need to jump in; where does AI help; where and how does rule-based automation contribute; and where will targeted training help people make a positive difference? That could be in analyzing exceptions, for instance, as technology takes over more of the transactional work.

The same scrutiny should apply at the case processing vendor’s side. The more that these companies can harness automation options including AI to streamline transactional work, the greater the scope for their own teams to add new value.

Revisiting PV’s Raison d’Être

Based on everything you have learned to date, what are your main recommendations to other pharma companies considering deploying AI in their own PV activities?

CL: It would be to get started, and build experience. If you start small, you have a chance to iron out issues before extending AI-based automation to larger work volumes or new use cases.

The wider opportunity is to capture rich information that might otherwise be missed in free-text patient narratives. Every patient who calls us and tells us their story adds to our understanding of a drug’s safety profile. Even a non-serious case might include something in the free text that points to serious event information, and we cannot ignore that. We owe it to the safety of our patients to distil more of those critical insights.
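The idea of surfacing serious-event signals buried in free text can be illustrated with a deliberately naive sketch. Production PV systems use validated NLP/AI models and MedDRA coding; the keyword list and `flags_serious` function below are assumptions made purely for illustration.

```python
# Naive keyword screen for ICH seriousness criteria in a free-text narrative.
# Real systems use validated AI/NLP models; this only shows the concept of
# flagging narratives for human review.

SERIOUSNESS_TERMS = ("hospitalized", "hospitalised", "life-threatening",
                     "disability", "congenital", "fatal", "death")

def flags_serious(narrative: str) -> bool:
    """Flag a narrative for human review if any seriousness term appears."""
    text = narrative.lower()
    return any(term in text for term in SERIOUSNESS_TERMS)

print(flags_serious("Mild headache, resolved the same day."))   # False
print(flags_serious("Patient was hospitalized overnight."))     # True
```

A flag here would route the case to a human reviewer, consistent with the “Four Eyes” principle described earlier.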


Claudia Lehmann is Head of Patient Safety and Pharmacovigilance Operations at Boehringer Ingelheim.

Lucinda Smith is Chief Safety Product Officer at ArisGlobal. She previously worked in frontline scientific and strategic Pharmacovigilance and Drug Safety roles at a major pharma brand for more than two decades.

