Under the Spotlight: Agentic AI

An Interview With Jason Bryant
Senior Vice President, Product Management – AI
ArisGlobal

In an interview with Pharmaceutical Outsourcing, ArisGlobal’s Jason Bryant discusses the premise of AI’s next high-profile incarnation, the technology’s potential in late-stage pharma R&D operations, emerging early use cases, and some of the guardrails that need to be in place to ensure trusted, impactful use.

Artificial intelligence (AI) has become a must-have technology in late-stage pharma operations, both as a practical solution for processing soaring workloads and as a means of transforming the work companies do. This comes as the industry looks globally to harness the latest advances in science, protect itself against international geopolitical tensions, and withstand pressures on price and margins.

Now, more than ever, companies need to optimize and innovate in everything they do. As AI technology itself progresses, it is being deployed with mounting momentum and increasing ambition. Generative AI (GenAI), which rose to prominence with the launch of ChatGPT in November 2022, has proved a disruptive catalyst for whole industries, offering the ability to take what has gone before, amalgamate knowledge, distill key facts, including fresh insights, and present them in intuitive new ways. That potential is now being stepped up again with agentic AI.

What has been achieved to date with GenAI in late-stage pharma R&D, and how does this lead into agentic AI and what’s ahead?

To date, functions including regulatory affairs and drug safety/pharmacovigilance have harnessed GenAI technology to enable intelligent automation of highly labor-intensive routine processes. Right now, the technology is being actively used in marketing authorization application preparation, product change control/regulatory impact assessment management, adverse event case processing, and safety reporting. Through these proliferating applications, AI has contributed to tangible improvements in process cost-efficiency.

What impact have GenAI applications had?

Gains have included accelerated task execution, honed accuracy and consistency in process output, and large-scale resource optimization, as teams of professionals have recovered capacity for more strategic and challenging tasks. In deploying AI across these discrete use cases, pharma organizations and their function leads have learned a lot about the technology’s potential and how best to leverage it to get good and trusted results. All of this has paved the way for an even bigger wave of AI advancement - in the form of agentic AI.

Can you explain what agentic AI is, and how it differs from previous forms of AI?

Agentic AI is about the autonomous coordination of goal-driven AI “agents”. It is the most significant leap in AI technology’s development since ChatGPT’s launch in November 2022. This is down to the potential to redefine the way organizations operate, and the value they deliver – which in turn is thanks to the technology’s ability to apply its own reasoning, giving agentic systems new autonomy. It marks a step change from previous incarnations of AI which involved the automation of predefined processes according to given rules. Note the distinction between automation and autonomy.

With agentic AI, there is much greater autonomy in what AI does and how. Prompted with the desired outcome, individual specialist agents each invoke their own intelligence, experience and reasoning to fulfil their part in the most effective way possible. All of this is coordinated and governed by an “orchestrator”. As well as optimizing the end-goal delivery, the orchestrator uses the collective insights to propose new ways to add value.
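The orchestrator-and-specialist-agents pattern described above can be sketched in code. This is a hypothetical illustration only: the agent names, interfaces, and the way results are pooled are invented for the sketch, and a real agentic system would have each agent invoke its own model, tools, and reasoning rather than return a canned result.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A specialist agent; names and behavior here are illustrative."""
    name: str

    def run(self, goal: str, context: dict) -> dict:
        # A real agent would apply its own reasoning and tools; this stub
        # simply records that the specialist handled its part of the goal.
        return {f"{self.name}_result": f"{self.name} handled '{goal}'"}

@dataclass
class Orchestrator:
    """Coordinates the agents and pools their collective insights."""
    agents: list
    insights: dict = field(default_factory=dict)

    def achieve(self, goal: str) -> dict:
        # Each agent contributes to the shared context, so later agents
        # (and the orchestrator itself) can build on earlier outputs.
        for agent in self.agents:
            self.insights.update(agent.run(goal, self.insights))
        return self.insights

orchestrator = Orchestrator([Agent("coding"), Agent("triage"), Agent("reporting")])
result = orchestrator.achieve("assess new adverse event case")
print(sorted(result))
```

The key design point is that the goal, not a predefined sequence of steps, drives execution: the orchestrator decides how the specialists' contributions combine to deliver the outcome.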

What could this mean for pharma?

The ability to reason, anticipate, generate insight and knowledge and make better decisions is ideal for pharma - an industry that is data-rich, process-heavy and outcome-critical. Agentic AI is not just about doing the same things more efficiently and more accurately. It can help to challenge current processes and determine what else might be possible; what other opportunities might be leveraged.

Can you give a sense of agentic AI’s current status? Is it tangible today?

Today, pre-agentic AI is enabling new cost-efficiency in R&D functions such as regulatory affairs and drug safety/pharmacovigilance. Up to now this has tended to be in discrete areas such as marketing authorization application preparation, product change control/regulatory impact assessment management, adverse event case processing, and safety reporting. Agentic AI’s vision is more ambitious, potentially enabling step changes in the role played and value contributed by Safety, Regulatory and adjacent teams.

Take the use of AI to streamline Medical Dictionary for Regulatory Activities (MedDRA) coding of adverse events, which offers considerable potential to transform the value of pharmacovigilance. Already, AI has helped boost efficiency and accuracy around the classification of adverse event data, with the potential to invoke additional reference cross-checks, or expedite next actions. Combining autonomous MedDRA coding with proactive signal triage could help to eliminate manual bottlenecks.

Can you explain how?

If designated agents detect an unusual combination of coded terms, they could raise an automated “probable signal” alert; pre-populate a signal report draft (including proposed case lists, timeline and supporting evidence snippets); and recommend a triage priority for human safety reviewers. The time to first credible signal would be shortened, and experts freed to focus on ambiguous/novel cases and investigation design. The system could also route high-risk clusters to epidemiology/medical affairs automatically and suggest immediate risk-mitigation actions (e.g., targeted communications, batch holds, enhanced monitoring), boosting human decision-making.
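A minimal sketch of the "probable signal" workflow just described might look like the following. The thresholds, term pairs, and priority rules are invented for illustration; a production system would use validated disproportionality statistics rather than raw co-occurrence counts.

```python
from collections import Counter
from itertools import combinations

def detect_probable_signals(cases, min_cooccurrence=3):
    """Flag frequently co-occurring coded terms across cases as probable signals."""
    pair_counts = Counter()
    for case in cases:
        for pair in combinations(sorted(set(case["terms"])), 2):
            pair_counts[pair] += 1
    alerts = []
    for pair, n in pair_counts.items():
        if n >= min_cooccurrence:
            alerts.append({
                "alert": "probable signal",
                "terms": pair,
                # Proposed case list for the pre-populated signal report draft.
                "case_ids": [c["id"] for c in cases if set(pair) <= set(c["terms"])],
                # Recommended triage priority for human safety reviewers.
                "triage_priority": "high" if n >= 2 * min_cooccurrence else "medium",
            })
    return alerts

cases = [{"id": i, "terms": ["Hepatotoxicity", "Rash"]} for i in range(4)]
cases.append({"id": 99, "terms": ["Headache"]})
alerts = detect_probable_signals(cases)
print(alerts[0]["terms"], alerts[0]["triage_priority"])
```

The point of the sketch is the shape of the output: an alert that arrives with its supporting case list and a suggested priority attached, so the human reviewer starts from a draft rather than a blank page.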

What about in other functions or contexts?

In a regulatory context, opportunities for agentic AI include reinventing the global management of product regulatory compliance. Autonomous, “regulation-aware” dossier assembly and submission orchestration is within reach now, for instance. It’s possible for orchestrated AI agents to continuously ingest clinical data packages, study reports, CMC documents, eTMF pointers and legacy submission artefacts.

Agentic systems can also perform automated regulatory gap-analysis versus target-region requirements, draft region-specific CTD/eCTD modules (with citations and traceability to source documents), and orchestrate the technical packaging (file naming, folder structure, etc.). Human expertise remains important; progression toward greater AI autonomy means routing suggestions for human review when potentially ambiguous scenarios arise and an expert check is needed.

In this context, the agentic system could generate a short “decision rationale” and a list of recommended human checks, and run a rules/validation pass (file integrity, cross-reference checks, local appendices). This could inform autonomous routing of items to subject experts (e.g., CMC, clinical, labelling) with suggested edits and severity scores – providing human reviewers with a near submission-ready dossier.
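The validation-and-routing step described above can be sketched as follows. The check names, the mapping from check to subject expert, and the severity scores are all assumptions made for this illustration, not part of any real submission-validation rule set.

```python
# Map each validation check to the expert who owns it and a severity score
# (hypothetical values for illustration).
CHECKS = {
    "file_integrity": ("CMC", 3),
    "cross_references": ("clinical", 2),
    "local_appendices": ("labelling", 1),
}

def validation_pass(item):
    """Return routing tickets, one per failed check, with a brief rationale."""
    tickets = []
    for check, (expert, severity) in CHECKS.items():
        if not item["passed"].get(check, False):
            tickets.append({
                "item": item["name"],
                "failed_check": check,
                "route_to": expert,
                "severity": severity,
                "rationale": f"{check} failed for {item['name']}",
            })
    # Highest-severity findings first, so reviewers see the riskiest items.
    return sorted(tickets, key=lambda t: -t["severity"])

item = {"name": "Module 3.2.S", "passed": {"file_integrity": True}}
tickets = validation_pass(item)
for t in tickets:
    print(t["route_to"], t["severity"])
```

Each ticket carries a short rationale alongside the routing decision, mirroring the “decision rationale” idea: the system explains why an item was flagged, rather than silently reordering the reviewer's queue.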

Strategically, shorter regulatory cycle times promise to accelerate go/no-go decisions and speed up patient access, while sponsors would be in a position to iterate protocols more swiftly. Meanwhile agents’ gap-analysis outputs could be fed upstream to clinical operations and protocol teams, enabling trials to be designed that need fewer regulatory clarifications over time.

What is the best advice to leaders or key functional stakeholders wishing to try out agentic AI, or build it into their core capabilities?

Any intention to deploy agentic AI assumes that the organization has a strategic rather than tactical vision for AI. That means one that capitalizes on the technology’s cumulative benefits across more than one use case.

Ultimately this requires a more embedded and systematic approach to deploying the technology. The whole point of an agentic AI system is to deliver an end goal in the best way possible, empowered to choose how best to do that, drawing on and extrapolating from everything available to it. Agentic AI offers the benefits of autonomous reasoning and decision making, as well as continuous adaptation, in reaching defined goals. The total benefits should multiply as respective agents continue to hone what they do, based on their own deductions or new insights.

What should this industry prioritize when evaluating agentic AI’s scope, given integrity, safety, transparency, and its indirect impact on patients?

Besides the obvious requirements around good data (AI output can only ever be as good as the information it is given to work with, and how readily this can be combined), those interested in exploiting agentic AI will need to consider how to establish and foster trust around AI reasoning. Where AI systems are being given new autonomy across extended workflows, potential risks could be more than just incorrect outputs – e.g. unintended data movement, loss of operational control, misaligned decision-making, and blurred accountability.

But companies also need to avoid being too prescriptive and limiting in their attempts to establish good governance (ideally, this should serve as a facilitator as well as a mitigator of risk).

Taking a principles-based approach, rather than one that is hard-wired around specifics, can better support process stakeholders in defining scenarios and goals that agentic AI can help solve. Companies can then supplement these principles with their preferred service-design methods - perhaps journey maps to set out how agentic workflows behave and evolve over time.

The idea is that trust emerges from the infrastructure itself, rather than relying on static checklists that date quickly. This is also a chance to provide for and adapt the degrees of autonomy that will be assigned to individual AI agents, setting out a path toward increasingly trusted use of agentic AI - and thereby fuller enjoyment of the technology’s benefits.


Jason Bryant is Senior Vice President, Product Management – AI, at ArisGlobal in London. A data science actuary with a background in fintech and health-tech, he focuses on AI-powered, data-driven, human-centric product innovation. He previously led a digital incubator at AstraZeneca and serves on the board of the health charity Scleroderma & Raynaud’s UK (SRUK). [email protected], www.arisglobal.com

