Why Pharma AI Projects Fail: The Human Problem Behind Technical Success

Eric Karofsky - VectorHX

Artificial intelligence (AI) is one of the most talked-about tools in the pharmaceutical world, not because of hype, but because of its practical potential. But repeatedly, promising projects stall - or outright fail - not because the models are wrong or the data is flawed, but because people simply don’t use the tools.

The uncomfortable truth? Most AI projects fail due to a lack of user adoption.

And that’s the real problem we must solve.

From Proof of Concept to Proof of Friction

In our work with a top 10 pharmaceutical company, we developed several AI-powered tools intended to make life easier for scientists and researchers. These include:

  • Systems to locate SOPs and procedural documentation buried across fragmented file shares, SharePoint sites, and legacy databases, often the result of years of acquisitions and inconsistent governance.
  • A network graph to identify internal experts based on experience, capabilities, and publication history, filling gaps left by incomplete or outdated HR systems.
  • Disease-specific knowledge repositories that synthesize insights from research, clinical, and medical affairs teams across global regions, even in the face of disconnected data, inconsistent formats, and a lack of centralized ownership.

Each project addressed a clear business need. And technically, they were successful. We connected disparate data sources, used intelligent algorithms to infer relationships, and delivered interfaces that surfaced key insights in real time.

But there was a catch: scientists weren’t using the tools.

Despite executive support, elegant architectures, and clear ROI projections, adoption lagged. Usage metrics stayed flat. Feedback was lukewarm. It quickly became clear that technical capability wasn’t the issue. The real problem was that the solutions weren’t usable, trusted, or aligned with how people worked. And as we dug in, we found more projects facing the same challenge - high on innovation, low on adoption.

When AI Becomes a Solution Looking for a Problem

Too many enterprise AI projects follow the same trajectory:

  1. Engineers build a proof of concept in a sandbox environment.
  2. The team secures the budget to scale it across the organization.
  3. The rollout focuses on technical metrics: model accuracy, processing speed, or volume of data integrated.

But here’s the disconnect: business users aren’t involved until it’s time to deploy. By then, the ship has already sailed, and it isn’t going to their destination.

We see this especially in pharma, where AI often becomes a solution in search of a problem. The tools may be technically impressive but fail to connect with the day-to-day work of scientists, clinicians, or commercial teams. Worse, they often introduce more friction instead of reducing it.

Poor User Experience Creates Walls, Not Bridges

User experience (UX) is often the most overlooked - and underestimated - part of AI adoption. Here’s what we typically see:

  • Unintuitive Interfaces: Users struggle to understand how to interact with the tool. The design doesn’t reflect their mental models.
  • Confusing Workflows: Tasks that should take seconds end up taking minutes (or never get completed) because the flow doesn’t match real-world logic.
  • Overloaded Jargon: Systems require familiarity with technical terms or naming conventions from legacy databases, which users either don’t know or don’t care about.
  • Fragmented Data: Inconsistent metadata, outdated documentation, and misaligned schemas lead to incomplete or misleading results.

Put simply, bad experience equals low trust and low usage.

You Can’t Trust What You Don’t Understand

Users don’t adopt systems they don’t understand or trust. And the disconnect usually starts at the experience level, not the technical one.

  • A critical document never surfaces, even though the user knows it exists. To them, the system simply missed the point. Engineers might point to metrics like word error rate (WER) to explain what happened, but showing WER to an end user is irrelevant and only confuses the issue. Users aren’t evaluating models; they’re judging whether the tool works for them.
  • The tool frequently returns irrelevant or misleading results. Behind the scenes, this might be due to low precision or too many false positives, but the user doesn’t care about classification metrics. What they experience is a tool that wastes their time - and that’s enough to abandon it.
  • AI-generated recommendations come with no clear explanation. Even if the model makes a defensible choice, the lack of transparency makes it feel like a guess. In scientific environments, where logic and traceability are essential, this kind of opaque reasoning breaks trust quickly.

The bottom line? These aren’t just technical limitations - they’re experience failures. When AI doesn’t meet users where they are, even the most advanced models will sit unused.

It’s Not Just UX - It’s Organizational Psychology

Even when you do have a usable, trustworthy tool, another barrier remains: organizational resistance.

  • People worry that AI will replace their role.
  • Managers fear disruption to existing processes.
  • IT leaders are cautious about introducing “yet another system.”

This resistance isn’t irrational. It’s a sign that AI adoption isn’t just about software - it’s about change management. And unless your rollout strategy includes storytelling, stakeholder alignment, and training, even the best-designed tools will struggle.

A Better Way: Human-Centered AI for Real Business Impact

If AI is going to deliver on its promise in pharma, we must move from “technology-first” to “human-first” development. Here’s what that looks like:

1. User Needs Research

Before anything is built, understand who the users are and what they actually need.

  • What are their daily frustrations?
  • How do they currently find information or make decisions?
  • What does success look like in their eyes?

And a hint: the engineer is not the user.

Too often, systems are designed around the perspective of the builder, not the person doing the work. Real insights come from sitting down with scientists, regulatory staff, medical writers - whoever will be using the system day to day - and listening carefully. You’ll almost always uncover needs and pain points no algorithm could infer on its own.

2. Design Exploration

Instead of rushing into development, explore what the experience should look like.

  • Sketch workflows based on different user types and their goals.
  • Map out key interactions.
  • Test low-fidelity prototypes before committing to code.

Find your North Star: a clear, aspirational vision of what the solution could be at its best. Then work backward to define a Minimum Viable Product (MVP) that delivers immediate value while leaving room to grow. This approach helps align stakeholders and set realistic expectations early.

3. Intelligent Interface Patterns & Libraries

One-off AI projects are expensive, slow, and hard to maintain. What’s needed is a foundation: a reusable set of interface patterns and design libraries tailored for intelligent systems.


The goal isn’t simply better UX - it’s repeatable UX.

By building shared standards for how AI systems present information, gather input, explain decisions, and guide actions, you get:

  • Faster development cycles by avoiding reinvention.
  • Consistent user experiences across tools and teams.
  • Stronger trust and familiarity, even with new use cases.

Think of it as your design and interaction “grammar” for AI.

If every project uses a different sentence structure, users must re-learn how to “read” each time. But with a consistent design language, they intuitively know what to expect and how to succeed.

Done right, this creates not just usability, but scalability. Future pharma projects benefit from the groundwork laid today.

4. Usability Testing

This isn’t optional. It’s where good ideas either evolve - or break.

Test early, test often, and test with real users. Observe where they hesitate, get confused, or go off track. Capture what delights them as well as what frustrates them.

Usability testing isn’t about validating what you think works. It’s about discovering what actually works for the people who matter most. Every round of feedback is an opportunity to improve both the product and the relationship users have with it.

5. Measure What Matters

Don’t wait until after launch to define success. The right metrics must be baked in from the start - not just for model performance, but for business value and user satisfaction.

Too often, teams celebrate precision or recall without asking:

  • Are users adopting the tool?
  • Is it saving time or reducing duplication?
  • Is it improving how work gets done? 

Build a scorecard that includes:

  • User engagement metrics (frequency, task completion, feature usage)
  • Qualitative feedback (trust, satisfaction, usability pain points)
  • Business impact (time saved, fewer errors, improved knowledge sharing)

And remember, showing your word error rate doesn’t mean much to an end user. But showing how many hours they saved last month? That’s meaningful.
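The scorecard above can be sketched as a simple rollup over usage logs. This is a minimal illustration, not a prescribed implementation: the log fields (`tasks_started`, `minutes_saved_estimate`, etc.) and the metric names are assumptions for the example, and real deployments would pull these from their own telemetry.

```python
from dataclasses import dataclass

@dataclass
class SessionLog:
    """Hypothetical per-session usage record; field names are illustrative."""
    user_id: str
    tasks_started: int
    tasks_completed: int
    minutes_saved_estimate: float  # self-reported or modeled time savings

def scorecard(logs: list[SessionLog], eligible_users: int) -> dict:
    """Roll session logs up into adoption-focused metrics:
    who is using the tool, whether tasks get finished, and time saved."""
    active_users = {log.user_id for log in logs}
    started = sum(log.tasks_started for log in logs)
    completed = sum(log.tasks_completed for log in logs)
    return {
        "adoption_rate": len(active_users) / eligible_users,
        "task_completion_rate": completed / started if started else 0.0,
        "hours_saved": sum(log.minutes_saved_estimate for log in logs) / 60,
    }

# Example: two active users out of ten eligible scientists.
logs = [SessionLog("a", 5, 4, 30.0), SessionLog("b", 2, 2, 15.0)]
print(scorecard(logs, eligible_users=10))
```

The point of the sketch is the shape of the measurement, not the numbers: adoption and completion rates speak the user's language, while model-level metrics stay behind the scenes.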

6. Rollout & Storytelling

A well-designed tool still needs a strong narrative to land inside the organization. Partner with internal communications, training teams, and business unit leads to craft a compelling story, asking:

  • Why does this matter?
  • Does it make people’s jobs easier?
  • What early wins can we celebrate?

Don’t just deploy software - build momentum.

Highlight real success stories. Let early adopters become advocates. Treat rollout as a human journey, not a technical endpoint.

Final Thoughts: People First, Always

The future of pharma isn’t just powered by AI. It’s shaped by how well we design it around people. The systems that win will be those that blend intelligence with empathy, precision with usability, and innovation with trust.

If we start with human needs, frame the problem clearly, and build with adoption in mind, AI can finally fulfill its promise: not just as a powerful technology, but as a transformative business tool.

So, let’s stop building smart tools that nobody uses and start designing intelligent systems that people trust, value, and rely on.

 

Eric Karofsky is a leading expert in AI adoption, with a focus on designing user experiences that make artificial intelligence understandable, usable, and trusted. As founder of VectorHX, a human experience agency, Eric helps companies bridge the gap between cutting-edge technology and real-world engagement. He brings more than 20 years of experience in CX, UX, and employee experience strategy, has worked with major brands like Fidelity, The Hartford, Royal Caribbean, Michelin, Reebok, and the National Institutes of Health, and led UX and Voice of Customer for The Broad Institute of MIT and Harvard. Eric’s work centers on the belief that the future of AI depends not just on innovation but on the human experience.


