Establishing Meaningful Quality Metrics for Vendor Oversight

In response to the COVID-19 crisis, the life sciences industry is investing in new and innovative therapies, as well as eClinical solutions such as those that leverage artificial intelligence and machine learning to enable novel study designs and virtual clinical trials. This flurry of R&D activity, coupled with growing uncertainty surrounding the global economy amid the pandemic, may lead to an even greater reliance on outsourcing, mainly because strategic partners have the expertise and capabilities to scale clinical trial services as needed, quickly and cost effectively.

Even beyond times of crisis, the push to accelerate the development of novel therapies to enable precision medicine is driving biopharmaceutical, biotechnology and drug-device companies to seek out new models to increase clinical trial efficiencies and reduce costs and timelines. To accomplish this, sponsor organizations are increasingly relying on outsourcing services offered by a range of clinical trial service providers, from full-service CROs to technology vendors. According to a Frost & Sullivan report, the global CRO market, valued at $45.8 billion in 2018, is expected to grow at a CAGR of 7.9 percent, reaching $71.7 billion by 2024.

This growing shift has led bodies such as the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) to address regulatory concerns related to vendor oversight, as reflected in the ICH Guideline E6(R2). Of particular note is an additional expectation of sponsors outlined in section 5.2.2: “The sponsor should ensure oversight of any trial-related duties and functions carried out on its behalf, including trial-related duties and functions that are subcontracted to another party by the sponsor’s contracted CRO(s).”

Meeting this requirement isn’t simple. Sponsors must put appropriate vendor oversight measures in place and actively manage them. Beyond documenting oversight as it relates to vendor selection, sponsors also need to collect evidence of ongoing oversight throughout the lifecycle of the project. While the scope and specific measures implemented vary from sponsor to sponsor, what consistently comes up as a key challenge among life sciences organizations is establishing meaningful metrics for vendor oversight.

Some pharmaceutical companies and biotechs have traditionally taken a more hands-off approach, while others have established complex, time-intensive processes that can feel like micromanagement to their CROs and other service providers. Putting the right metrics in place can help sponsors and their research partners find a common middle ground, where the sponsor has sufficient data to be confident that it has its finger on the pulse of all outsourcing activities and the quality of performance attached to each.

Measuring What Matters

Among sponsors, we often hear that they receive plenty of metrics from their CROs and other vendors showing success and high quality, even when there are obvious gaps, signs of lurking issues or problems that need to be addressed. “Why do the metrics come back saying everything is great when things don’t seem to be going well?” is a question that comes up time and time again among sponsors. The answer is often simple: the right metrics aren’t in place, because certain dimensions of performance are not being proactively measured.

Look at protocol approvals: a date or timeline is often cited as the key metric or performance indicator. And yet so many things depend on meeting that deadline. If a protocol needs to be amended, for example, it triggers a series of events that leads to significant delays, pushing back timelines. So how can the approval date alone be a fair or meaningful metric for vendor oversight?

Metrics Champion Consortium (MCC) members, representing more than 80 organizations across the global life sciences ecosystem, including clinical trial sponsors, CROs and other service providers, set out to address the challenges around identifying metrics for vendor oversight and to align on the most valuable measurements. A key takeaway from three years of monthly discussions is that the most important thing is not to start with everything you can measure, but rather to base your thinking on the questions the metrics should ideally help answer.

With that in mind, MCC members initially looked at three areas typically used to assess service provider performance: cost, time and quality. In addition, the group decided to include a fourth dimension: the relationship between sponsors and their clinical research partners. A partnership, by its very nature, is a relationship and ensuring that the sponsor and vendor act as one, integrated team is critical to the overall success of the project.

All of these areas or dimensions can apply to any service provider, whether a central lab, imaging vendor, technology partner or another group. Of the four dimensions outlined by MCC members, however, quality is frequently cited as the most challenging to measure and is the one that sponsor organizations have a tendency to miss. This, in turn, creates further issues, because a deliverable that meets timeline and budget requirements is of no use if the quality doesn’t meet expectations.

To avoid such misalignment, sponsors can incorporate metrics to assess whether work products meet the required quality standard. But metrics such as these provide data only after the fact; they are lagging metrics. MCC members therefore sought to develop leading metrics that can indicate, while work is still in progress, whether outputs are likely to be delivered with the appropriate level of quality. Training of vendor staff, staff turnover, and issue management and escalation are just some of the areas where sponsors can apply these quality metrics before deliverables are due.
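As a concrete illustration of a leading indicator of this kind, the minimal Python sketch below tracks the aging of escalated issues that remain open while work is still in progress. The field names, dates and 14-day threshold are hypothetical assumptions made for illustration, not MCC-defined values.

```python
# Minimal sketch of a leading quality indicator: aging of escalated issues
# still open while work is in progress. Field names, dates and the 14-day
# threshold are illustrative assumptions, not MCC definitions.
from datetime import date

# Hypothetical escalated-issue log supplied by a vendor
issues = [
    {"id": "ISS-101", "escalated_on": date(2020, 3, 2), "resolved": False},
    {"id": "ISS-102", "escalated_on": date(2020, 3, 23), "resolved": False},
    {"id": "ISS-103", "escalated_on": date(2020, 2, 10), "resolved": True},
]

as_of = date(2020, 4, 1)
aging = [
    (issue["id"], (as_of - issue["escalated_on"]).days)
    for issue in issues
    if not issue["resolved"]
]

# Flag issues open longer than an agreed escalation threshold (assumed: 14 days)
overdue = [(issue_id, days) for issue_id, days in aging if days > 14]
print(f"Open escalated issues: {len(aging)}; overdue (>14 days): {overdue}")
```

Tracked over time, an indicator like this can signal quality risk well before a deliverable is due, rather than after it has been missed.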

Inconsistencies also result from the way sponsor organizations use surveys to assess the quality of their relationships with their vendors. Often, the lack of consistency can be attributed to surveys that haven’t been designed with the end in mind – again, the question the metrics should answer. Rating scales that measure agreement with statements are useful, but can create confusion or unnecessary complexity when there are too many statements to rate. Similarly, a large number of open-ended questions can be difficult to interpret. Analyzing the data gathered from surveys isn’t always straightforward either. Comparing average scores between statements and trending those scores over time is a common approach, but it can mask important details. Where there is significant divergence of opinion, a simple average is not a sufficient summary of the data: even if a statement has a reasonable average score showing agreement, a significant number of respondents could still strongly disagree with it.
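To make the averaging pitfall concrete, here is a minimal Python sketch using made-up five-point Likert responses. It shows how two statements can share the same mean score while one conceals a sizeable block of strong disagreement.

```python
# Minimal sketch: identical average scores can hide very different response
# distributions. The responses below are invented for illustration only.
from statistics import mean
from collections import Counter

statement_a = [4, 4, 3, 4, 3, 4, 3, 4, 3, 4]  # broad, mild agreement
statement_b = [5, 5, 5, 1, 5, 1, 5, 1, 5, 3]  # polarized: same mean, strong dissent

for name, scores in [("A", statement_a), ("B", statement_b)]:
    avg = mean(scores)
    dist = dict(sorted(Counter(scores).items()))
    pct_disagree = sum(n for score, n in dist.items() if score <= 2) / len(scores)
    print(f"Statement {name}: mean={avg:.1f}, distribution={dist}, "
          f"disagreeing={pct_disagree:.0%}")
```

Both statements average 3.6, yet 30 percent of respondents strongly disagree with the second one. Reporting the full distribution, or the share of respondents below an agreed threshold, alongside the mean preserves the detail that averages hide.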

This lack of standardization across questions and responses in surveys makes it difficult to accurately and consistently measure the value of relationships over time.

Overcoming the Quality Challenge: A Framework for Success

MCC developed a framework based on defining critical success factors to help tackle the challenges associated with measuring quality as it relates to vendor oversight. To illustrate how the framework works, consider CRO staffing. A win would be having a qualified, trained team carry out its responsibilities with minimal turnover. See Figure 1.

Now that the critical success factor has been established, the next step, as MCC members agreed, is to develop key performance questions. In this case a few might include: 1) Did CRO staff meet the sponsor’s qualification requirements prior to beginning the project? 2) Did CRO staff members receive adequate training on the study protocol prior to beginning the project? and 3) What percentage of CRO staff members have changed on the project team?

Once the questions have been articulated, metrics can be designed to address them. Continuing with the staffing example, sponsors can measure the percentage of CRO staff who met qualification requirements, received adequate protocol training and are still working on the project team.
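As a rough illustration of how such staffing metrics might be computed, the Python sketch below works from hypothetical staff records; the field names and data are assumptions made for illustration, not MCC-prescribed definitions.

```python
# Minimal sketch of the staffing metrics described above, computed from
# hypothetical CRO staff records. Fields and data are illustrative only.
from dataclasses import dataclass

@dataclass
class StaffRecord:
    role: str
    met_qualification_requirements: bool
    completed_protocol_training: bool
    still_on_project: bool

def pct(records, predicate):
    """Percentage of records satisfying the given predicate."""
    return 100 * sum(predicate(r) for r in records) / len(records)

team = [
    StaffRecord("CRA", True, True, True),
    StaffRecord("CRA", True, False, True),
    StaffRecord("Data Manager", True, True, False),
    StaffRecord("Project Lead", False, True, True),
]

qualified = pct(team, lambda r: r.met_qualification_requirements)
trained = pct(team, lambda r: r.completed_protocol_training)
retained = pct(team, lambda r: r.still_on_project)

print(f"Qualified at start: {qualified:.0f}%")
print(f"Protocol-trained at start: {trained:.0f}%")
print(f"Retained on project team: {retained:.0f}%")
```

Each percentage maps directly back to one of the key performance questions, so the metric answers a question the sponsor has already agreed matters.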

Opportunities to Collaboratively Enhance Oversight of Quality

ICH E6(R2) provides an opportunity for sponsors to become more actively engaged in overseeing the quality provided by CROs, vendors and their subcontractors across portfolios. The guideline encourages sponsors to align with their research partners early on to proactively identify the critical data and processes associated with protocols, assess the associated risks that will inform risk-reduction and monitoring strategies, and perform ongoing risk management throughout studies. If these activities are completed effectively and measured in a collaborative and transparent manner, along with oversight metrics across the four dimensions, sponsors will have the data they need for adequate oversight of performance across all vendors, including subcontractors. This approach also reassures sponsors that they are not just informed of issues as they arise, but actively engaged in preventing those issues from occurring in the first place or recurring in the future.

While many sponsors have traditionally relied on a more reactive approach, today they can be more proactive when it comes to quality oversight of subcontractors. After all, the ICH guidelines make it clear that it is the sponsor’s responsibility to ensure oversight of all services, including those that CROs subcontract to third-party vendors. Frameworks such as the one outlined above can help sponsors generate evidence that their CROs and vendors are actively managing the quality of subcontractors’ performance throughout clinical studies and projects.

Successful implementation of such metric frameworks will require ongoing transparency among all involved. Sharing assumptions and expectations, and implementing a continuous, holistic approach to both data collection and the monitoring of key risk and performance indicators, are critical. Ultimately, it comes down to genuine teamwork and communication among all parties to ensure alignment and a productive, long-lasting partnership.
