First: A Solid, Dependable Data Bedrock – Then AI

Almost every organization - in life sciences as in other industries - is intrigued by the promise of artificial intelligence/machine learning as a means of delivering more value and better business outcomes. But smart systems’ intelligence and transformative potential rely on the quality, credibility and completeness of the data they are interpreting.

With this in mind, how can life sciences companies better deliver and maintain data integrity on a sustainable, ongoing basis?

Here we offer five best-practice tips for achieving a definitive, trusted regulatory and product information asset base which can support smart process automation.

In a client scenario not long ago, a large pharmaceutical company was attempting to establish a data warehouse where it could pull together regulatory information from a variety of sources - including product registration details, submission planning data, and health authority commitment status. The project aim was to provide a real-time dashboard to accelerate and improve decision-making - at the time a highly manual and time-consuming process.

However, the project quickly hit a problem. There was no existing data governance, meaning no watertight means of knowing whether the data was reliable, complete and up to date. As the teams involved began to connect the data, they soon realized they couldn’t trust it. Without any assurance of the quality, integrity and credibility of the data, the data warehouse would be worthless as a business asset. As a result, the project had to be re-scoped: what started out as an ambitious data warehouse initiative became a data quality strategy and clean-up project.

Unfortunately, such scenarios are all too common. Life sciences companies, in common with organizations across other industries, are aware of the huge potential of emerging technologies, including artificial intelligence/machine learning (AI/ML), for rapid data analysis, complex trending and scenario analysis, and transforming process delivery through intelligent workflow automation. Yet, in their keenness to harness these options, many companies try to run before they can walk, not realizing that the potential of AI/ML is wholly dependent on the credibility of the data available.

Having trusted data is a substantial and essential first step in delivering AI/automation-based innovation and operational transformation. What’s more, this is not a one-time undertaking. Success requires an ongoing ‘data quality sustainability’ program: companies need to continuously check, review and enhance the quality, integrity and completeness of the data sources that key processes rely on.

So what might this look like? What are the critical considerations and steps companies must take before they can progress to the interesting part – the application of AI to drive new insights and facilitate transformational process change?

Here are five critical elements companies must have in place before they can attempt to become smarter in their use of data.

Assigning Dedicated Roles and Responsibility Around Data Quality

Unless organizations assign clear and precise responsibility for ensuring consistent data quality, the integrity and reliability of the information available in the systems will suffer. Although this does not need to be a full-time undertaking, having someone whose remit clearly includes maintaining the integrity and value of data is the only way to ensure that any future activities drawing on these sources can be relied upon, and will stand up under regulatory scrutiny.

A 2018 study of Regulatory Information Management by Gens & Associates,1 which polled respondents from 72 companies internationally about their associated plans and practices, found that confidence in product registration, submission forecasting, and regulatory intelligence data quality was not high. When ‘confidence’ is low or moderate, organizations spend considerable time ‘verifying’ and remediating this information, with a direct negative impact on productivity.

Building confidence must start with imposing rules and procedures around data entry, to ensure there is no room for individual interpretation of the data requirements. Ongoing oversight of data quality is critical too, to ensure human errors do not accumulate over time, eroding confidence in system data.

Data quality sustainability should be an organization-wide concern, necessitating a culture of quality and clear accountability for this as part of people’s roles - as appropriate. Allocated responsibilities should ideally include:

Quality control analysis. Someone who regularly reviews the data for errors - for example, sampling registration data to see how accurate and complete it is. This role typically includes establishing quality control routines, managing predetermined metrics, and reporting weekly or monthly on conformance (or not) to naming conventions; workflows that are stuck; incomplete data; and how different regions are performing against agreed data-related KPIs (a minimal sketch of such a routine follows this list).

Data scientist. Someone who works with the data, connecting it with other sources or activities (e.g. linking the company’s regulatory information management (RIM) system into clinical or ERP systems), with the aim of enabling something greater than the sum of the parts – such as ‘big picture’ analytics. This person would play an important role in the design and proofs of concept for a data warehouse or AI project, with an eye towards discovering any data quality issues as part of the integration/data consolidation activity. Data quality needs to be closely assessed and reported on before anything more ambitious is done with the combined information, and before rogue data fields from one system can contaminate another.

Chief data officer. With a strategic overview across key company data sources, this person is responsible for ensuring that enterprise information assets globally, including enterprise resource planning (ERP), RIM and Safety systems, have the necessary governance, standards and investments to ensure the data they contain is reliable, accurate and complete – and is monitored and maintained consistently over time.
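
By way of illustration, below is a minimal sketch of the kind of sampling routine the quality control analysis role might run. The field names, the naming convention and the sample records are hypothetical assumptions for this sketch, not drawn from any particular RIM system.

```python
import re
from collections import defaultdict

# Hypothetical illustration: required fields and naming convention for sampled
# registration records. These are assumptions, not an actual RIM schema.
REQUIRED_FIELDS = ["product_code", "country", "registration_status", "approval_date"]
NAME_PATTERN = re.compile(r"^[A-Z]{3}-\d{4}-[A-Z]{2}$")  # e.g. ABC-0123-US

def review_sample(records):
    """Check a sample of registration records for completeness and naming conformance."""
    findings = defaultdict(list)
    for rec in records:
        region = rec.get("region", "unknown")
        missing = [f for f in REQUIRED_FIELDS if not rec.get(f)]
        if missing:
            findings[region].append(("incomplete", rec.get("product_code", "?"), missing))
        code = rec.get("product_code", "")
        if code and not NAME_PATTERN.match(code):
            findings[region].append(("naming", code, "does not match agreed convention"))
    return findings

def report(findings, sample_size):
    """Print a simple weekly/monthly conformance summary by region."""
    total_issues = sum(len(v) for v in findings.values())
    print(f"Sampled {sample_size} records; {total_issues} findings")
    for region, issues in sorted(findings.items()):
        print(f"  {region}: {len(issues)} findings")
        for kind, ref, detail in issues:
            print(f"    [{kind}] {ref}: {detail}")

if __name__ == "__main__":
    sample = [
        {"product_code": "ABC-0001-US", "country": "US", "region": "NA",
         "registration_status": "Approved", "approval_date": "2018-05-01"},
        {"product_code": "abc_2", "country": "DE", "region": "EU",
         "registration_status": "", "approval_date": None},
    ]
    report(review_sample(sample), len(sample))
```

In practice such a routine would draw a random sample from the live system and feed the results into the agreed weekly or monthly conformance report, rather than printing to the console.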

Quality Control Routine

To steadily build confidence and trust in data, it is important to set down good habits and build them into everyday processes. By putting the right data hygiene practices in place, companies can avoid the high costs and delays of data remediation exercises, which can run into millions of dollars. Spending just a fraction of that amount on embedding good practice and dedicated resources pays dividends in the long run.


Operationalizing data quality standards is important. Standards spanning naming conventions, links between data and content (e.g. a registration status and its approval letter), and data completeness guidelines need to be applied consistently on a global basis. System automation can only take quality so far; the accuracy and timeliness of data entry are critical. As data checks are performed to verify the quality of data and adherence to the agreed global standards, findings should be reported regularly to all key stakeholders.
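
As a simple illustration of checking links between data and content, the sketch below flags registrations recorded as approved that have no approval letter attached. The record structure, field names and status values are assumptions made for the example.

```python
# Hypothetical sketch: verify that every registration recorded as 'Approved'
# has an approval letter linked to it. Structure and values are illustrative.

def missing_approval_letters(registrations, documents):
    """Return registrations marked 'Approved' with no linked approval letter."""
    letters_by_registration = {
        d["registration_id"] for d in documents if d.get("type") == "approval_letter"
    }
    return [
        r for r in registrations
        if r.get("status") == "Approved" and r["id"] not in letters_by_registration
    ]

registrations = [
    {"id": "REG-001", "status": "Approved"},
    {"id": "REG-002", "status": "Approved"},
    {"id": "REG-003", "status": "Submitted"},
]
documents = [{"registration_id": "REG-001", "type": "approval_letter"}]

for reg in missing_approval_letters(registrations, documents):
    print(f"{reg['id']}: approved but no approval letter linked")
```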

It is also important to recognize that not all data quality errors are equal. Successful companies apply severity levels to flag issues for urgent action and to track error origins, so additional training or support can be provided where needed. To inspire best practice and drive continuous improvement in data hygiene, making data-quality performance visible can be a useful motivator, drawing attention to where efforts to improve data quality are paying off. This is critical for our next point.
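
To make the idea of severity levels concrete, here is a minimal sketch that classifies findings and groups them by origin so the results can be made visible. The severity rules and finding fields are illustrative assumptions rather than an established scheme.

```python
# Hypothetical sketch: assign severity levels to data quality findings so that
# critical issues are flagged for urgent action and error origins can be tracked.
SEVERITY_RULES = {
    "missing_registration_status": "critical",  # blocks compliance reporting
    "naming_convention": "major",               # erodes searchability and trust
    "missing_optional_field": "minor",
}

def triage(findings):
    """Group findings by severity and by origin (team or region)."""
    by_severity, by_origin = {}, {}
    for f in findings:
        severity = SEVERITY_RULES.get(f["rule"], "minor")
        by_severity.setdefault(severity, []).append(f)
        by_origin.setdefault(f.get("origin", "unknown"), []).append(f)
    return by_severity, by_origin

findings = [
    {"rule": "missing_registration_status", "record": "REG-002", "origin": "EU"},
    {"rule": "naming_convention", "record": "abc_2", "origin": "EU"},
]
by_severity, by_origin = triage(findings)
print({s: len(v) for s, v in by_severity.items()})  # e.g. {'critical': 1, 'major': 1}
print({o: len(v) for o, v in by_origin.items()})    # e.g. {'EU': 2}
```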

Alignment with Recognition and Rewards Systems

Everyone likes to be appreciated for going the extra mile, so it is important to recognize the people, teams, countries and regions making the biggest contribution or achieving the biggest transformations in their data quality and upkeep. Recognition, via transparency, will continue to inspire good performance, accelerate improvements and bed in best practice, which can then be replicated across the global organization to achieve a state of continuous learning and improvement.

Knowing what good looks like, and establishing KPIs that can be measured against, are important too. Where people have had responsibility for data quality assigned to them as part of their roles and remits, it follows that they should be measured on their performance, with reviews forming part of job appraisals, and rewarded for visible improvements.

Creating a Mature and Disciplined Continuous Improvement Program

Gens & Associates’ 2018 research found that life sciences companies with a Regulatory continuous improvement program (CIP) have 15 percent higher data confidence levels, are 17 percent more likely to have achieved real-time information reporting, and have 21 percent higher efficiency ratings for key RIM capabilities.

Continuous improvement is both an organizational process and a mindset. It requires progress to be clearly measured and outcomes tied to business benefits. As the US management consultant Peter Drucker famously said, “If you can’t measure [something], you can’t improve it.” A successful CIP in Regulatory data management combines anecdotal evidence of the value that can be achieved with clear KPIs (cycle time, quality, volume, etc.) that teams can aim towards and be measured against.
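
As an illustration of how such KPIs might be tracked over time, the sketch below computes volume, average cycle time and error rate per reporting period. The record fields and figures are invented for the example.

```python
# Hypothetical sketch: simple continuous-improvement KPIs per reporting period,
# e.g. volume of records processed, average cycle time and errors per record.
from statistics import mean

periods = {
    "2018-Q3": [{"cycle_days": 12, "errors": 3}, {"cycle_days": 10, "errors": 1}],
    "2018-Q4": [{"cycle_days": 8, "errors": 1}, {"cycle_days": 7, "errors": 0}],
}

for period, records in periods.items():
    volume = len(records)
    avg_cycle = mean(r["cycle_days"] for r in records)
    error_rate = sum(r["errors"] for r in records) / volume
    print(f"{period}: volume={volume}, avg cycle={avg_cycle:.1f} days, "
          f"errors per record={error_rate:.1f}")
```

Tracking even a handful of such figures period over period makes it possible to tie incremental improvements back to measurable business benefits.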

At its core, continuous improvement is a learning process that requires experimentation with incremental improvements. Over time, many small improvements add up to a high-performing organization. We recommend collating ideas from across the organization, performing root-cause analysis, and agreeing KPIs that help people focus on the main priorities for change.

Establishing good governance, and measuring for and reporting on improvements and net gains and how these were achieved (what resources were allocated, what changes were made, and what impact this has had), are critical elements too.

Data Standards Management

Intensifying international regulatory and safety initiatives are resulting in whole rafts of new specifications about how data should be captured, categorized, formulated and applied - to create greater harmony in information handling, and comparable product insights within organizations and across global markets.

Too often today, data is not aligned and standards vary or simply do not exist. The result is the right hand doesn’t know what the left is doing: ask representatives from Regulatory, Pharmacovigilance, Supply Chain, and Quality how they define a ‘product’ or how many products their company has, and no two answers will be the same.

The more companies keep to the same regimes and rules, the easier it becomes to trust data and what it says about companies and their products - as it becomes easier to view, compare, interrogate and understand who is doing what, and how, at a community level.

Evolving international standards such as ISO IDMP and SPOR (specifications around how information about substances, products and associated manufacturers/partners/systems must be captured and reported) mean that companies face having to add to and change the data they capture over time. To stay ahead of the curve, minimize the impact of changes, and avoid the risk of noncompliance, life sciences companies need a sustainable way to keep track of what’s coming, and a plan for adapting to and managing any new requirements.

Delegating this responsibility to those responsible for Quality is likely to be unrealistic, as there is so much detail to keep track of. Regulatory specialists, on the other hand, may understand the broad spectrum of needs but not how to optimize data preparation for the broader benefit of the business – for instance, harnessing data standardization drives under IDMP to simultaneously create a robust data bedrock for AI-based analytics and/or intelligent process automation. This may be where organizations have to seek external help with striking the optimal balance between regulatory duty and strategic ambition.

Future AI Potential Depends on Data Quality Sustainability Investment Today

From all of this, the important takeaway is that companies cannot assume they will be able to innovate and transform their operations with AI and process automation while relying on data that is not properly governed, both tactically and strategically.

Certainly, technology continues to advance in leaps and bounds, and the temptation will be strong to harness the latest digital opportunities as soon as possible to deliver greater operational efficiency. But rushing straight for the promised rewards invites risk. With emerging technology’s potential continuing to grow, it is incumbent on organizations to formalize their data quality governance and improve their ongoing data hygiene practices now, to ensure these assets will be capable of supporting their ambitions for AI-enabled process transformation when the time is right.

References

  1. World Class RIM Whitepaper: Connections to Supply Release, Product Change and QMS, Gens & Associates, 2018: https://gens-associates.com/2018/10/10/world-class-regulatory-information-management-whitepaper-connections-to-supply-release-product-change-and-qms/

Author Biographies

Steve Gens is the managing partner of Gens & Associates, a life sciences consulting firm specializing in strategic planning, RIM program development, industry benchmarking, and organizational performance. [email protected]. www.gens-associates.com.

Remco Munnik is Associate Director at Iperion Life Sciences Consultancy, a globally operating company paving the way to digital healthcare by supporting standardization and ensuring the right technology, systems and processes are in place to enable insightful business decision-making and innovation. [email protected]. www.iperion.com
