Status of AI Regulation in Drug Development

By: Alexandra P. Moylan, Robert A. Wells, and Michael J. Halaiko, Baker Donelson

Use of artificial intelligence (AI) and machine learning (ML) across nonclinical, clinical, postmarket, and manufacturing activities has accelerated markedly. Regulators are now moving quickly to establish risk-based expectations for when and how AI may support safety, effectiveness, and quality determinations. Drug developers, along with their contract partners (CROs, CMOs, CDMOs, and other vendors and consultants in the drug development ecosystem), should assume heightened scrutiny, build documentation and traceability into AI development and deployment, and prepare for increased engagement with the FDA on AI-enabled approaches throughout the product lifecycle.

Increased Use of AI in Drug Development

The FDA has confirmed that the Center for Drug Evaluation and Research (CDER) is seeing a significant increase in drug application submissions that incorporate AI components. This trend spans the full product lifecycle: nonclinical, clinical, postmarketing, and manufacturing. Given the long timelines and high costs of drug development, with timelines often approaching a decade and costs often exceeding $1 billion, adoption of technologies that improve efficiency and resilience has been rapid.

Evolving Regulatory Frameworks

In the EU, AI-enabled drug development now sits at the intersection of the EU AI Act and EU data-privacy law (GDPR), creating a combined “trustworthy AI plus lawful data” compliance stack that sponsors, CROs, and tech vendors must design for from the outset. The EU AI Act establishes a risk-based framework that categorizes AI systems from “unacceptable risk,” which are prohibited in the EU, to “minimal risk,” which carry no mandatory obligations under the Act. AI used in regulated life-sciences workflows can often fall within the Act’s “high-risk” category, subjecting those systems to requirements for risk management, data and data-governance controls, technical documentation, logging and recordkeeping, transparency to users, human oversight, and accuracy, robustness, and cybersecurity.

In parallel, GDPR continues to govern the clinical and real-world data that fuel these systems. Trial datasets and linked biomarker/phenotype information are commonly personal data, and often special-category “data concerning health”: processing is prohibited absent an Article 9(2) condition, must also satisfy an Article 6 lawful basis, and remains constrained by core processing principles (purpose limitation, data minimization, storage limitation, integrity/confidentiality, and others). This pushes EU drug-development programs toward “privacy and AI governance by design”: embedding GDPR Article 25 privacy-by-design/default measures into model development and deployment (e.g., limiting identifiability, access, retention, and downstream reuse) alongside AI Act-style lifecycle controls, an approach that aligns with the EMA’s emphasis on principles for safe and effective AI use across the drug product lifecycle.
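To make the “privacy by design” point concrete, the sketch below shows one way data minimization, pseudonymization, and storage limitation might be applied before records reach a model-training pipeline. It is a minimal illustration under stated assumptions, not a compliance implementation: the field names, feature allow-list, and retention window are hypothetical, and pseudonymized data generally remains personal data under GDPR.

    # Illustrative sketch only: Article 25-style minimization applied before
    # records reach a training pipeline. Field names, the feature allow-list,
    # and the retention window are hypothetical assumptions.
    import hashlib
    from datetime import datetime, timedelta, timezone

    ALLOWED_FEATURES = {"age_band", "biomarker_x", "outcome"}  # limit to the defined context of use
    RETENTION_WINDOW = timedelta(days=730)  # hypothetical storage-limitation policy

    def pseudonymize(subject_id: str, secret_salt: str) -> str:
        # Replace a direct identifier with a keyed hash. Pseudonymized
        # data generally remains personal data under GDPR.
        return hashlib.sha256((secret_salt + subject_id).encode()).hexdigest()[:16]

    def minimize(record: dict, secret_salt: str) -> dict | None:
        # Exclude records past retention, then keep only allow-listed features.
        collected = datetime.fromisoformat(record["collected_at"])  # expects ISO timestamp with timezone
        if datetime.now(timezone.utc) - collected > RETENTION_WINDOW:
            return None  # storage limitation: drop expired records
        out = {k: v for k, v in record.items() if k in ALLOWED_FEATURES}
        out["pseudo_id"] = pseudonymize(record["subject_id"], secret_salt)
        return out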

Like the EU, the FDA is taking a risk-based approach to AI governance in the life sciences. The FDA’s centers have articulated expectations for AI used to support regulatory decision-making in drug and biologic development and have emphasized early engagement, transparent documentation, and lifecycle oversight of models. Across submissions, the FDA reports a sharp rise in protocols and analyses that incorporate AI components; it expects sponsors to justify the context of use, articulate model risks, and demonstrate fit-for-purpose performance and controls.

The FDA’s January 2025 draft guidance, “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products,” established the Agency’s blueprint for using AI to support regulatory decisions concerning safety, effectiveness, or quality across the drug and biologics lifecycle. The guidance outlines a seven-step, risk-based credibility assessment for evaluating AI models, expecting sponsors, manufacturers, and other stakeholders to define the context of use, assess model risk, plan credibility activities, execute and document validation, and determine adequacy for the intended regulatory purpose. The FDA emphasizes early engagement with the Agency and robust documentation of model design, training data, validation, limitations, and governance. These themes reflect CDER/CBER’s experience with hundreds of submissions containing AI components and extensive public input.
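One practical way to read the draft guidance is as a documentation exercise: each model should carry a single traceable record from the question of interest through the adequacy determination. The sketch below is purely illustrative; the field names are our own shorthand for the steps the guidance describes, not FDA-mandated terminology, and the example values are hypothetical.

    # Sketch of a traceable credibility-assessment record. Field names are
    # our own shorthand for the guidance's steps, not FDA terminology.
    from dataclasses import dataclass, field

    @dataclass
    class CredibilityRecord:
        question_of_interest: str    # the decision the model helps answer
        context_of_use: str          # the model's defined role and scope
        model_risk: str              # driven by model influence and decision consequence
        credibility_plan: list[str]  # planned validation/credibility activities
        execution_evidence: list[str] = field(default_factory=list)  # links to protocols/reports
        limitations: list[str] = field(default_factory=list)
        adequate_for_context_of_use: bool | None = None  # final adequacy determination

    # Hypothetical example record
    record = CredibilityRecord(
        question_of_interest="Can in-process data support batch-release predictions?",
        context_of_use="Model output is one input among several; results are "
                       "confirmed by periodic offline testing.",
        model_risk="medium",
        credibility_plan=["held-out validation against release data",
                          "subgroup performance by manufacturing site"],
    )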

New FDA Guidance in January 2026 and Why It Matters

On January 14, 2026, the FDA, jointly with the European Medicines Agency (EMA), published “Guiding Principles of Good AI Practice in Drug Development,” a set of 10 high-level principles intended to harmonize expectations across regulators and inform future binding guidance. The principles are designed to anchor good practice across the AI lifecycle (design, data, model development, validation, deployment, monitoring, and communication) and include:

  1. Human-centric design: Alignment with ethical and human-centric values;
  2. Risk-based approach: Proportionate validation, risk mitigation, and oversight based on context of use;
  3. Adherence to standards: Compliance with relevant legal, ethical, technical, cybersecurity, and regulatory standards;
  4. Clear context of use: Defined role and scope for why AI is being used;
  5. Multidisciplinary expertise: Inclusion of appropriate subject matter experts;
  6. Data governance and documentation: Detailed, traceable, and verifiable documentation of data provenance and quality;
  7. Model design and development: Emphasis on transparency, reliability, generalizability, and robustness;
  8. Risk-based performance assessment: Evaluation of the complete system including human-AI interactions with fit-for-use data and metrics;
  9. Life-cycle management: Risk-based quality management systems to capture, assess, and address issues over time; and
  10. Clear, essential information: Plain language communications tailored to the intended audience.

Federal and State AI Policy Considerations

For drug developers, CROs, CMOs, CDMOs, and other vendors and consultants in the drug development ecosystem, recent federal AI policy now fits together as a relatively coherent, risk-based architecture that runs from the White House through the FDA, NIST, and the federal consumer protection agencies. Administration initiatives promote rapid AI adoption, regulatory sandboxes, and domain-specific efforts in sectors such as health care, while positioning agencies such as HHS, FDA, and NIST as coordinators of evaluation, standards, and best practices rather than sources of prescriptive technical regulations. The FDA’s draft guidance on AI used to support regulatory decision-making for drugs and biologics, together with the FDA/EMA “Guiding Principles of Good AI Practice in Drug Development,” establishes sector-specific expectations: sponsors should use AI in ways that are human-centric and ethical, clearly scoped to a defined context of use, grounded in high-quality data and robust validation, embedded in quality systems, and subject to lifecycle monitoring and change control in both development and manufacturing. These themes closely track the NIST AI Risk Management Framework, which organizes trustworthy AI into Govern, Map, Measure, and Manage functions and emphasizes legal and regulatory alignment, continuous monitoring, documentation, and iterative improvement.

Alongside this, the Federal Trade Commission (FTC) is asserting its role as the primary enforcer against unfair or deceptive AI-related practices. Overstated claims about what AI systems can do, undisclosed use of AI that materially affects consumers, and biased or opaque automated decision-making can all trigger liability under the FTC’s longstanding “unfair or deceptive acts or practices” authority. State attorneys general are pursuing similar theories under state consumer protection laws. For manufacturers and other drug development stakeholders, this means AI-enabled tools used in marketing, patient support programs, real-world evidence partnerships, or clinical trial recruitment must not only satisfy FDA expectations but also be described accurately, used transparently, and monitored for consumer-facing harms to avoid FTC scrutiny. External statements about AI in products, trials, or pharmacovigilance should be vetted against evidence, and marketing and investor communications should align with validated performance and limitations.

State regulators continue to expand protections around consumer health data outside HIPAA, creating obligations for transparent notices, consent, purpose limitation, individual rights, and security controls for datasets commonly used in AI development and analytics. This affects sponsors and vendors handling wearable data, patient-reported outcomes, real-world data, digital biomarkers, and de-identified datasets that could be re-linked. Programs should incorporate data minimization, differentiated access controls, suppression of sensitive attributes where feasible, and repeatable de-identification/anonymization consistent with intended data sharing and model reuse. Legislative activity remains high: according to the National Conference of State Legislatures, more than a thousand AI-related bills were introduced across the states in 2025, and this trend is likely to continue as AI use cases multiply.
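As one illustration of the re-linkage concern, the sketch below applies a k-anonymity-style small-cell suppression check before a dataset is shared or reused for modeling. The quasi-identifier columns and the k threshold are hypothetical assumptions; real thresholds are policy and statistical judgments, not fixed by any statute.

    # Illustrative small-cell suppression: flag quasi-identifier combinations
    # rare enough to support re-linkage. Columns and threshold are hypothetical.
    from collections import Counter

    QUASI_IDENTIFIERS = ("zip3", "age_band", "sex")
    K_THRESHOLD = 5  # a common heuristic, not a legal standard

    def suppress_rare_cells(rows: list[dict]) -> list[dict]:
        # Suppress quasi-identifiers for any record whose combination
        # appears fewer than K_THRESHOLD times in the dataset.
        counts = Counter(tuple(r[q] for q in QUASI_IDENTIFIERS) for r in rows)
        out = []
        for r in rows:
            key = tuple(r[q] for q in QUASI_IDENTIFIERS)
            if counts[key] < K_THRESHOLD:
                r = {**r, **{q: "SUPPRESSED" for q in QUASI_IDENTIFIERS}}
            out.append(r)
        return out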

In response to this patchwork of state AI regulation, on December 11, 2025, President Trump issued an Executive Order titled “Ensuring a National Policy Framework for Artificial Intelligence,” which seeks to establish a “minimally burdensome national standard” for AI and to preempt state AI laws the administration considers excessive. The order directs multiple federal actions, including creating a Department of Justice task force to litigate against state AI measures, conditioning certain federal funds on states’ regulatory posture, and initiating potential federal standards through the FCC and FTC. The EO raises several threshold legal questions that may be subject to judicial review, including the scope of federal preemption and of executive authority. While this battle over regulatory authority plays out, companies in the drug development ecosystem should continue to follow FDA guidance, track developments closely, and prepare for overlapping compliance obligations pending judicial resolution.

Implications for the Outsourcing Ecosystem (CMOs/CDMOs/CROs)

Outsourcing partners increasingly develop, validate, or operate AI components that feed into regulatory submissions or GxP decisions. To satisfy the FDA’s expectations, sponsors should ensure that contracts and quality agreements require:

  • Equivalent AI governance: Documented context of use, model risk classification, validation protocols, and change control aligned to the sponsor’s procedures.
  • Data governance and privacy: Clear data rights, provenance documentation, retention and deletion obligations, and restrictions on model reuse or transfer learning across clients.
  • Transparency and explainability: Access to model documentation, training data characteristics, and performance metrics, including subgroup analyses and limitations (see the sketch following this list).
  • Monitoring and incident handling: Defined responsibilities for performance surveillance, triggers for retraining, and notification obligations for adverse incidents, cybersecurity events, and quality impacts.
  • Auditability: Rights to review process documentation, training artifacts, and environments; readiness to provide FDA with credible evidence of system reliability and controls.
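The transparency item above is easiest to operationalize when “performance metrics, including subgroup analyses” is specified concretely in the quality agreement. The sketch below is a minimal, hypothetical example of the kind of per-subgroup summary a sponsor might require a vendor to deliver; the metric (accuracy) and the site-based grouping are illustrative choices only.

    # Minimal per-subgroup performance summary, so gaps are visible
    # rather than averaged away. Metric and grouping are illustrative.
    from collections import defaultdict

    def subgroup_accuracy(y_true, y_pred, groups):
        hits, totals = defaultdict(int), defaultdict(int)
        for t, p, g in zip(y_true, y_pred, groups):
            totals[g] += 1
            hits[g] += int(t == p)
        return {g: hits[g] / totals[g] for g in totals}

    # Hypothetical example: performance split by trial site
    print(subgroup_accuracy(
        y_true=[1, 0, 1, 1, 0, 1],
        y_pred=[1, 0, 0, 1, 0, 0],
        groups=["site_A", "site_A", "site_A", "site_B", "site_B", "site_B"],
    ))  # -> {'site_A': 0.67, 'site_B': 0.67} (rounded)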

Practical Actions for Drug Developers

Although the AI regulatory landscape remains fluid, drug developers and their partners can implement concrete steps now to remain on the cutting edge while mitigating risk:

  • Inventory and risk-rank AI use cases across nonclinical, clinical, manufacturing, and postmarket domains; designate those that inform regulatory conclusions or GxP decisions as higher risk and apply commensurate controls.
  • For each higher-risk AI system, define context of use, performance targets, datasets, validation plans, human oversight controls, and lifecycle monitoring (including dataset shift and model drift; see the drift-check sketch after this list), and document everything in a single traceable package.
  • Embed AI into existing data governance and privacy programs, with special attention to consumer health data. Align notices, consents, data subject rights, and retention schedules with how data are used for model training, validation, and sharing.
  • Update computerized systems and cybersecurity procedures to cover AI/ML operations, including access control, environment segregation, versioning, and incident response criteria that link cybersecurity events to quality events.
  • Enhance training for technical, clinical, and quality staff on AI validation, documentation, and oversight, including downstream interpreters of AI outputs.
  • Extend supplier management to AI: pre-qualification, contractual requirements for AI governance and security, performance and change reporting, and audit reach through to subcontractors.
  • Confirm that cyber and tech E&O insurance coverage addresses AI-related incidents, including data, system integrity, and business interruption impacts.
  • Engage early with the FDA on novel AI uses to de-risk assumptions about context of use, credibility activities, and submission expectations.

While these risk mitigation strategies are not exhaustive and enforcement approaches are evolving, they offer a foundation for manufacturers to confidently expand their use of AI in drug development to accelerate getting medications to people who need them most.

Conclusion

As AI capabilities scale, the FDA expects sponsors and their outsourcing partners to treat AI like any other regulated computerized system that can influence safety, effectiveness, or quality: define the role, document credibility, control the lifecycle, and monitor performance. Organizations that invest now in rigorous AI governance, cybersecurity, and supplier controls will be better positioned for efficient FDA interactions and resilient operations.

