
Cortex - Life Sciences Insights

3 minute read

European Medicines Agency opens dialogue on use of AI in pharmaceutical life cycle

To open a dialogue with developers, academics, and regulators, the European Medicines Agency (EMA) recently published a draft Reflection Paper on the use of Artificial Intelligence in the medicinal product life cycle. The Reflection Paper follows the US FDA’s similar May 2023 discussion paper, Using Artificial Intelligence and Machine Learning in the Development of Drug and Biological Products.

The Reflection Paper is open for consultation until December 31, 2023, after which the EMA will consider the feedback and finalize the document. In addition, we can expect the EMA to provide separate guidance on risk management and to update existing guidance to account for AI/ML-specific issues.

Below is a summary of the main areas of concern in the Reflection Paper.

Considerations for use of AI/ML

  • The Reflection Paper makes clear that marketing authorization applicants and marketing authorization holders shoulder the responsibility for ensuring that any AI/ML tools they use are “fit for purpose and are in line with ethical, technical, scientific, and regulatory standards as described in GxP standards and current EMA scientific guidelines.”
  • The EMA sets out a “risk-based approach for development, deployment and performance monitoring of AI and ML tools,” where the level of risk is determined by a variety of factors, including the technology itself, the circumstances of its use, the degree of influence it exerts, and the stage of the product life cycle.
  • Explainable AI is encouraged wherever possible. Black box models may be acceptable if there is adequate substantiation for why transparent models are unsatisfactory.
  • Of note, the Reflection Paper states: “Models intended for high-risk settings (in particular, non-transparent models intended for use in late-stage clinical development) should be prospectively tested using newly acquired data.”
  • Some recommended approaches to using AI/ML technologies set out in the Reflection Paper include:
    • Carry out a regulatory impact and risk analysis (the higher the risk, the sooner engagement with regulators is recommended).
    • Employ traceable data acquisition methods that avoid the integration of bias.
    • Maintain independence of training, validation and test data sets (see the sketch after this list).
    • Implement generalizable and robust model development practices, following a risk-based approach.
    • Undertake performance assessments using appropriate metrics.
    • Follow the ethical principles defined in the guidelines for trustworthy AI, as presented in the Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment, and conduct a systematic impact analysis early on for each project.
    • Implement governance, data protection and data integrity measures.
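
To make the recommendations on data set independence and performance assessment more concrete, below is a minimal sketch of how a development team might keep training, validation and test data separate and report task-appropriate metrics. It assumes a scikit-learn-style workflow on synthetic data; the Reflection Paper does not prescribe any particular tooling, and every name, threshold and parameter here is illustrative.

```python
# Illustrative only: the Reflection Paper names the practices (independent
# data sets, appropriate metrics) but not the tooling; scikit-learn and all
# values below are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, balanced_accuracy_score

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(1000, 20))    # stand-in feature matrix
y = rng.integers(0, 2, size=1000)  # stand-in binary labels

# Hold out the test set first, then carve a validation set out of the
# remainder, so the test set never influences model or threshold choices.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_train, y_train, test_size=0.25, stratify=y_train, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Tune and select on the validation set; touch the test set only once,
# for the final performance assessment.
val_auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
test_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
test_bacc = balanced_accuracy_score(y_test, model.predict(X_test))
print(f"validation AUC={val_auc:.3f}, test AUC={test_auc:.3f}, "
      f"test balanced accuracy={test_bacc:.3f}")
```

Holding out the test set before any tuning mirrors the independence the list calls for: the final assessment runs on data that played no role in model development.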

AI in the life cycle of medicinal products

The Reflection Paper considers various use cases for AI, including some commentary on working principles and indications of risk:

  • Drug discovery: use of AI in drug discovery may be low risk where non-optimal performance primarily affects the sponsor, but regulatory risk increases where results contribute to the evidence presented for regulatory review.
  • Non-clinical development: use of AI/ML in non-clinical development should follow Good Laboratory Practice (GLP), and standard operating procedures (SOPs) should extend to cover AI/ML.
  • Clinical trials: AI/ML used in the context of clinical trials must comply with the ICH E6 guideline for good clinical practice (GCP). If a model is generated for clinical trial purposes, “the full model architecture, logs from modelling, validation and testing, training data and description of the data processing pipeline” are likely part of the clinical trial data or trial protocol dossier and so should be made available for assessment at the time of marketing authorization or clinical trial application. Risks involved in using AI/ML increase from early-phase to pivotal clinical trials.
  • Precision medicine: EMA considers the use of AI/ML in individualizing treatment (including “patient selection, dosing, de novo design of product variants”) as high-risk from a medicine regulation perspective, related to both patient risk and level of regulatory impact.
  • Product information: EMA recognizes AI may be used to draft, compile, translate or review information documents, but expects use of such technologies only under “close human supervision.”
  • Manufacturing: use of AI in this context must comply with relevant quality management principles.
  • Post-authorization phase: marketing authorization holders must “validate, monitor and document model performance and include AI/ML operations in the pharmacovigilance system, to mitigate risks related to all algorithms and models used,” and use of AI in a post-authorization study should be discussed within a regulatory procedure.

For more information about the Reflection Paper and the use of AI in the pharmaceutical space, please contact your DLA relationship partner or the authors of this alert.

Tags

artificial intelligence, regulation, healthtech