Last week I published an article on the Inside Medical Devices Blog, discussing eight data security principles that companies participating in the Precision Medicine Initiative should aim to meet.  The Administration’s guidance document additionally recommends a basic framework that organizations collecting, storing, and sharing patient information should adopt as current best practices.  The full post can be read here.

Yesterday, the Senate Health, Education, Labor and Pensions (HELP) Committee held a final mark-up of legislation comprising the Committee’s counterpart to the House-passed 21st Century Cures Act.  The HELP Committee approved five bills, including S. 2713, to advance the development of “precision medicine” through research and data sharing.  We have reported on the President’s precision medicine initiative in other posts available here.

In a 107-page report, released last week, the White House set forth its plan to create and manage a database containing 1 million or more Americans’ medical records in furtherance of the Precision Medicine Initiative. As announced by President Obama during his 2015 State of the Union Address, the Precision Medicine Initiative was launched “to bring us closer to curing diseases like cancer and diabetes, and give all of us access to the personalized information we need to keep ourselves and our families healthier.”

To achieve this goal, the Precision Medicine Initiative Cohort Program (“PMI-CP”) will build a “research cohort” of at least 1 million volunteers, which will provide the “platform” for improved understanding and knowledge of precision medicine approaches. As explained in the report, “[p]recision medicine is an approach to disease treatment and prevention that seeks to maximize effectiveness by taking into account individual variability in genes, environment, and lifestyle.”

As we described in a post last month, in the State of the Union the President announced a new “Precision Medicine Initiative” to foster the development and delivery of personalized treatments for illnesses such as cancer and diabetes.  The FY 2016 budget released by the President yesterday includes $215 million in funding to launch the Precision Medicine Initiative.


This year’s State of the Union address included little explicit discussion of health IT, but did highlight two areas that could have significant implications for health IT: the President announced a new “Precision Medicine Initiative” and also urged Congress to pass legislation to combat cyber attacks and prevent identity theft.

While the address was scant on detail, the President’s Precision Medicine Initiative appears intended to foster the development and delivery of personalized treatments for illnesses such as cancer and diabetes.  In a blog post late Wednesday afternoon, the White House noted that precision medicine “takes into account individual differences in people’s genes, environments, and lifestyles, making it possible to design highly effective, targeted treatments for cancer and other diseases.”  Echoing the President’s speech, the post cited a new treatment for cystic fibrosis (CF) that may dramatically extend longevity for those CF patients carrying a rare genetic mutation.  Similarly, precision medicine may allow physicians to optimize treatments for cancer and other illnesses based on a patient’s genetic traits.  Health information technology may be key in understanding and applying factors that can be used to tailor such treatments.

The President also called on Congress to pass cybersecurity legislation to guard against cyber attacks and protect privacy.  The President began to release details of the proposal last week; the proposed legislation would increase information sharing both within the private sector and between government and the private sector; expand criminal penalties and law enforcement authority related to cyber attacks and identity theft; and impose a single, federal data breach reporting standard.  Cybersecurity has become an increasing concern for health care providers, though stakeholders appear divided over the need for additional legislation.

Hot on the heels of recent announcements from the U.S. Food and Drug Administration (see our prior blogs here), the European Medicines Agency (“EMA”) has joined the conversation on the use of Artificial Intelligence (“AI”) and Machine Learning (“ML”) technologies in the medicinal product lifecycle.

AI and ML have the potential to enhance every stage of the medicinal product lifecycle, from drug discovery through clinical development, manufacturing and post-market pharmacovigilance.  These technologies can display intelligent behaviour and analyse huge amounts of data, and they are extremely flexible because they can be trained on data rather than explicitly programmed.  When used correctly, AI and ML can “effectively support the acquisition, transformation, analysis, and interpretation of data within the medicinal product lifecycle.”

However, the nature of these technologies also leads to certain risks.  Importantly, there can be a lack of transparency in the models.  Also, the data-driven approach means they can be prone to bias.  The EMA has therefore published a draft “Reflection paper on use of Artificial Intelligence (AI) in medicinal product lifecycle” (the “Draft Reflection Paper”), which is open to consultation until 31 December 2023.  The EMA sees the Draft Reflection Paper as a way to open “a dialogue with developers, academics, and other regulators.” 

What does the Draft Reflection Paper cover?

The Draft Reflection Paper sets out the EMA’s current thinking on the use of AI to “support the safe and effective development, regulation and use of … medicines.”  It applies primarily to human medicines, noting that while similar principles apply to veterinary medicines, specific reflections and guidance are needed for the veterinary space.

The purpose of the Draft Reflection Paper is to identify uses of AI/ML that fall within the remit of the EMA and National Competent Authorities.  This obviously includes the use of AI in the medicinal product lifecycle, but also extends to the use of medical devices with AI/ML technology that are used to generate evidence to support an EU marketing authorisation (i.e., used within the context of clinical trials or combined with the use of a medicinal product).

Use of AI/ML in the medicines lifecycle

The EMA highlights as a “key principle” that marketing authorisation applicants (“Applicants”) and marketing authorisation holders (“MAHs”) will bear responsibility for ensuring that any AI/ML they use is “fit for purpose and are in line with ethical, technical, scientific, and regulatory standards as described in GxP standards and current EMA scientific guidelines.”

In summary, the Draft Reflection Paper requires that Applicants take a “risk-based approach for development, deployment and performance monitoring of AI and ML tools.”  The degree of risk will be determined by a number of factors, including: the AI technology itself; the context of use; the degree of influence of the AI/ML technology; and the stage of lifecycle of the medicinal product. 

The Draft Reflection Paper considers use of AI/ML at different stages along the product lifecycle and sets out principles and an indication of risk of applying AI/ML at each such stage:

  • Drug discovery — the EMA acknowledges that the use of AI/ML in drug discovery may be low risk from a regulatory perspective, “as the risk of non-optimal performance often mainly affects the sponsor.”  However, if results contribute to the total body of evidence presented for regulatory review then the regulatory risk increases.
  • Non-clinical development — AI/ML (e.g., “AI/ML modelling approaches to replace, reduce, and refine the use of animals”) should follow Good Laboratory Practice (“GLP”), where applicable.  Applicants should consider the guidance in Application of GLP Principles to Computerised Systems and GLP Data Integrity, and their SOPs should cover AI/ML.
  • Clinical trials — AI/ML models (for example, that support selection of patients based on disease characteristics or clinical parameters) must comply with ICH GCP.  The regulatory risk for use of AI/ML increases from early stage to pivotal clinical trials.  Where models are generated for clinical trials, it is likely they will be considered part of the clinical trial data or trial protocol dossier and the models must be made available for regulators to assess at the time of marketing authorisation or clinical trial application. Where data collected/generated with AI/ML may impact the regulatory assessment of a medicine, the EMA recommends early regulatory interaction.
  • Precision medicine — the EMA considers the use of AI/ML in individualizing treatment (e.g., patient selection, dosing, de novo design of product variants) as high-risk from a medicines regulation perspective. The EMA recommends “special care … in defining what constitutes a change in posology (requiring a regulatory evaluation before implementation), to provide guidance that the prescribers can critically apprehend, and include fall-back treatment strategies in cases of technical failure.”
  • Product information — AI/ML might be used to draft, compile, translate or review information documents.  Recognizing the risk of hallucinations (plausible but erroneous output) by generative language models, the EMA expects use of such technologies only under “close human supervision.”
  • Manufacturing — use of AI/ML in drug manufacturing is expected to increase in the future and the EMA notes that this must comply with relevant quality management principles.
  • Post-authorization phase — AI/ML has the potential to support post-authorization safety and efficacy studies in human medicines, as well as pharmacovigilance activities such as adverse event report management and signal detection.  The MAH must “validate, monitor and document model performance and include AI/ML operations in the pharmacovigilance system, to mitigate risks related to all algorithms and models used.”
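By way of illustration, signal detection over spontaneous adverse event reports is commonly screened with disproportionality measures such as the proportional reporting ratio (PRR).  The sketch below is not drawn from the Draft Reflection Paper, and all report counts are hypothetical:

```python
# Minimal sketch of a disproportionality screen using the proportional
# reporting ratio (PRR).  All report counts are hypothetical; real
# pharmacovigilance pipelines layer statistical and clinical review
# on top of this kind of first-pass screen.

def prr(a: int, b: int, c: int, d: int) -> float:
    """PRR from a 2x2 contingency table of spontaneous reports:
    a = reports for the drug mentioning the event of interest
    b = reports for the drug mentioning other events
    c = reports for all other drugs mentioning the event
    d = reports for all other drugs mentioning other events
    """
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts: 20 of 100 reports for the drug mention the event,
# versus 100 of 9,900 reports for all other drugs.
value = prr(a=20, b=80, c=100, d=9800)
print(f"PRR = {value:.1f}")  # PRR = 19.8
```

A commonly cited screening heuristic (Evans’ criteria) flags a potential signal when PRR ≥ 2 with at least three reports and a supporting chi-squared statistic; it is a screen for further review, not a regulatory determination.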

Considerations for use of AI/ML

The Draft Reflection Paper sets out detailed measures that Applicants can take when using AI/ML technologies.  Some key points include:

  • Interacting with regulators: Applicants should carry out a regulatory impact and risk analysis.  The higher the regulatory impact or risk associated with the use of AI/ML technologies, the sooner the EMA recommends the Applicant engages with regulators to seek scientific advice.
  • Technical considerations:
    • Data acquisition: Applicants should take active measures to avoid introducing bias into AI/ML models and should document the source of data and the process of acquisition in a traceable manner in line with GxP.
    • Training, validation and test data: the EMA discusses validation of models, a concept that importantly differs from “validation” in the field of medicines.
    • Model development: the EMA encourages development and use of generalizable and robust models.
    • Performance Assessments: the Paper highlights the importance of selecting the correct metrics for performance assessments.
    • Interpretability and explainability: although transparent models are preferred, the EMA states that a “black box” model may be acceptable if developers can substantiate why transparent models are unsatisfactory.  The EMA encourages use of methods within the field of explainable AI wherever possible.
    • Model deployment: a risk-based approach is required for model deployment.
  • Ethical Principles: developers should follow basic ethical principles defined in the guidelines for trustworthy AI and presented in the Assessment List for Trustworthy Artificial Intelligence for self-assessment (ALTAI). They should also take a “human-centric” approach to all development and deployment of AI/ML.
  • Governance, Data Protection and Integrity: Applicants and MAHs also need to consider and apply governance, data protection and data integrity principles.
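To make the performance-assessment point concrete, the sketch below (not drawn from the Draft Reflection Paper; the test set and model are hypothetical) shows how a headline metric such as accuracy can look strong on an imbalanced held-out test set while a class-balanced metric exposes a model that ignores the minority class:

```python
# Minimal sketch: why metric choice matters in performance assessment.
# The held-out test set and the "model" are hypothetical illustrations.

# Hypothetical test set of (feature, label) pairs:
# 18 negatives (label 0) and 2 positives (label 1).
test = [(i, 0) for i in range(18)] + [(i, 1) for i in range(18, 20)]

def majority_model(x):
    """A degenerate model that always predicts the majority class."""
    return 0

def accuracy(pairs, model):
    return sum(model(x) == y for x, y in pairs) / len(pairs)

def balanced_accuracy(pairs, model):
    """Mean of per-class recall -- penalizes ignoring the minority class."""
    recalls = []
    for cls in sorted({y for _, y in pairs}):
        members = [(x, y) for x, y in pairs if y == cls]
        recalls.append(sum(model(x) == y for x, y in members) / len(members))
    return sum(recalls) / len(recalls)

print(f"accuracy          = {accuracy(test, majority_model):.2f}")  # 0.90
print(f"balanced accuracy = {balanced_accuracy(test, majority_model):.2f}")  # 0.50
```

A model that never detects the positive class scores 90% accuracy here but only 50% balanced accuracy, which is why a metric must be chosen to match the context of use rather than to flatter the model.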

Next Steps

The EMA will finalize the Draft Reflection Paper following the end of the consultation period.  It also intends to provide additional guidance on risk management and may update existing guidance to take into account the specific issues that AI/ML pose.

Given that the Draft Reflection Paper puts the onus on Applicants and MAHs to ensure that the algorithms, models, datasets, and other tools they use are compliant, biopharma companies considering the use of AI/ML should watch this space and keep up to date with developments.

Bloomberg BNA reports (subscription required) that FDA will recognize a regulatory category called “complementary diagnostics” for tests that provide additional information about how a drug might be used, but that are distinct from “companion diagnostics,” which are essential for the safe and effective use of a drug.