The International Coalition of Medicines Regulatory Authorities (“ICMRA”) has published a report on the use of artificial intelligence (“AI”) to develop medicines (the “AI Report”), which provides a series of recommendations on how regulators and stakeholders can address the challenges posed by AI. The ICMRA notes that there are numerous opportunities to apply AI to medicines development, but that AI poses a number of challenges to existing regulatory frameworks. The AI Report discusses these opportunities and challenges in detail based on several case studies, and provides a set of recommendations for implementation by the ICMRA and its member authorities, which include the European Medicines Agency (the “EMA”), the USA’s Food and Drug Administration and the World Health Organisation. Based on the AI Report, we expect to see an increased focus on adapting regulatory frameworks to deal with AI products going forward, at both international and national level.
ICMRA and the AI Report
The ICMRA is an informal group of leaders of medicines regulatory authorities from around the world. The EMA notes it was set up to provide “strategic coordination, advocacy and leadership,” with a view to adapting regulatory frameworks to “facilitate safe and timely access to innovative medicines.” Through horizon scanning, the ICMRA identified AI as an area that challenges existing regulatory frameworks.[1] Members of the ICMRA[2] therefore used case studies to “stress-test” their regulatory frameworks. The AI Report sets out the case studies and the ICMRA’s recommendations on how to adapt these frameworks to better cope with the challenges posed by AI.
What is Artificial Intelligence?
The AI Report explains that AI is a broad term used to “encompass iterative, ‘learning’ algorithms that utilise (big) data and high computing power to make interpretations, predictions or decisions in an autonomous or semi-autonomous fashion that could be seen to imitate intelligent behaviour”, while noting that there is currently no single, agreed definition of AI. AI systems can be standalone software or embedded in hardware devices. The AI Report states that “prevalent methods” used for AI systems include machine learning, deep learning, natural language processing and robotic process automation.
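For readers less familiar with the terminology, the following is a minimal, hypothetical Python sketch (not taken from the AI Report) of what a “learning” algorithm means in practice: a supervised machine-learning model is fitted to labelled historical data and then makes a prediction for a new, unseen case. All names and data below are invented for illustration.

```python
# Minimal, hypothetical illustration of supervised machine learning:
# an algorithm "learns" a decision rule from labelled historical data
# and then predicts outcomes for new, unseen cases.
from sklearn.linear_model import LogisticRegression

# Invented training data: each row is an observation with two measured
# variables, labelled 1 or 0 (e.g. responder / non-responder).
X_train = [[0.2, 1.1], [0.4, 0.9], [1.8, 0.3], [2.1, 0.2]]
y_train = [0, 0, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)         # the "learning" step

# Prediction for a new observation the model has never seen.
print(model.predict([[1.9, 0.4]]))  # e.g. [1]
```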
Use of AI in the Development of Medicines
The AI Report explains that organisations in the public and private sectors are increasingly using AI in the development of medicinal products and across all stages of a medicine’s lifecycle. The AI Report identifies uses in: (i) target profile identification and validation; (ii) compound screening and lead identification; (iii) pre-clinical and clinical development (including “annotation and analysis of clinical data in trials”); (iv) clinical use optimisation; (v) regulatory applications (e.g. for dossier preparations); and (vi) post-marketing requirements (e.g. pharmacovigilance and adverse event reporting).
However, the AI Report also highlights important limitations of AI (e.g., social bias leading to discrimination and/or misguided learning). Additionally, it can be difficult to predict the outputs of “self-learning” AI systems, or to describe to users with sufficient transparency how such systems will function. This can pose risks to patients.
Stress-Testing Regulatory Systems
The ICMRA members ran two hypothetical case studies to stress-test how their regulatory systems would cope with AI products and uses. These case studies involved:
(1) the use of AI in clinical medicine development (in this scenario using a Central Nervous System App, or “CNS App”). The CNS App uses data from electronic health records to build upon existing gold-standard diagnostic tools based on a variety of neurological variables, such as speed, movement and memory. It applies AI to identify associations between these variables, disease progression and treatment. A company would use it to select patients for clinical trials and to monitor their progression. Post-approval, the company would use the CNS App to monitor effectiveness, adherence and response; and
(2) the use of AI for pharmacovigilance literature searches and signal management. A company would deploy machine-learning methods trained on an existing bibliographic and signal training dataset, in the hope that they would improve the sensitivity and accuracy of signal detection and literature searches (a rough, hypothetical sketch of such a screening step follows this list).
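To make the second case study more concrete, below is a rough, purely illustrative Python sketch of how such a machine-learning literature screen might be assembled: a text classifier is trained on abstracts previously labelled by assessors and then used to rank new publications for human pharmacovigilance review. The data, labels and pipeline are hypothetical assumptions on our part, not the method described in the AI Report.

```python
# Hypothetical sketch of ML-assisted pharmacovigilance literature screening:
# a classifier trained on previously reviewed abstracts flags new ones
# that may describe a potential safety signal.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Invented training set: abstracts already labelled by assessors
# (1 = contains a potential safety signal, 0 = does not).
abstracts = [
    "Case report of severe hepatotoxicity after drug X exposure",
    "Pharmacokinetic modelling of drug X in healthy volunteers",
    "Three patients developed rash and fever while taking drug X",
    "Cost-effectiveness analysis of drug X versus standard of care",
]
labels = [1, 0, 1, 0]

screen = make_pipeline(TfidfVectorizer(), LogisticRegression())
screen.fit(abstracts, labels)

# New publications are ranked by predicted signal probability; those
# above a chosen threshold would be routed to human review.
new_abstracts = ["Unexpected renal injury reported in two users of drug X"]
probability = screen.predict_proba(new_abstracts)[0, 1]
print(f"signal probability: {probability:.2f}")
```

As the AI Report’s emphasis on sensitivity and accuracy suggests, any such screen would need to be validated against manually reviewed literature before it could supplement existing pharmacovigilance processes.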
The AI Report sets out the results of these case studies, “elucidating the regulatory challenges and use classifications” by region or regulator. The results indicate areas that may pose challenges or require further consideration, including:
- product classification (noting that there were limitations to the current EU medical device classification system);
- obtaining early advice from regulators;
- clinical development (noting that in the EU, the development of medical devices and medicinal products involves different regulators);
- obtaining scientific advice from regulators (advising on the utility of the AI approach and validity of the AI-generated data);
- assessment for marketing authorisations (highlighting that consideration needs to be given to hardware, firmware, software, governance of quality systems, data security and privacy, and the management of old and new data); and
- post-approval (noting that consideration needs to be given to change management plans, hardware and firmware, software, governance and auditing of data sets, updates, and post-marketing risk surveillance and vigilance).
Recommendations
Section 4 of the AI Report sets out detailed recommendations for implementation by the ICMRA and its members. These recommendations cover “General Recommendations for AI,” “Recommendations related to case study AI in Medicine Development, Clinical Trials and Use – Central Nervous System App using AI,” and “Recommendations related to case study AI in pharmacovigilance.” These also include some specific “Recommendations for the EU.”
The EMA identifies three of the main findings and recommendations in its news update on the AI Report as:
- “Regulators may need to apply a risk-based approach to assessing and regulating AI, which could be informed through exchange and collaboration in ICMRA;
- Sponsors, developers and pharmaceutical companies should establish strengthened governance structures to oversee algorithms and AI deployments that are closely linked to the benefit/risk of a medicinal product;
- Regulatory guidelines for AI development, validation and use with medicinal products should be developed in areas such as data provenance, reliability, transparency and understandability, pharmacovigilance, and real-world monitoring of patient functioning.” (emphasis added)
Most regulators and stakeholders are already aware of the challenges posed by AI products and have struggled with the lack of AI-specific guidance. As such, a number of ICMRA members are already carrying out activities concerning AI (e.g., the European Commission recently published a legislative proposal for a Regulation on Artificial Intelligence; see our previous blog post here).
Following the AI Report, we expect an increased focus on developing regulatory frameworks (either by updating legislation or through developing guidance) to cater to the specific challenges posed by AI in the context of medicines development.
[1] The AI Report states that through horizon scanning members have identified three “challenging topics” so far. These are 3D printing, gene editing and AI.
[2] The following members of the ICMRA were involved in the AI Report: the Italian Medicines Agency (AIFA), the Danish Medicines Agency (DKMA), the European Medicines Agency (EMA), the USA’s Food and Drug Administration (FDA), Health Canada (HC), the Irish Health Products Regulatory Authority (HPRA), Swissmedic and the World Health Organisation (WHO).