On May 21, 2024, the UK Medicines and Healthcare products Regulatory Agency (“MHRA”) published a statement of policy intent for UK recognition of international regulatory approvals of certain medical devices (the “Statement”).  The Statement follows the Government response to the 2021 consultation on the future regulation of medical devices in the UK that details an intention to introduce alternative routes to market for medical devices, such as utilizing approvals from other countries and Medical Device Single Audit Program (“MDSAP”) certificates, in addition to the current UK Conformity Assessed (“UKCA”) marking process.

The MHRA has already taken similar steps in the medicines space, adopting a new International Recognition Procedure (“IRP”) in January 2024.

In relation to devices, the Statement applies to certain medical devices placed on the market in Great Britain.  For relevant devices, the MHRA proposes to recognize foreign approvals from regulators in Australia, Canada, the EU/EEA, and the USA (a narrower set of acceptable regulators than under the MHRA's IRP for medicines).  The Statement expressly excludes a number of medical devices from international recognition, including software as a medical device ("SaMD") (including AI as a medical device ("AIaMD")) and companion diagnostic products approved via the US 510(k) pathway (a route which relies on equivalence to a predicate device).

The proposed framework is a draft and the final version is expected to come into force in 2025 at the same time as future core regulations.  It also remains the government’s intention to introduce transitional arrangements for UKCA marked devices at the same time.

Continue Reading UK MHRA Announces Intention To Recognize Certain International Approvals For Certain Medical Devices

On Friday, April 26, 2024, the Federal Trade Commission (“FTC”) voted 3-2 to issue a final rule (the “final rule”) that expands the scope of the Health Breach Notification Rule (“HBNR”) to apply to health apps and similar technologies and broadens what constitutes a breach of security, among other updates.  We previously covered the proposed rule, which was issued on May 18, 2023.

Continue Reading FTC Issues Final Rule to Expand Scope of the Health Breach Notification Rule

On April 30, 2024, the UK Medicines and Healthcare products Regulatory Agency (“MHRA”) outlined its strategic approach (“Approach”) to artificial intelligence (“AI”).  The Approach is a response to the UK Government’s white paper: a pro-innovation approach to AI regulation and subsequent Secretary of State letter of 1 February 2024, and is the culmination of 12 months’ work by the MHRA to ensure the risks of AI are appropriately balanced with the potential transformative impact of AI in healthcare.

AI in Healthcare

AI has the potential to revolutionize the healthcare sector and improve health outcomes at every stage of healthcare provision – from preventative care through to diagnosis and treatment.  AI can help in research and development by strengthening outcomes of clinical trials, as well as being used to improve the clinical care of patients by personalizing care, improving diagnosis and treatment, enhancing the delivery of care and health system efficiency, and supplementing healthcare professionals’ knowledge, skills and competencies. 

Continue Reading MHRA Outlines New Strategic Approach to Artificial Intelligence

In March 2024, the EU lawmakers reached agreement on the European Health Data Space (EHDS).  Although the text has not yet been formally adopted by all the European institutions, a number of interesting points can already be highlighted.  This article focuses on the governance and enforcement of the EHDS; for an overview of the EHDS generally, see our first post in this series.

The final text of the EHDS was adopted by the European Parliament on 24 April 2024 and is expected to be formally adopted by the European Council in the coming months.

Continue Reading EHDS Series – 5: European Health Data Space Governance, Enforcement and Timelines

On April 22, 2024, the European Federation of Pharmaceutical Industries and Associations (“EFPIA”) issued a statement on the application of the AI Act in the medicinal product lifecycle. The EFPIA statement highlights that AI applications are likely to play an increasing role in the development and manufacture of medicines.  As drug development is already governed by a longstanding and detailed EU regulatory framework, EFPIA stresses that care should be taken to ensure that any rules on the use of AI are fit-for-purpose, adequately tailored, risk-based, and do not duplicate existing rules.  The statement sets forth five “considerations”:

Continue Reading EFPIA Issues Statement on Application of the AI Act in the Medicinal Product Lifecycle

In early March 2024, the EU lawmakers reached agreement on the European Health Data Space (EHDS).  For now, we only have a work-in-progress draft version of the text, but a number of interesting points can already be highlighted. This article focuses on the implications for “wellness applications” and medical devices; for an overview of the EHDS generally, see our first post in this series.

The final text of the EHDS was adopted by the European Parliament on 24 April 2024 and is expected to be formally adopted by the European Council in the coming months.

Continue Reading EHDS Series – 4: The European Health Data Space’s Implications for “Wellness Applications” and Medical Devices

In early March 2024, the EU lawmakers reached agreement on the European Health Data Space (EHDS).  For now, we only have a work-in-progress draft version of the text, but a number of interesting points can already be highlighted.  This article focuses on the obligations of data users; for an overview of the EHDS generally, see our first post in this series.

We expect the final text of the EHDS to be adopted by the European Parliament in April 2024 and by the EU Member States shortly thereafter.

Continue Reading EHDS Series – 3: The European Health Data Space from the Health Data User’s Perspective

In early March 2024, the EU lawmakers reached agreement on the European Health Data Space (EHDS).  For now, we only have a work-in-progress draft version of the text, but a number of interesting points can already be highlighted.  This article focuses on the obligations of data holders; for an overview of the EHDS generally, see our first post in this series.

We expect the final text of the EHDS to be adopted by the European Parliament in April 2024 and by the EU Member States shortly thereafter.

Continue Reading EHDS Series – 2: The European Health Data Space from the Health Data Holder’s Perspective

In early March 2024, the EU lawmakers reached agreement on the European Health Data Space (EHDS).  For now, we only have a work-in-progress draft version of the text, but a number of interesting points can already be highlighted.

We expect the final text of the EHDS to be adopted by the European Parliament in April 2024 and by the EU Member States shortly thereafter.

Continue Reading EHDS Series – 1: Five Key Take Aways on Secondary Use of Health Data

On March 15, 2024, FDA’s medical product centers – CBER, CDER, and CDRH – along with the Office of Combination Products (OCP) published a paper outlining their key areas of focus for the development and use of artificial intelligence (AI) across the medical product life cycle.  The paper, entitled “Artificial Intelligence & Medical Products: How CBER, CDER, CDRH, and OCP are Working Together,” is intended by the Agency to “provide greater transparency regarding how FDA’s medical product Centers are collaborating to safeguard public health while fostering responsible and ethical innovation.”  The FDA paper is the latest in a series of informal statements from the Agency about the use of AI in the discovery, development, manufacturing, and commercialization of medical products, as well as for medical devices that incorporate AI.  Here are five key takeaways from FDA’s recent paper.

  1. The Centers continue to emphasize a risk-based regulatory framework for AI that builds upon existing FDA initiatives.

Consistent with FDA’s longstanding approach to regulation of medical products, FDA’s paper recognizes the value of a risk-based approach for regulating AI that the Agency oversees.  The paper highlights how “AI management requires a risk-based regulatory framework built on robust principles, standards, best practices, and state-of-the-art regulatory science tools that can be applied across AI applications and be tailored to the relevant medical product” and, to the extent feasible, “can be applied across various medical products and uses within the health care delivery system.”

As part of this risk-based approach, the Centers also plan to leverage and continue building upon existing FDA initiatives for the evaluation and regulation of AI used in medical products, including FDA’s May 2023 Discussion Paper on Using Artificial Intelligence and Machine Learning in the Development of Drug and Biological Products, the Center for Drug Evaluation and Research (CDER) Framework for Regulatory Advanced Manufacturing Evaluation (FRAME) Initiative, and the Center for Devices & Radiological Health (CDRH) January 2021 AI/ML-Based Software as a Medical Device (SaMD) Action Plan.   

  2. FDA plans to release several AI guidance documents this year, providing an opportunity for engagement.

The paper notes that the Centers intend to develop policies that provide regulatory predictability and clarity for the use of AI, while also supporting innovation.  Planned FDA guidance documents include:

  • Draft guidance on life cycle management considerations and premarket submission recommendations for AI-enabled device software functions.  As background, in June 2023, FDA released a final guidance entitled “Content of Premarket Submissions for Device Software Functions.”  The title of the proposed draft guidance on CDRH’s guidance agenda suggests that the Agency’s premarket submission recommendations may differ for AI-enabled device software functions, and it is likely that the new draft guidance will directly address novel premarket submission issues raised by incorporating AI into device software functions.
  • Draft guidance on considerations for the use of AI to support regulatory decision-making for drugs and biological products.  The title of this planned draft guidance is similar to FDA’s August 2023 final guidance entitled “Considerations for the Use of Real-World Data and Real-World Evidence to Support Regulatory Decision-Making for Drug and Biological Products,” which focused on RWD/E and did not discuss AI.  The planned draft guidance on CDER’s guidance agenda may provide additional insights on the use of AI in RWE studies.  FDA also has previously given attention to internal infrastructure needed to assess regulatory submissions that include data from Digital Health Technologies (DHTs).  For example, in March 2023 the Agency issued a Framework for the Use of DHTs in Drug and Biological Product Development that stated FDA plans to “enhance its IT capabilities to support the review of DHT-generated data,” including by establishing “a secure cloud technology to enhance its infrastructure and analytics environment that will enable FDA to effectively receive, aggregate, store, and process large volumes of data.”  The new proposed draft guidance could build upon the themes outlined in this framework, with a specific focus on AI.
  • Final guidance on marketing submission recommendations for predetermined change control plans for AI-enabled medical device software functions.  FDA plans to finalize the Agency’s April 2023 draft guidance on predetermined change control plans (PCCPs).  PCCPs describe planned changes that may be made to a device that otherwise would require premarket review by the Agency, facilitating iterative improvements through modifications to an AI- or machine learning-enabled device while continuing to provide a reasonable assurance of device safety and effectiveness.  The final guidance likely will incorporate or address any feedback the Agency has received on the draft guidance and may also address real-world challenges the Agency has faced or “lessons learned” from reviewing submitted PCCPs to date.

The publication of these guidance documents will open the door for public comments and additional engagement opportunities, and life sciences and medical device companies should consider submitting comments. 

  3. Mitigating bias continues to be a front-burner issue.

Mitigating bias and discrimination continues to be top-of-mind at FDA.  The paper highlights several demonstration projects and initiatives the Centers plan to support in an effort to identify and reduce the risk of biases in AI tools, including:

  • Regulatory science efforts to develop methodology for evaluating AI algorithms, identifying and mitigating bias, and ensuring the robustness and resilience of AI algorithms to withstand changing clinical inputs and conditions.
  • Demonstration projects that (1) highlight different points where bias can be introduced in the AI development life cycle and how it can be addressed, including through risk management; and (2) consider health inequities associated with the use of AI in medical product development to promote equity and ensure data representativeness, leveraging ongoing diversity, equity, and inclusion efforts.
  • Best practices for documenting and ensuring that data used to train and test AI models are fit for use, including adequately representing the target population.
  • Considerations for evaluating the safe, responsible, and ethical use of AI in the medical product life cycle.

These actions align with the Agency’s overarching efforts to develop methodologies for identification and elimination of bias, as well as President Biden’s October 2023 AI Executive Order that called for federal guidance and resources on the incorporation of equity principles in AI-enabled technologies used in the health sector, the use of disaggregated data on affected populations and representative population data sets when developing new models, and the monitoring of algorithmic performance against discrimination and bias.

  4. The paper focuses on the total product life cycle.

The Centers intend to support various projects and initiatives centered around performance monitoring and ensuring reliability throughout the total product life cycle.  Specifically, the Centers intend to support:

  • Demonstration projects that support the ongoing monitoring of AI tools to ensure adherence to standards and that the tools maintain performance and reliability throughout their life cycle. 
  • A framework and strategy for quality assurance of AI-enabled tools or systems used in the medical product life cycle, which emphasize continued monitoring and mitigation of risks. 
  • Best practices for long-term safety and real-world performance monitoring of AI-enabled medical products.
  • Educational initiatives for regulatory bodies, health care professionals, patients, researchers, and industry as they navigate the safe and responsible use of AI in medical product development and in medical products.

Real-world performance monitoring and quality assurance throughout the total product life cycle have been hot topics for some time.  For example, President Biden’s AI Executive Order directed the formation of an AI Task Force to, in part, identify guidance and resources on long-term and real-world performance monitoring of AI technologies in the health sector, including “clinically relevant or significant modifications and performance across population groups, with a means to communicate product updates to regulators, developers, and users.”  Stakeholders have asked FDA for clarity on best practices for real-world performance monitoring for AI/ML-based software in the past, and FDA’s 2021 AI Action Plan stated that the Agency would support the piloting of real-world performance monitoring by working with stakeholders on a voluntary basis and developing frameworks for gathering and utilizing real-world performance metrics, as well as thresholds and performance evaluations for those metrics.  Additionally, FDA’s May 2023 AI Discussion Paper expressed the importance of evaluating AI/ML models over time to consider model risk and credibility, and solicited feedback on examples of best practices being used by stakeholders to monitor AI/ML models.  FDA’s collaborations with stakeholders on these efforts over the past years could inform future guidance.

  5. The paper emphasizes the importance of collaboration and international harmonization.

The paper highlights the importance of the Centers’ current collaboration with a variety of stakeholders, including developers, patient groups, academia, and global regulators, in cultivating a patient-centered regulatory approach that emphasizes collaboration and health equity.  The paper notes the Centers’ intent to continue fostering these collaborative partnerships, including by continuing to solicit input from interested parties on “critical aspects” of the use of AI in medical products such as transparency, explainability, governance, bias, cybersecurity, and quality assurance. 

Perhaps in an effort to facilitate collaboration with various stakeholders, the Director of FDA’s Digital Health Center of Excellence, Troy Tazbaz, recently joined the Board of Directors of the Coalition for Health AI (CHAI).  He joins Micky Tripathi, National Coordinator for Health Information Technology within the Department of Health and Human Services (HHS), and several other representatives from academia, industry, and medical centers.  Tazbaz and Tripathi also will serve on CHAI’s “Government Advisory Board” along with Melanie Fontes Rainer, director of the Office for Civil Rights within HHS, and several other representatives from the White House Office of Science and Technology Policy, the Centers for Disease Control and Prevention, the Centers for Medicare & Medicaid Services, the Veterans Health Administration, and the Advanced Research Projects Agency for Health.

The paper also notes the Centers’ intention to continue to work closely with global collaborators to “promote international cooperation on standards, guidelines, and best practices to encourage consistency and convergence in the use and evaluation of AI across the medical product landscape.”  FDA has collaborated with Health Canada and the UK’s MHRA in the past to develop guiding principles for Good Machine Learning Practices and for PCCPs for machine learning-enabled medical devices.  Also, recently, FDA took a step toward international harmonization by issuing a proposed rule to amend the Quality System Regulation to incorporate by reference the international standard ISO 13485.  These actions indicate that regulators are working towards a united front through close alignment on best practices and standards.

Looking Ahead

We expect to see many more policies, frameworks, guidance documents, and initiatives centered around AI in the coming months.  It remains to be seen, however, how FDA’s approach to AI will intersect with broader efforts to regulate AI.  For example, emerging proposals to regulate AI could potentially apply to AI that also is regulated by FDA, but few address the overlap with FDA’s existing medical product authorities.  For instance, some proposals focus on types of AI technologies (e.g., requirements to label all content generated by generative AI regardless of the intended use), whereas other approaches take a sector-specific approach and recognize that FDA’s existing regulatory frameworks already govern certain uses of AI (e.g., Senator Cassidy’s white paper on the deployment of AI in healthcare settings, which disfavored a one-size-fits-all approach to AI regulation and instead called for the leveraging of existing frameworks). 

But even sector-specific approaches may result in regulatory requirements that overlap with FDA requirements for FDA-regulated AI.  For example, in January 2024, HHS’s Office of the National Coordinator for Health Information Technology (ONC) published a final rule revising the certification requirements for health IT developers, which included requirements for AI-based “predictive decision support interventions” enabled by or interfacing with health IT.  Many predictive decision support interventions under the ONC final rule may also be FDA-regulated medical devices.  While ONC stated that it collaborated with FDA to maximize alignment, ultimately, developers of medical device software that also is a predictive decision support intervention will need to assess compliance with both FDA’s and ONC’s requirements.

In short, it will be critical to monitor developments and craft engagement strategies as policy-makers continue to collaborate and draw new lines around AI regulation.