On March 15, 2024, FDA’s medical product centers – CBER, CDER, and CDRH – along with the Office of Combination Products (OCP) published a paper outlining their key areas of focus for the development and use of artificial intelligence (AI) across the medical product life cycle.  The paper, entitled “Artificial Intelligence & Medical Products: How CBER, CDER, CDRH, and OCP are Working Together,” is intended by the Agency to “provide greater transparency regarding how FDA’s medical product Centers are collaborating to safeguard public health while fostering responsible and ethical innovation.”  The paper is the latest in a series of informal statements from the Agency about the use of AI in the discovery, development, manufacturing, and commercialization of medical products, as well as in medical devices that incorporate AI.  Here are five key takeaways from FDA’s recent paper.

  1. The Centers continue to emphasize a risk-based regulatory framework for AI that builds upon existing FDA initiatives.

Consistent with FDA’s longstanding approach to regulation of medical products, FDA’s paper recognizes the value of a risk-based approach for regulating AI that the Agency oversees.  The paper highlights how “AI management requires a risk-based regulatory framework built on robust principles, standards, best practices, and state-of-the-art regulatory science tools that can be applied across AI applications and be tailored to the relevant medical product” and, to the extent feasible, “can be applied across various medical products and uses within the health care delivery system.”

As part of this risk-based approach, the Centers also plan to leverage and continue building upon existing FDA initiatives for the evaluation and regulation of AI used in medical products, including FDA’s May 2023 Discussion Paper on Using Artificial Intelligence and Machine Learning in the Development of Drug and Biological Products, the Center for Drug Evaluation and Research (CDER) Framework for Regulatory Advanced Manufacturing Evaluation (FRAME) Initiative, and the Center for Devices and Radiological Health (CDRH) January 2021 AI/ML-Based Software as a Medical Device (SaMD) Action Plan.

  2. FDA plans to release several AI guidance documents this year, providing an opportunity for engagement.

The paper notes that the Centers intend to develop policies that provide regulatory predictability and clarity for the use of AI, while also supporting innovation.  Planned FDA guidance documents include:

  • Draft guidance on life cycle management considerations and premarket submission recommendations for AI-enabled device software functions.  As background, in June 2023, FDA released a final guidance entitled “Content of Premarket Submissions for Device Software Functions.”  The title of the proposed draft guidance on CDRH’s guidance agenda suggests that the Agency’s premarket submission recommendations may differ for AI-enabled device software functions, and it is likely that the new draft guidance will directly address novel premarket submission issues raised by incorporating AI into device software functions.
  • Draft guidance on considerations for the use of AI to support regulatory decision-making for drugs and biological products.  The title of this planned draft guidance is similar to FDA’s August 2023 final guidance entitled “Considerations for the Use of Real-World Data and Real-World Evidence to Support Regulatory Decision-Making for Drug and Biological Products,” which focused on RWD/E and did not discuss AI.  The planned draft guidance on CDER’s guidance agenda may provide additional insights on the use of AI in RWE studies.  FDA also has previously given attention to internal infrastructure needed to assess regulatory submissions that include data from Digital Health Technologies (DHTs).  For example, in March 2023 the Agency issued a Framework for the Use of DHTs in Drug and Biological Product Development that stated FDA plans to “enhance its IT capabilities to support the review of DHT-generated data,” including by establishing “a secure cloud technology to enhance its infrastructure and analytics environment that will enable FDA to effectively receive, aggregate, store, and process large volumes of data.”  The new proposed draft guidance could build upon the themes outlined in this framework, with a specific focus on AI.
  • Final guidance on marketing submission recommendations for predetermined change control plans for AI-enabled medical device software functions.  FDA plans to finalize the Agency’s April 2023 draft guidance on predetermined change control plans (PCCPs).  PCCPs describe planned changes to a device that otherwise would require premarket review by the Agency, facilitating iterative improvements to an AI- or machine learning-enabled device while continuing to provide a reasonable assurance of device safety and effectiveness.  The final guidance likely will incorporate or address feedback the Agency has received on the draft guidance and may also address real-world challenges the Agency has faced or “lessons learned” from reviewing submitted PCCPs to date.

The publication of these guidance documents will open the door for public comments and additional engagement opportunities, and life sciences and medical device companies should consider submitting comments. 

  3. Mitigating bias continues to be a front-burner issue.

Mitigating bias and discrimination continues to be top of mind at FDA.  The paper highlights several demonstration projects and initiatives the Centers plan to support in an effort to identify and reduce the risk of bias in AI tools, including:

  • Regulatory science efforts to develop methodology for evaluating AI algorithms, identifying and mitigating bias, and ensuring the robustness and resilience of AI algorithms to withstand changing clinical inputs and conditions.
  • Demonstration projects that (1) highlight different points where bias can be introduced in the AI development life cycle and how it can be addressed, including through risk management; and (2) consider health inequities associated with the use of AI in medical product development to promote equity and ensure data representativeness, leveraging ongoing diversity, equity, and inclusion efforts.
  • Best practices for documenting and ensuring that data used to train and test AI models are fit for use, including adequately representing the target population.
  • Considerations for evaluating the safe, responsible, and ethical use of AI in the medical product life cycle.

These actions align with the Agency’s overarching efforts to develop methodologies for identifying and eliminating bias, as well as with President Biden’s October 2023 AI Executive Order, which called for federal guidance and resources on the incorporation of equity principles in AI-enabled technologies used in the health sector, the use of disaggregated data on affected populations and representative population data sets when developing new models, and the monitoring of algorithmic performance against discrimination and bias.

  4. The paper focuses on the total product life cycle.

The Centers intend to support various projects and initiatives centered on performance monitoring and reliability throughout the total product life cycle.  Specifically, the Centers intend to support:

  • Demonstration projects that support the ongoing monitoring of AI tools to ensure adherence to standards and that the tools maintain performance and reliability throughout their life cycle. 
  • A framework and strategy for quality assurance of AI-enabled tools or systems used in the medical product life cycle, which emphasize continued monitoring and mitigation of risks. 
  • Best practices for long-term safety and real-world performance monitoring of AI-enabled medical products.
  • Educational initiatives for regulatory bodies, health care professionals, patients, researchers, and industry as they navigate the safe and responsible use of AI in medical product development and in medical products.

Real-world performance monitoring and quality assurance throughout the total product life cycle have been hot topics for some time.  For example, President Biden’s AI Executive Order directed the formation of an AI Task Force to, in part, identify guidance and resources on long-term and real-world performance monitoring of AI technologies in the health sector, including “clinically relevant or significant modifications and performance across population groups, with a means to communicate product updates to regulators, developers, and users.”  Stakeholders have previously asked FDA for clarity on best practices for real-world performance monitoring of AI/ML-based software, and FDA’s 2021 AI Action Plan stated that the Agency would support the piloting of real-world performance monitoring by working with stakeholders on a voluntary basis and by developing frameworks for gathering and using real-world performance metrics, as well as thresholds and performance evaluations for those metrics.  Additionally, FDA’s May 2023 AI Discussion Paper emphasized the importance of evaluating AI/ML models over time to account for model risk and credibility, and solicited feedback on examples of best practices stakeholders are using to monitor AI/ML models.  FDA’s collaborations with stakeholders on these efforts in recent years could inform future guidance.

  5. The paper emphasizes the importance of collaboration and international harmonization.

The paper highlights the importance of the Centers’ ongoing engagement with a variety of stakeholders, including developers, patient groups, academia, and global regulators, in cultivating a patient-centered regulatory approach that emphasizes collaboration and health equity.  The paper notes the Centers’ intent to continue fostering these partnerships, including by continuing to solicit input from interested parties on “critical aspects” of the use of AI in medical products, such as transparency, explainability, governance, bias, cybersecurity, and quality assurance.

Perhaps in an effort to facilitate collaboration with various stakeholders, the Director of FDA’s Digital Health Center of Excellence, Troy Tazbaz, recently joined the Board of Directors of the Coalition for Health AI (CHAI).  He joins Micky Tripathi, National Coordinator for Health Information Technology within the Department of Health and Human Services (HHS), and several other representatives from academia, industry, and medical centers.  Tazbaz and Tripathi also will serve on CHAI’s “Government Advisory Board,” along with Melanie Fontes Rainer, Director of the Office for Civil Rights within HHS, and several other representatives from the White House Office of Science and Technology Policy, the Centers for Disease Control and Prevention, the Centers for Medicare & Medicaid Services, the Veterans Health Administration, and the Advanced Research Projects Agency for Health.

The paper also notes the Centers’ intention to continue working closely with global collaborators to “promote international cooperation on standards, guidelines, and best practices to encourage consistency and convergence in the use and evaluation of AI across the medical product landscape.”  FDA previously collaborated with Health Canada and the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) to develop guiding principles for Good Machine Learning Practice and for PCCPs for machine learning-enabled medical devices.  FDA also recently took a step toward international harmonization by issuing a final rule amending the Quality System Regulation to incorporate by reference the international standard ISO 13485.  These actions indicate that regulators are working toward a united front through close alignment on best practices and standards.

Looking Ahead

We expect to see many more policies, frameworks, guidance documents, and initiatives centered on AI in the coming months.  It remains to be seen, however, how FDA’s approach to AI will intersect with broader efforts to regulate AI.  Emerging proposals to regulate AI could apply to AI that also is regulated by FDA, but few address the overlap with FDA’s existing medical product authorities.  For instance, some proposals focus on types of AI technologies (e.g., requirements to label all content generated by generative AI, regardless of the intended use), whereas others take a sector-specific approach and recognize that FDA’s existing regulatory frameworks already govern certain uses of AI (e.g., Senator Cassidy’s white paper on the deployment of AI in health care settings, which disfavored a one-size-fits-all approach to AI regulation and instead called for leveraging existing frameworks).

But even sector-specific approaches may result in regulatory requirements that overlap with FDA requirements for FDA-regulated AI.  For example, in January 2024, HHS’s Office of the National Coordinator for Health Information Technology (ONC) published a final rule revising the certification requirements for health IT developers, which included requirements for AI-based “predictive decision support interventions” enabled by or interfacing with health IT.  Many predictive decision support interventions under the ONC final rule may also be FDA-regulated medical devices.  While ONC stated that it collaborated with FDA to maximize alignment, developers of medical device software that also qualifies as a predictive decision support intervention ultimately will need to assess compliance with both FDA’s and ONC’s requirements.

In short, it will be critical to monitor developments and craft engagement strategies as policy-makers continue to collaborate and draw new lines around AI regulation.

Olivia Dworkin

Olivia Dworkin minimizes regulatory and litigation risks for clients in the medical device, pharmaceutical, biotechnology, eCommerce, and digital health industries through strategic advice on complex FDA issues, helping to bring innovative products to market while ensuring regulatory compliance.

With a focus on cutting-edge medical technologies and digital health products and services, Olivia regularly helps new and established companies navigate a variety of state and federal regulatory, legislative, and compliance matters throughout the total product lifecycle. She has experience counseling clients on the development, FDA regulatory classification, and commercialization of digital health tools, including clinical decision support software, mobile medical applications, general wellness products, medical device data systems, administrative support software, and products that incorporate artificial intelligence, machine learning, and other emerging technologies.

Olivia also assists clients in advocating for legislative and regulatory policies that will support innovation and the safe deployment of digital health tools, including by drafting comments on proposed legislation, frameworks, whitepapers, and guidance documents. Olivia keeps close to the evolving regulatory landscape and is a frequent contributor to Covington’s Digital Health blog. Her work also has been featured in the Journal of Robotics, Artificial Intelligence & Law, Law360, and the Michigan Journal of Law and Mobility.

Prior to joining Covington, Olivia was a fellow at the University of Michigan Veterans Legal Clinic, where she gained valuable experience as the lead attorney successfully representing clients at case evaluations, mediations, and motion hearings. At Michigan Law, Olivia served as Online Editor of the Michigan Journal of Gender and Law, president of the Trial Advocacy Society, and president of the Michigan Law Mock Trial Team. She excelled in national mock trial competitions, earning two Medals for Excellence in Advocacy from the American College of Trial Lawyers and being selected as one of the top sixteen advocates in the country for an elite, invitation-only mock trial tournament.

Christina Kuhn

Christina Kuhn advises medical device, pharmaceutical, and biotech companies on a broad range of FDA regulatory strategy and compliance matters. She has experience with cutting-edge and complex medical technologies, including software and digital health products, oncology products, next-generation sequencing, diagnostics, and combination products.

Christina frequently helps multinational device manufacturers as well as start-up device companies navigate the premarket regulatory process, advising companies on regulatory classification, clinical development strategy, and agency interactions. She also has significant experience counseling medical device companies on postmarket compliance requirements, including those related to advertising and promotion, quality systems and manufacturing, medical device reporting, registration and listing, and recalls. She advises clients on responding to and resolving enforcement actions, such as FDA inspections and Warning Letters as well as Department of Justice investigations.

Christina advises clients on, and performs regulatory due diligence for, corporate transactions, including acquisitions, public offerings, co-development agreements, and clinical trial agreements.

Christina also regularly assists industry associations and medical device and pharmaceutical companies in commenting on FDA guidance documents and rulemaking as well as drafting and analyzing federal legislation.

Christina is a frequent contributor to Covington’s Digital Health and InsideMedicalDevices blogs.

Wade Ackerman

Wade Ackerman advises companies and trade associations on complex and novel FDA regulatory issues that require coordinated legal, regulatory, and public policy strategies.

Through more than 19 years of experience in private practice and positions within the FDA and on Capitol Hill, Wade has acquired unique insights into the evolving legal and regulatory landscape facing companies marketing FDA-regulated products. He co-leads Covington’s multidisciplinary Digital Health Initiative, which brings together the firm’s considerable global resources to advise life sciences and health technology clients harnessing the power of information technology and data to create new and cutting-edge innovations to improve health and achieve better outcomes for patients.

Until June 2016, Wade served as Senior FDA Counsel to the U.S. Senate Health, Education, Labor & Pensions (HELP) Committee Ranking Member Patty Murray (D-WA) and, prior to that, Chairman Tom Harkin (D-IA). While at the HELP Committee, Wade was involved in all major FDA legislative initiatives, oversight hearings, and other Senate HELP Committee activities concerning the FDA and the Federal Food, Drug, and Cosmetic Act. From January 2015 through June 2016, he helped negotiate many of the FDA-related provisions in the 21st Century Cures Act, which included reforms to FDA’s review and approval of new drugs, devices, combination products, and digital health software. He also worked closely with the FDA and other stakeholders as Congress examined legislative reforms in other key areas, including diagnostics and laboratory developed tests, cosmetics, and over-the-counter drugs.

Before taking his Senate role, Wade served for more than five years as Associate Chief Counsel within the FDA’s Office of Chief Counsel. He was responsible for providing legal advice to the FDA’s Center for Drug Evaluation and Research (CDER) and the Office of the Commissioner (OC) on a wide range of issues. While at FDA, he also helped to develop and implement the Food and Drug Administration Safety and Innovation Act (FDASIA) of 2012 and the Drug Quality and Security Act (DQSA) of 2013, both significant reforms to FDA’s regulatory authorities.

Scott Danzis

Scott Danzis is a partner in Covington’s Food, Drug, and Device Practice Group and chairs the Firm’s Medical Device Industry Group. Scott is a leading expert on the regulation of medical devices, diagnostics, and digital health. He regularly helps clients navigate their most complex regulatory challenges, including strategies for premarket review, postmarket compliance, and enforcement actions. Scott counsels many of the world’s preeminent medical device companies on a range of matters, including advertising and promotion, recalls, quality system issues, medical device reporting, clinical and non-clinical testing, FDA inspections, and other regulatory matters.

Scott previously served as Special Assistant to the Chief Counsel in FDA’s Office of the Chief Counsel from 2006 to 2008. At FDA, he was involved in a wide range of legal and regulatory matters related to medical devices and drugs, including significant rulemaking, enforcement actions, and legislative initiatives, and he worked on implementing key provisions of the Food and Drug Administration Amendments Act of 2007.

Scott speaks regularly at conferences regarding FDA regulation of devices and diagnostics, and since 2010 he has served as an Adjunct Professor of Law at the Georgetown University Law Center, where he teaches a course on FDA law.

Scott has significant experience in the following areas:

  • FDA regulatory strategies, including strategies for the premarket review (510(k)s, PMAs) of medical devices;
  • Appeals and dispute resolution within FDA;
  • IDEs, INDs, and clinical trial regulation;
  • Advertising, promotion, and scientific exchange, including responding to enforcement actions and investigations;
  • Imports and exports of FDA-regulated products;
  • QSR and cGMP requirements, including responding to FDA 483s and enforcement actions;
  • Product recalls;
  • Adverse event and MDR reporting;
  • FDA consent decrees and OIG corporate integrity agreements;
  • Regulatory due diligence;
  • Compliance with antifraud statutes, including the anti-kickback statute and the False Claims Act.

Scott developed and edited a book on the regulation of in vitro diagnostic products and laboratory testing, In Vitro Diagnostics: The Complete Regulatory Guide (FDLI, 2010).

Scott clerked for the Honorable Chester J. Straub on the U.S. Court of Appeals for the Second Circuit. He is a graduate of the University of Virginia School of Law, where he was Editor-in-Chief of the Virginia Law Review and elected to the Order of the Coif. He holds a Master’s Degree in Health Care Management and Policy from George Washington University and a Bachelor of Science from Cornell University.