FDA has long recognized the significant potential of artificial intelligence- and machine learning- (AI/ML-) based software as a medical device (SaMD) to transform health care as well as the unique challenges presented by AI/ML-based software under the Agency’s traditional medical device regulatory framework.  On January 12, 2021, FDA issued the Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan describing the Agency’s multi-pronged approach to facilitating innovation in AI/ML-based SaMD while advancing the Agency’s regulatory oversight.  The Action Plan builds on and responds to stakeholder feedback on the Agency’s April 2019 proposed regulatory framework for FDA review of postmarket modifications to AI/ML-based devices that continually evolve based on real-world learning and adaptation.  Some key action items outlined in the Action Plan and associated takeaways are:

  1. FDA will update and issue guidance for a proposed framework for postmarket changes to AI/ML-based software.  The Agency’s April 2019 proposed framework described an approach that relied on a pre-determined change control plan that included SaMD Pre-Specifications (SPS) (i.e., what aspects the manufacturer intends to change through learning) and the Algorithm Change Protocol (ACP) (i.e., how the algorithm will learn and change while remaining safe and effective on the market).  While industry generally supported the framework, many details related to its implementation remained open.  In the Action Plan, FDA states its intent to issue draft guidance in 2021 on what should be included in an SPS and ACP to support the safety and effectiveness of AI/ML-based SaMD.  This draft guidance will be an important step both in the practical implementation of the framework proposed in April 2019 and in facilitating FDA review and marketing of AI/ML-based SaMD that continually evolves on the market after FDA clearance or approval.
  2. Harmonize good machine learning practices (GMLP).  The April 2019 draft framework also relied on the concept of GMLP, i.e., well-established best practices and standards for AI/ML-based software, including practices related to data management, feature extraction, training, interpretability, evaluation, and documentation.  Adherence to GMLP is therefore important not only for the development of quality AI/ML-based software but also for facilitating FDA regulatory oversight.  While numerous groups are engaged in efforts to develop and define GMLP, including several with FDA participation, FDA outlined the need for a well-established GMLP standard, which in turn would give software developers clarity on how to implement GMLP in a way that will satisfy FDA.  In the Action Plan, FDA recognizes that it needs to take a more active role in developing standardized GMLP and commits to deepening its work with the communities engaged in GMLP development to encourage consensus outcomes.
  3. Support transparency and trust through patient-centered approaches.  AI/ML-based software raises unique considerations regarding transparency, accountability, equity and trust, which FDA describes as necessitating a proactive patient-centered approach to device development.  In particular, stakeholders have raised challenges regarding transparency and how AI/ML-based devices are described to users in device labeling.  As part of the Action Plan, FDA proposes to host a public workshop to elicit input on patient-centered approaches in device labeling to support transparency and inform the Agency’s thinking on the types of information that should be included in the labeling for AI/ML-based SaMD.
  4. Support development of methodologies to identify and eliminate bias and promote robustness.  While bias and generalizability are challenges across medical device development and clinical trials, these challenges are heightened for AI/ML-based systems given their potential for mirroring biases present in the datasets used for development and training, as well as the opacity of the algorithms.  In recognition of these challenges and the need for SaMD to be suited for racially and ethnically diverse patients, FDA plans to support numerous regulatory science research efforts for the identification and evaluation of bias.  Concerns related to bias may receive additional attention in the Biden-Harris Administration given its overarching priority of addressing health disparities.  The Administration has shown dedication to addressing racial disparities, especially in health care, and created a COVID-19 Racial and Ethnic Disparities Task Force to confront the racial and ethnic disparities of the pandemic.  Moreover, the Chair of the Senate HELP Committee with jurisdiction over FDA, Senator Patty Murray (D-WA), released a report in September 2020 that outlined inequalities in the American health care system and provided recommendations for Congressional action to effectively address them.
  5. Enhance clarity of real-world performance monitoring processes.  The April 2019 proposed framework contemplated a total product lifecycle approach to the oversight of AI/ML-based SaMD, including collection and monitoring of real-world data to support software modifications.  While stakeholders supported a total product lifecycle approach and the role of real-world performance monitoring, many questions remained open as to how this approach should be implemented.  As part of the Action Plan, FDA will support piloting of real-world performance monitoring by working with stakeholders on a voluntary basis, in coordination with other ongoing FDA programs focused on the use of real-world data.  The goal of the pilots is to develop frameworks for gathering and utilizing real-world performance metrics, as well as thresholds and performance evaluations for those metrics.
Christina Kuhn

Christina Kuhn provides pharmaceutical and medical device companies advice on a variety of federal and state regulatory matters.

Olivia Dworkin

Olivia Dworkin minimizes regulatory and litigation risks for clients in the medical device, pharmaceutical, biotechnology, eCommerce, and digital health industries through strategic advice on complex FDA issues, helping to bring innovative products to market while ensuring regulatory compliance. With a focus on cutting-edge medical technologies and digital health products and services, Olivia regularly helps new and established companies navigate a variety of state and federal regulatory, legislative, and compliance matters throughout the total product lifecycle. She has experience counseling clients on the development, FDA regulatory classification, and commercialization of digital health tools, including clinical decision support software, mobile medical applications, general wellness products, medical device data systems, administrative support software, and products that incorporate artificial intelligence, machine learning, and other emerging technologies.

Olivia also assists clients in advocating for legislative and regulatory policies that will support innovation and the safe deployment of digital health tools, including by drafting comments on proposed legislation, frameworks, whitepapers, and guidance documents. Olivia keeps close to the evolving regulatory landscape and is a frequent contributor to Covington’s Digital Health blog. Her work also has been featured in the Journal of Robotics, Artificial Intelligence & Law, Law360, and the Michigan Journal of Law and Mobility.