On April 2, 2019, FDA released a discussion paper entitled “Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)” (the “AI Framework”). The AI Framework is the Agency’s first policy document describing a potential regulatory approach for medical devices that use artificial intelligence (“AI”) and machine learning (“ML”). The AI Framework does not establish new requirements or an official policy, but rather was released by FDA to seek early input prior to the development of a draft guidance. FDA acknowledges that the approach “may require additional statutory authority to implement fully.”

In an accompanying press release, former FDA Commissioner Scott Gottlieb outlined the need for a “more tailored” regulatory paradigm for algorithms that learn and adapt in the real world. FDA’s medical device regulation scheme was not designed for dynamic machine learning algorithms, as the Agency traditionally encounters products that are static at the time of FDA review. The AI Framework is FDA’s attempt to develop “an appropriate framework that allows the software to evolve in ways to improve its performance while ensuring that changes meet [FDA’s] gold standard for safety and effectiveness throughout the product’s lifecycle.”

To date, FDA has cleared or approved several AI/ML-based devices, but these have involved “locked” algorithms prior to marketing. A “locked” algorithm provides the same result each time the same input is applied and does not learn or evolve with use. Because machine learning algorithms can continuously learn and adapt over time, the AI Framework aims to establish an approach suitable for modifications to software containing both locked and continuously adapting algorithms.

In issuing the AI Framework, FDA joins a number of other regulators and independent groups worldwide that have issued guidance related to the development of AI-based products. To provide just a few examples, the High-Level Expert Group organized by the European Commission recently published its final “Ethics Guidelines for Trustworthy AI.” China is moving forward with its AI development strategy. Singapore has published a draft framework for public consultation. The UK government is setting up a new Centre for Data Ethics and Innovation to advise on measures to ensure safe and ethical innovation in AI. The French CNIL has published guidance on AI and ethics, and legislation has been introduced in the U.S. on ethical development in AI. As with these other initiatives, FDA’s AI Framework is an initial step in setting standards and providing guidance for developing AI technology.

Overview of the FDA’s AI Framework

FDA’s AI Framework characterizes AI and ML as “techniques used to design and train software algorithms to learn from and act on data.” Because AI/ML devices are intended to be dynamic and evolve over time, a key issue for FDA is how FDA’s traditional framework for assessing changes or modifications to the device would apply to an AI/ML device. The AI Framework acknowledges the challenges with regulating a continuously learning device and accepts that some AI/ML-based SaMD modifications may not require FDA review. The proposed framework characterizes modifications into three broad categories:

  • Type i – Modifications related to performance, with no change to the intended use or input type.
  • Type ii – Modifications related to inputs, with no change in intended use.
  • Type iii – Modifications to the SaMD’s intended use.

The AI Framework borrows from and builds on the Pre-Cert Program that FDA unveiled in 2017. As with the Pre-Cert Program, FDA proposes a total product lifecycle regulatory approach for AI/ML-based SaMD, with a developer-based component. The proposed approach has four components:

  • Culture of Quality and Organizational Excellence. Manufacturers of AI/ML-based SaMD should have an established quality system with good machine learning practices (“GMLP”).
  • Initial Premarket Assurance of Safety and Effectiveness. The AI Framework anticipates that manufacturers would submit a plan for modifications as part of initial premarket review for an AI/ML-based SaMD.
    • A manufacturer would submit a “predetermined change control plan” for FDA’s initial premarket review. The predetermined change control plan would include information about the types of anticipated modifications to the software (“SaMD Pre-Specifications” or “SPS”) and the associated methodology (the “Algorithm Change Protocol” or “ACP”) to implement those changes.
    • ACP components may include specific methods of how the algorithm will learn and change throughout the lifecycle of a device. An ACP should contain the following, among other components: (1) a data management plan; (2) protocols for re-training and optimizing the algorithm; (3) performance evaluation protocols; and (4) update procedures that describe how updated devices will be tested, distributed, and communicated.
    • FDA would review an SPS and ACP submitted through premarket review, and manufacturers may be able to rely on the predetermined change control plan to support future device modifications.
  • Modifications after Initial Review. The AI Framework proposes an approach to manage risks from AI/ML modifications. FDA expects manufacturers to evaluate modifications based on risks to patients. If a modification falls outside the agreed-upon SPS and ACP but does not lead to a new intended use, FDA may conduct a “focused review” of the proposed SPS and ACP. If a modification goes beyond the intended use for which the SaMD was previously authorized, the manufacturer may need to submit a new premarket submission.
  • Real World Performance Monitoring. FDA expects manufacturers to monitor the real-world performance of AI/ML-based SaMD and to periodically report the updates implemented under an approved SPS and ACP, along with performance metrics for the SaMD. FDA also expects manufacturers to be transparent in notifying users of updates.

FDA provides hypothetical examples of AI/ML-based SaMD modifications in an appendix to the AI Framework. The examples categorize each change as a “type i,” “type ii,” or “type iii” modification and explain what regulatory steps could be required for each change (e.g., updated labeling, communication updates, FDA focused review, premarket submission). For instance, FDA discusses a skin lesion mobile medical app that uses images taken by a smartphone to provide information to a dermatologist on the physical characteristics of a skin lesion. In one hypothetical, the manufacturer would collect real-world data from use of the app, and the app would use the data to improve its assessment of skin lesions. Under the AI Framework, the manufacturer would be able to market the modified algorithm without additional FDA review, but the manufacturer would need to update the label and communicate the app’s improved performance to users. By contrast, premarket review could be required if the manufacturer distributed a patient-facing version of the app that would direct patients to follow up with a dermatologist based on a preliminary malignancy reading.

The AI Framework’s Unresolved Considerations

The AI Framework leaves several issues unresolved. These considerations include:

  • FDA will need to determine how the AI Framework will align with FDA’s existing classification and authorization scheme. The risk-classification scheme in the AI Framework relies on the International Medical Device Regulators Forum (“IMDRF”) risk categorization framework, which differs from the FDA classification system of Class I, II, and III. With respect to premarket review, it is unclear how information suggested in the AI Framework would be included in a 510(k), De Novo submission, or a premarket approval (“PMA”) application. In January 2019, FDA indicated that it would use the De Novo classification process to implement the Pre-Cert Program. FDA has not indicated whether it might use a similar approach to implement the AI Framework.
  • The AI Framework proposes an approach to regulate AI/ML-based SaMD, but FDA has yet to address explicitly whether AI/ML-based health software could be excluded from the definition of a device under the health software provisions enacted as part of the 21st Century Cures Act (the “Cures Act”). The AI Framework does not address whether AI/ML-based functions can fall under the Cures Act carve-outs because “non-device software functions” are outside the scope of the AI Framework.
  • The proposed framework leaves unanswered questions for pharmaceutical companies offering AI/ML-based software solutions that they market or that are marketed on their behalf. In fact, the AI Framework does not mention AI/ML-based software associated with a drug or biologic, and FDA’s Center for Drug Evaluation and Research (“CDER”) and Center for Biologics Evaluation and Research (“CBER”) did not join the discussion draft. In December 2018, CDER and CBER published a framework for prescription drug-use-related software, which takes a different approach to digital health as compared to the policies of the Center for Devices and Radiological Health (“CDRH”). Differences between the centers’ approaches for AI/ML-based software could lead to similar software being subject to different regulatory requirements depending on which Center has the lead role in regulating the product.
  • The AI Framework does not address the use of AI/ML-based software for regulatory purposes, such as in research and development, in post-market safety surveillance, or in the analysis of real-world data to support regulatory decision-making.

Opportunities to Provide Feedback

The AI Framework is still in the early stages of development, giving stakeholders an opportunity to provide feedback to FDA and help shape its approach to AI/ML-based devices. The AI Framework poses section-specific questions throughout the discussion paper, along with eighteen general questions for stakeholder input. FDA has asked stakeholders to submit comments through an electronic docket by June 3, 2019.

Scott Danzis

Scott Danzis is a partner in Covington’s Food, Drug, and Device Practice Group and chairs the Firm’s Medical Device Industry Group. Scott is a leading expert on the regulation of medical devices, diagnostics, and digital health. He regularly helps clients navigate their most complex regulatory challenges, including strategies for premarket review, postmarket compliance, and enforcement actions. Scott counsels many of the world’s preeminent medical device companies on a range of matters, including advertising and promotion, recalls, quality system issues, medical device reporting, clinical and non-clinical testing, FDA inspections, and other regulatory matters.

From 2006 to 2008, Scott served as the Special Assistant to the Chief Counsel of the U.S. Food and Drug Administration. While at FDA, he was broadly involved in a wide range of legal and regulatory matters related to medical devices and drugs. He also worked on implementing key provisions of the Food and Drug Administration Amendments Act of 2007.

Scott has significant experience in the following areas:

  • FDA regulatory strategies, including strategies for the premarket review (510(k)s, PMAs) of medical devices;
  • Appeals and dispute resolution within FDA;
  • IDEs, INDs, and clinical trial regulation;
  • Advertising, promotion, and scientific exchange, including responding to enforcement actions and investigations;
  • Imports and exports of FDA regulated products;
  • QSR and cGMP requirements, including responding to FDA 483s and enforcement actions;
  • Product recalls;
  • Adverse event and MDR reporting;
  • FDA consent decrees and OIG corporate integrity agreements;
  • Regulatory due diligence;
  • Compliance with antifraud statutes, including the anti-kickback statute and the False Claims Act.

Scott recently developed and edited a book on the regulation of in vitro diagnostic products and laboratory testing, In Vitro Diagnostics: The Complete Regulatory Guide (FDLI, 2010). He currently serves as an Adjunct Professor at the Georgetown University Law Center where he teaches a course on the regulation of drugs, biologics, and medical devices.

Scott clerked for the Honorable Chester J. Straub on the U.S. Court of Appeals for the Second Circuit. He is a graduate of the University of Virginia School of Law, where he was the Editor-in-Chief of the Virginia Law Review and elected to the Order of the Coif. He holds a Master’s Degree in Health Care Management and Policy from George Washington University and a Bachelor of Science from Cornell University.

Wade Ackerman

Wade Ackerman advises companies and trade associations on complex and novel FDA regulatory issues that require coordinated legal, regulatory, and public policy strategies.

Through more than 19 years of experience in private practice and positions within the FDA and on Capitol Hill, Wade has acquired unique insights into the evolving legal and regulatory landscape facing companies marketing FDA-regulated products. He co-leads Covington’s multidisciplinary Digital Health Initiative, which brings together the firm’s considerable global resources to advise life sciences and health technology clients harnessing the power of information technology and data to create new and cutting-edge innovations to improve health and achieve better outcomes for patients.

Until June 2016, Wade served as Senior FDA Counsel to the U.S. Senate Health Education, Labor & Pensions (HELP) Committee Ranking Member Patty Murray (D-WA) and, prior to that, Chairman Tom Harkin (D-IA). While at the HELP Committee, Wade was involved in all major FDA legislative initiatives, oversight hearings, and other Senate HELP Committee activities concerning the FDA and the Federal Food, Drug, and Cosmetic Act. From January 2015 through June 2016, he helped negotiate many of the FDA-related provisions in the 21st Century Cures Act, which included reforms to FDA’s review and approval of new drugs, devices, combination products, and digital health software. He also worked closely with the FDA and other stakeholders as Congress examined legislative reforms in other key areas, including diagnostics and laboratory developed tests, cosmetics, and over-the-counter drugs.

Before taking his Senate role, Wade served for more than five years as Associate Chief Counsel within the FDA’s Office of Chief Counsel. He was responsible for providing legal advice to the FDA’s Center for Drug Evaluation and Research (CDER) and the Office of Commissioner (OC) on a wide range of issues. While at FDA, he also helped to develop and implement the Food and Drug Administration Safety and Innovation Act (FDASIA) of 2012 and the Drug Quality and Security Act (DQSA) of 2013—both significant reforms to FDA’s regulatory authorities.

Lindsey Tonsager

Lindsey Tonsager co-chairs the firm’s global Data Privacy and Cybersecurity practice. She advises clients in their strategic and proactive engagement with the Federal Trade Commission, the U.S. Congress, the California Privacy Protection Agency, and state attorneys general on proposed changes to data protection laws, and regularly represents clients in responding to investigations and enforcement actions involving their privacy and information security practices.

Lindsey’s practice focuses on helping clients launch new products and services that implicate the laws governing the use of artificial intelligence, data processing for connected devices, biometrics, online advertising, endorsements and testimonials in advertising and social media, the collection of personal information from children and students online, e-mail marketing, disclosures of video viewing information, and new technologies.

Lindsey also assesses privacy and data security risks in complex corporate transactions where personal data is a critical asset or data processing risks are otherwise material. In light of a dynamic regulatory environment where new state, federal, and international data protection laws are always on the horizon and enforcement priorities are shifting, she focuses on designing risk-based, global privacy programs for clients that can keep pace with evolving legal requirements and efficiently leverage the clients’ existing privacy policies and practices. She conducts data protection assessments to benchmark against legal requirements and industry trends and proposes practical risk mitigation measures.