On April 2, 2019, FDA released a discussion paper entitled “Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)” (the “AI Framework”). The AI Framework is the Agency’s first policy document describing a potential regulatory approach for medical devices that use artificial intelligence (“AI”) and machine learning (“ML”). The AI Framework does not establish new requirements or an official policy, but rather was released by FDA to seek early input prior to the development of a draft guidance. FDA acknowledges that the approach “may require additional statutory authority to implement fully.”

In an accompanying press release, former FDA Commissioner Scott Gottlieb outlined the need for a “more tailored” regulatory paradigm for algorithms that learn and adapt in the real world. FDA’s medical device regulation scheme was not designed for dynamic machine learning algorithms, as the Agency traditionally encounters products that are static at the time of FDA review. The AI Framework is FDA’s attempt to develop “an appropriate framework that allows the software to evolve in ways to improve its performance while ensuring that changes meet [FDA’s] gold standard for safety and effectiveness throughout the product’s lifecycle.”

To date, FDA has cleared or approved several AI/ML-based devices, but these have involved algorithms that were “locked” prior to marketing. A “locked” algorithm provides the same result each time the same input is applied and does not learn or evolve with use. Because machine learning algorithms can continuously learn and adapt over time, the AI Framework seeks to develop an approach suitable for modifications to software containing both locked and dynamic algorithms.
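To make the locked/adaptive distinction concrete, the sketch below contrasts a model that is frozen after training with one that continues to learn from field data. It is a minimal, hypothetical illustration; the scikit-learn model, synthetic data, and update step are assumptions chosen for demonstration and are not drawn from FDA’s discussion paper.

```python
# A minimal sketch contrasting a "locked" algorithm with a continuously
# learning one. The scikit-learn model and synthetic data are hypothetical
# illustrations, not anything specified in FDA's discussion paper.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 4))
y_train = (X_train[:, 0] > 0).astype(int)

# Locked algorithm: trained once, then frozen. The same input always yields
# the same output, which is the premise of FDA's prior clearances.
locked = SGDClassifier(random_state=0).fit(X_train, y_train)

# Adaptive algorithm: keeps learning from data gathered in the field, so its
# output for a given input can change from one software version to the next.
adaptive = SGDClassifier(random_state=0).fit(X_train, y_train)
X_field = rng.normal(size=(50, 4))          # new real-world inputs
y_field = (X_field[:, 0] > 0).astype(int)   # e.g., clinician-confirmed labels
adaptive.partial_fit(X_field, y_field)      # incremental update modifies the model

x = rng.normal(size=(1, 4))
print(locked.predict(x), adaptive.predict(x))  # the adaptive model may now differ
```

The locked model returns the same output for the same input indefinitely, while the adaptive model’s output can drift as it retrains on field data — the behavior FDA’s traditional change-control paradigm was not designed to handle.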

In issuing the AI Framework, FDA joins a number of other regulators and independent groups worldwide that have issued guidance related to the development of AI-based products. To provide just a few examples, the High-Level Expert Group organized by the European Commission recently published its final “Ethics Guidelines for Trustworthy AI.” China is moving forward with its AI development strategy. Singapore has published a draft framework for public consultation. The UK government is setting up a new Centre for Data Ethics and Innovation to advise it on measures to ensure safe and ethical innovation in AI. The French CNIL has published guidance on AI and ethics, and legislation on ethical AI development has been introduced in the U.S. As with these other initiatives, FDA’s AI Framework is an initial step in setting standards and providing guidance for developing AI technology.

Overview of the FDA’s AI Framework

FDA’s AI Framework characterizes AI and ML as “techniques used to design and train software algorithms to learn from and act on data.” Because AI/ML devices are intended to be dynamic and evolve over time, a key issue for FDA is how its traditional framework for assessing changes or modifications to a device would apply to an AI/ML device. The AI Framework acknowledges the challenges of regulating a continuously learning device and accepts that some AI/ML-based SaMD modifications may not require FDA review. The proposed framework groups modifications into three broad categories:

  • Type i – Modifications related to performance, with no change to the intended use and no new input type.
  • Type ii – Modifications related to inputs, with no change in intended use.
  • Type iii – Modifications to the SaMD’s intended use.

The AI Framework borrows from and builds on the Pre-Cert Program that FDA unveiled in 2017. As with the Pre-Cert Program, FDA proposes to apply a total product lifecycle regulatory approach for regulating AI/ML-based SaMD, with a developer-based component.  FDA proposes a four-step approach:

  • Culture of Quality and Organizational Excellence. Manufacturers of AI/ML-based SaMD should have an established quality system with good machine learning practices (“GMLP”).
  • Initial Premarket Assurance of Safety and Effectiveness. The AI Framework anticipates that manufacturers would submit a plan for modifications as part of initial premarket review for an AI/ML-based SaMD.
    • A manufacturer would submit a “predetermined change control plan” for FDA’s initial premarket review. The predetermined change control plan would include information about the types of anticipated modifications to the software (“SaMD Pre-Specifications” or “SPS”) and the associated methodology (the “Algorithm Change Protocol” or “ACP”) to implement those changes.
    • ACP components may include the specific methods by which the algorithm will learn and change throughout the lifecycle of a device. An ACP should contain the following, among other components: (1) a data management plan; (2) protocols for re-training and optimizing the algorithm; (3) performance evaluation protocols; and (4) update procedures that describe how updated devices will be tested, distributed, and communicated. (An illustrative sketch of how elements (2) and (3) might operate follows this list.)
    • FDA would review an SPS and ACP submitted through premarket review, and manufacturers may be able to rely on the predetermined change control plan to support future device modifications.
  • Modifications after Initial Review. The AI Framework proposes a risk-based approach to managing AI/ML modifications: FDA expects manufacturers to evaluate each modification based on the risk it poses to patients. If a modification falls outside the agreed-upon SPS and ACP but does not lead to a new intended use, FDA may conduct a “focused review” of the proposed SPS and ACP. If a modification extends beyond the intended use for which the SaMD was previously authorized, the manufacturer may need to submit a new premarket submission.
  • Real World Performance Monitoring. FDA expects manufacturers to monitor the real-world performance of AI/ML-based SaMD and to report periodically on both the updates implemented under an approved SPS and ACP and the SaMD’s performance metrics. FDA also expects manufacturers to be transparent with users by notifying them of updates.
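To illustrate how an ACP’s re-training and performance-evaluation protocols might fit together, the sketch below shows a pre-specified acceptance gate for an algorithm update. The model, metric, threshold, and function names are hypothetical assumptions for illustration only; the discussion paper does not prescribe any particular implementation.

```python
# Hypothetical sketch of two ACP elements: a re-training protocol and a
# performance-evaluation gate. The model, metric, threshold, and names are
# illustrative assumptions, not terms defined in FDA's discussion paper.
import copy
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

PRESPECIFIED_THRESHOLD = 0.90  # acceptance criterion fixed in advance in the ACP

def retrain(model, X_new, y_new):
    """Re-training protocol: incrementally update the model on new field data."""
    model.partial_fit(X_new, y_new)
    return model

def passes_gate(model, X_holdout, y_holdout):
    """Performance-evaluation protocol: approve deployment only if the updated
    model meets the pre-specified criterion on a locked holdout set."""
    return accuracy_score(y_holdout, model.predict(X_holdout)) >= PRESPECIFIED_THRESHOLD

# Synthetic stand-ins for training data, field data, and a locked holdout set.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = (X[:, 0] > 0).astype(int)
X_holdout, y_holdout = X[:100], y[:100]

deployed = SGDClassifier(random_state=0).fit(X[100:200], y[100:200])
# Retrain a copy so the deployed model is untouched if the update is rejected.
candidate = retrain(copy.deepcopy(deployed), X[200:], y[200:])

if passes_gate(candidate, X_holdout, y_holdout):
    print("Deploy the update under the approved SPS/ACP and report it to FDA.")
else:
    print("Withhold the update and investigate per the change protocol.")
```

The design point the framework appears to be driving at is that the acceptance criterion is fixed before any update occurs, so FDA can review the protocol once rather than reviewing each retrained model.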

FDA provides hypothetical examples of AI/ML-based SaMD modifications in an appendix to the AI Framework. The examples categorize each change as a “type i,” “type ii,” or “type iii” modification and explain what regulatory steps could be required for each change (e.g., updated labeling, communication of updates, FDA focused review, or a premarket submission). For instance, FDA discusses a skin lesion mobile medical app that uses images taken by a smartphone to provide a dermatologist with information on the physical characteristics of a skin lesion. In one hypothetical, the manufacturer collects real-world data from use of the app, and the app uses those data to improve its assessment of skin lesions. Under the AI Framework, the manufacturer would be able to market the modified algorithm without additional FDA review, but would need to update the labeling and communicate the app’s improved performance to users. By contrast, premarket review could be required if the manufacturer distributed a patient-facing version of the app that directs patients to follow up with a dermatologist based on a preliminary malignancy reading.

The AI Framework’s Unresolved Considerations

The AI Framework leaves several issues unresolved. These considerations include:

  • FDA will need to determine how the AI Framework will align with its existing classification and authorization scheme. The risk-classification scheme in the AI Framework relies on the International Medical Device Regulators Forum (“IMDRF”) risk categorization framework, which differs from FDA’s Class I, II, and III classification system. With respect to premarket review, it is unclear how the information suggested in the AI Framework would be included in a 510(k), a De Novo submission, or a premarket approval (“PMA”) application. In January 2019, FDA indicated that it would use the De Novo classification process to implement the Pre-Cert Program; FDA has not indicated whether it might use a similar approach to implement the AI Framework.
  • The AI Framework proposes an approach to regulate AI/ML-based SaMD, but FDA has yet to address explicitly whether AI/ML-based health software could be excluded from the definition of a device under the health software provisions enacted as part of the 21st Century Cures Act (the “Cures Act”). The AI Framework does not address whether AI/ML-based functions can fall under the Cures Act carve-outs because “non-device software functions” are outside the scope of the AI Framework.
  • The proposed framework leaves unanswered questions for pharmaceutical companies offering AI/ML-based software solutions that they market or that are marketed on their behalf. In fact, the AI Framework does not mention AI/ML-based software associated with a drug or biologic, and FDA’s Center for Drug Evaluation and Research (“CDER”) and Center for Biologics Evaluation and Research (“CBER”) did not join the discussion paper. In December 2018, CDER and CBER published a framework for prescription drug-use-related software that takes a different approach to digital health than the policies of the Center for Devices and Radiological Health (“CDRH”). Differences between the Centers’ approaches to AI/ML-based software could lead to similar software being subject to different regulatory requirements depending on which Center has the lead role in regulating the product.
  • The AI Framework does not address the use of AI/ML-based software for regulatory purposes, such as in research and development, in post-market safety surveillance, or in the analysis of real-world data.

Opportunities to Provide Feedback

The AI Framework is still in the early stages of development, giving stakeholders an opportunity to provide feedback to FDA and help shape its approach to AI/ML-based devices. The AI Framework asks section-specific questions throughout the discussion paper and poses eighteen general questions for stakeholder input. FDA has asked stakeholders to submit comments through an electronic docket by June 3, 2019.
