On March 15, 2024, FDA’s medical product centers – CBER, CDER, and CDRH – along with the Office of Combination Products (OCP) published a paper outlining their key areas of focus for the development and use of artificial intelligence (AI) across the medical product life cycle.  The paper, entitled “Artificial Intelligence & Medical Products: How CBER, CDER, CDRH, and OCP are Working Together,” is intended by the Agency to “provide greater transparency regarding how FDA’s medical product Centers are collaborating to safeguard public health while fostering responsible and ethical innovation.”  The FDA paper is the latest in a series of informal statements from the Agency about the use of AI in the discovery, development, manufacturing, and commercialization of medical products, as well as in medical devices that incorporate AI.  Here are five key takeaways from FDA’s recent paper.

  1. The Centers continue to emphasize a risk-based regulatory framework for AI that builds upon existing FDA initiatives.

Consistent with FDA’s longstanding approach to regulation of medical products, FDA’s paper recognizes the value of a risk-based approach for regulating AI that the Agency oversees.  The paper highlights how “AI management requires a risk-based regulatory framework built on robust principles, standards, best practices, and state-of-the-art regulatory science tools that can be applied across AI applications and be tailored to the relevant medical product” and, to the extent feasible, “can be applied across various medical products and uses within the health care delivery system.”

As part of this risk-based approach, the Centers also plan to leverage and continue building upon existing FDA initiatives for the evaluation and regulation of AI used in medical products, including FDA’s May 2023 Discussion Paper on Using Artificial Intelligence and Machine Learning in the Development of Drug and Biological Products, CDER’s Framework for Regulatory Advanced Manufacturing Evaluation (FRAME) Initiative, and CDRH’s January 2021 AI/ML-Based Software as a Medical Device (SaMD) Action Plan.

  2. FDA plans to release several AI guidance documents this year, providing an opportunity for engagement.

The paper notes that the Centers intend to develop policies that provide regulatory predictability and clarity for the use of AI, while also supporting innovation.  Planned FDA guidance documents include:

  • Draft guidance on life cycle management considerations and premarket submission recommendations for AI-enabled device software functions.  As background, in June 2023, FDA released a final guidance entitled “Content of Premarket Submissions for Device Software Functions.”  The title of the proposed draft guidance on CDRH’s guidance agenda suggests that the Agency’s premarket submission recommendations may differ for AI-enabled device software functions, and it is likely that the new draft guidance will directly address novel premarket submission issues raised by incorporating AI into device software functions.
  • Draft guidance on considerations for the use of AI to support regulatory decision-making for drugs and biological products.  The title of this planned draft guidance is similar to FDA’s August 2023 final guidance entitled “Considerations for the Use of Real-World Data and Real-World Evidence to Support Regulatory Decision-Making for Drug and Biological Products,” which focused on real-world data (RWD) and real-world evidence (RWE) and did not discuss AI.  The planned draft guidance on CDER’s guidance agenda may provide additional insights on the use of AI in RWE studies.  FDA also has previously given attention to the internal infrastructure needed to assess regulatory submissions that include data from Digital Health Technologies (DHTs).  For example, in March 2023 the Agency issued a Framework for the Use of DHTs in Drug and Biological Product Development that stated FDA plans to “enhance its IT capabilities to support the review of DHT-generated data,” including by establishing “a secure cloud technology to enhance its infrastructure and analytics environment that will enable FDA to effectively receive, aggregate, store, and process large volumes of data.”  The new proposed draft guidance could build upon the themes outlined in this framework, with a specific focus on AI.
  • Final guidance on marketing submission recommendations for predetermined change control plans for AI-enabled medical device software functions.  FDA plans to finalize the Agency’s April 2023 draft guidance on predetermined change control plans (PCCPs).  PCCPs describe planned changes that may be made to a device that otherwise would require premarket review by the Agency, facilitating iterative improvements through modifications to an AI- or machine learning-enabled device while continuing to provide a reasonable assurance of device safety and effectiveness.  The final guidance likely will incorporate or address any feedback the Agency has received on the draft guidance and may also address real-world challenges the Agency has faced or “lessons learned” from reviewing submitted PCCPs to date.

The publication of these guidance documents will open the door for public comments and additional engagement opportunities, and life sciences and medical device companies should consider submitting comments. 

  3. Mitigating bias continues to be a front-burner issue.

Mitigating bias and discrimination continues to be top-of-mind at FDA.  The paper highlights several demonstration projects and initiatives the Centers plan to support in an effort to identify and reduce the risk of biases in AI tools, including:

  • Regulatory science efforts to develop methodology for evaluating AI algorithms, identifying and mitigating bias, and ensuring the robustness and resilience of AI algorithms to withstand changing clinical inputs and conditions.
  • Demonstration projects that (1) highlight different points where bias can be introduced in the AI development life cycle and how it can be addressed, including through risk management; and (2) consider health inequities associated with the use of AI in medical product development to promote equity and ensure data representativeness, leveraging ongoing diversity, equity, and inclusion efforts.
  • Best practices for documenting and ensuring that data used to train and test AI models are fit for use, including adequately representing the target population.
  • Considerations for evaluating the safe, responsible, and ethical use of AI in the medical product life cycle.

These actions align with the Agency’s overarching efforts to develop methodologies for identification and elimination of bias, as well as President Biden’s October 2023 AI Executive Order that called for federal guidance and resources on the incorporation of equity principles in AI-enabled technologies used in the health sector, the use of disaggregated data on affected populations and representative population data sets when developing new models, and the monitoring of algorithmic performance against discrimination and bias.

  4. The paper focuses on the total product life cycle.

The Centers plan to support various projects and initiatives centered on performance monitoring and reliability throughout the total product life cycle.  Specifically, the Centers intend to support:

  • Demonstration projects that support the ongoing monitoring of AI tools to ensure adherence to standards and that the tools maintain performance and reliability throughout their life cycle. 
  • A framework and strategy for quality assurance of AI-enabled tools or systems used in the medical product life cycle, which emphasize continued monitoring and mitigation of risks. 
  • Best practices for long-term safety and real-world performance monitoring of AI-enabled medical products.
  • Educational initiatives for regulatory bodies, health care professionals, patients, researchers, and industry as they navigate the safe and responsible use of AI in medical product development and in medical products.

Real-world performance monitoring and quality assurance throughout the total product life cycle have been hot topics for some time.  For example, President Biden’s AI Executive Order directed the formation of an AI Task Force to, in part, identify guidance and resources on long-term and real-world performance monitoring of AI technologies in the health sector, including “clinically relevant or significant modifications and performance across population groups, with a means to communicate product updates to regulators, developers, and users.”  Stakeholders have previously asked FDA for clarity on best practices for real-world performance monitoring of AI/ML-based software, and FDA’s 2021 AI Action Plan stated that the Agency would support the piloting of real-world performance monitoring by working with stakeholders on a voluntary basis and by developing frameworks for gathering and utilizing real-world performance metrics, as well as thresholds and performance evaluations for those metrics.  Additionally, FDA’s May 2023 AI Discussion Paper emphasized the importance of evaluating AI/ML models over time to consider model risk and credibility, and solicited feedback on examples of best practices stakeholders are using to monitor AI/ML models.  FDA’s collaborations with stakeholders on these efforts over the past years could inform future guidance.

  5. The paper emphasizes the importance of collaboration and international harmonization.

The paper highlights the importance of the Centers’ current collaboration with a variety of stakeholders, including developers, patient groups, academia, and global regulators, in cultivating a patient-centered regulatory approach that emphasizes health equity.  The paper notes the Centers’ intent to continue fostering these collaborative partnerships, including by continuing to solicit input from interested parties on “critical aspects” of the use of AI in medical products, such as transparency, explainability, governance, bias, cybersecurity, and quality assurance.

Perhaps in an effort to facilitate collaboration with various stakeholders, the Director of FDA’s Digital Health Center of Excellence, Troy Tazbaz, recently joined the Board of Directors of the Coalition for Health AI (CHAI).  He joins Micky Tripathi, National Coordinator for Health Information Technology within the Department of Health and Human Services (HHS), and several other representatives from academia, industry, and medical centers.  Tazbaz and Tripathi also will serve on CHAI’s “Government Advisory Board” along with Melanie Fontes Rainer, Director of the Office for Civil Rights within HHS, and several other representatives from the White House Office of Science and Technology Policy, the Centers for Disease Control and Prevention, the Centers for Medicare & Medicaid Services, the Veterans Health Administration, and the Advanced Research Projects Agency for Health.

The paper also notes the Centers’ intention to continue to work closely with global collaborators to “promote international cooperation on standards, guidelines, and best practices to encourage consistency and convergence in the use and evaluation of AI across the medical product landscape.”  FDA previously collaborated with Health Canada and the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) to develop guiding principles for Good Machine Learning Practices and for PCCPs for machine learning-enabled medical devices.  In addition, FDA recently took a step toward international harmonization by issuing a final rule amending the Quality System Regulation to incorporate by reference the international standard ISO 13485.  These actions indicate that regulators are working toward a united front through close alignment on best practices and standards.

Looking Ahead

We expect to see many more policies, frameworks, guidance documents, and initiatives centered on AI in the coming months.  It remains to be seen, however, how FDA’s approach to AI will intersect with broader efforts to regulate AI.  Emerging proposals to regulate AI could apply to AI that also is regulated by FDA, but few address the overlap with FDA’s existing medical product authorities.  Some proposals focus on types of AI technologies (e.g., requirements to label all content generated by generative AI regardless of the intended use), whereas others take a sector-specific approach and recognize that FDA’s existing regulatory frameworks already govern certain uses of AI (e.g., Senator Cassidy’s white paper on the deployment of AI in healthcare settings, which disfavored a one-size-fits-all approach to AI regulation and instead called for leveraging existing frameworks).

But even sector-specific approaches may result in regulatory requirements that overlap with FDA requirements for FDA-regulated AI.  For example, in January 2024, HHS’s Office of the National Coordinator for Health Information Technology (“ONC”) published a final rule revising the certification requirements for health IT developers, which included requirements for AI-based “predictive decision support interventions” enabled by or interfacing with health IT.  Many predictive decision support interventions under the ONC final rule may also be FDA-regulated medical devices.  While ONC stated that it collaborated with FDA to maximize alignment, developers of medical device software that also is a predictive decision support intervention ultimately will need to assess compliance with both FDA’s and ONC’s requirements.

In short, it will be critical to monitor developments and craft engagement strategies as policy-makers continue to collaborate and draw new lines around AI regulation.

On March 11, 2024, the UK Government published its response (“Government Response”) to an independent review on equity in medical devices commissioned by the Department of Health and Social Care (“Review”).  The Government Response is neither guidance nor policy; rather, it is intended to act as an action plan for tackling potential bias in the design and use of medical devices.  The Government Response fully accepts, and in turn makes a series of commitments in response to, the findings of the Review (which are broken down into 18 recommendations, 51 sub-recommendations, and 3 further calls to action).  Importantly, the Government “wholeheartedly agrees…that medical technology should be unbiased and equitable.”

Bias in the medical device space, and within healthcare more broadly, is (quite rightly) a topic of growing importance to governments, regulators, and industry alike.  We set out some key points of interest from both the Review and the Government Response below.  We look forward to seeing more developments and guidance in this area going forwards, given its significance to patients and the delivery of healthcare.

Continue Reading UK Government Outlines New Action to Tackle Biases in Medical Devices

Senator Bill Cassidy (R-LA), the Ranking Member of the U.S. Senate Health, Education, Labor, and Pensions (“HELP”) Committee, published a white paper on February 21, 2024, with various proposals to update privacy protections for health data.  In Part 1 of this blog series (see here), we discussed the first section of the white paper; specifically, we summarized Senator Cassidy’s proposals on how to update the existing framework of the Health Insurance Portability and Accountability Act, as amended, and its implementing regulations (collectively, “HIPAA”) without disrupting decades of case law and precedent.  In this blog post, we discuss the other sections of the white paper, namely its proposals to protect sources of health data not currently covered by HIPAA.

Continue Reading Senator Cassidy Issues White Paper with Proposals to Update Health Data Privacy Framework – Part 2: Safeguarding Health Data Not Covered by HIPAA 

On February 21, 2024, Senator Bill Cassidy (R-LA), the Ranking Member of the U.S. Senate Health, Education, Labor, and Pensions (“HELP”) Committee, issued a white paper, “Strengthening Health Data Privacy for Americans: Addressing the Challenges of the Modern Era,” which proposes several updates to the privacy protections for health data.  This follows Senator Cassidy’s September 2023 request for information (“RFI”) from stakeholders about how to enhance health data privacy protections covered by the Health Insurance Portability and Accountability Act (“HIPAA”) framework and to consider privacy protections for other sources of health data not currently covered by HIPAA.  The white paper notes that several entities, including trade associations, hospitals, health technology companies, and think tanks, responded to the RFI.

Continue Reading Senator Cassidy Issues White Paper with Proposals to Update Health Data Privacy Framework – Part 1: Updates to the HIPAA Framework

On February 16, 2024, the U.S. Department of Health and Human Services (“HHS”) published a final rule to amend the Confidentiality of Substance Use Disorder (“SUD”) Patient Records regulations (“Part 2”) to more closely align Part 2 with the Health Insurance Portability and Accountability Act of 1996, as amended, and its implementing regulations (collectively, “HIPAA”) as required by Section 3221 of the 2020 Coronavirus Aid, Relief, and Economic Security Act (“CARES Act”).  We previously covered the proposed rule (hereinafter, “the NPRM”), which was issued on December 2, 2022.

The final rule, issued through the Office for Civil Rights (“OCR”) and the Substance Abuse and Mental Health Services Administration (“SAMHSA”), increases alignment between certain Part 2 requirements and HIPAA, and it clarifies certain existing Part 2 permissions and restrictions to improve the ability of entities to use and disclose Part 2 records.  According to HHS, the final rule will decrease burdens on patients and providers, improve coordination of care and access to care and treatment, and protect the confidentiality of treatment records.

Continue Reading HHS Publishes Final Rule to Align Part 2 and HIPAA

On December 5, 2023, the Spanish presidency of the Council of the EU issued a declaration to strengthen collaboration with Member States and the European Commission to develop a leading quantum technology ecosystem in Europe.

The declaration acknowledges the revolutionary potential of quantum computing, which uses quantum mechanics principles and quantum bits known as “qubits” to solve certain classes of complex mathematical problems exponentially faster than classical computers.

The declaration was launched with eight Member State signatories (Denmark, Finland, Germany, Greece, Hungary, Italy, Slovenia, and Sweden), and invites other Member States to sign.  By doing so, they agree to recognize the “strategic importance of quantum technologies for the scientific and industrial competitiveness of the EU” and commit to collaborating to make Europe the “quantum valley” of the world, “the leading region globally for quantum excellence and innovation.”

Continue Reading Quantum Computing: Action in the EU and Potential Impacts

Earlier today, the White House issued a Fact Sheet summarizing its Executive Order on a comprehensive strategy to support the development of safe and secure artificial intelligence (“AI”).  The Executive Order follows a number of actions by the Biden Administration on AI, including its Blueprint for an AI Bill of Rights and voluntary commitments from certain developers of AI systems.  According to the Administration, the Executive Order establishes new AI safety and security standards, protects privacy, advances equity and civil rights, protects workers, consumers, and patients, promotes innovation and competition, and advances American leadership.  This blog post summarizes these key components.

  • Safety & Security StandardsThe Executive Order sets out several required actions for developers of AI systems.  Notably, the White House, “in accordance with the Defense Production Action,” will require companies developing any foundation model “that poses a serious risk to national security, national economic security, or national public health and safety” to notify the federal government when training the model and provide results of all red-team safety tests to the government. Relatedly, the Executive Order directs certain federal agencies to undertake the following actions and initiatives:
    • National Institute of Standards and Technology:  establish standards for red-teaming required before the public release of an AI system. 
    • Department of Homeland Security:  apply the NIST standards to use of AI in critical infrastructure sectors and establish an AI Safety and Security Board. 
    • Departments of Energy and Homeland Security:  address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks; it also calls for the creation of standards for biological synthesis screening.
    • Department of Commerce:  develop guidance for content authentication and watermarking to label content generated by AI and received by the government; it also suggests that federal agencies would be required to use these tools.
    • National Security Council & White House Chief of Staff:  develop a National Security Memorandum that ensures that the United States military and intelligence community use AI safely, ethically, and effectively.
  • Privacy.  The Executive Order sets forth a number of requirements for the use of personal data for AI systems, including the prioritization of federal support for privacy-preserving techniques; strengthening privacy-preserving research and technologies (e.g., cryptographic tools); evaluating how agencies collect and use commercially available information containing personal data (including information procured from data brokers); and developing guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques.
  • Equity and Civil Rights.  To address concerns around equity and civil rights, the Executive Order sets forth a number of new requirements for documentation, coordination, and best practices.  Specifically, the Executive Order calls for clear guidance to landlords, Federal benefits programs, and Federal contractors to keep AI systems from being used to exacerbate discrimination.  The Executive Order also requires training, technical assistance, and coordination between the Department of Justice and Federal civil rights offices on best practices for investigating and prosecuting civil rights violations.  Additionally, the Executive Order calls for the creation of best practices on the use of AI in sentencing, parole, probation, and related topics for the criminal justice system.
  • Consumers, Patients, and Students.  Recognizing that AI can have unique impacts on consumers, patients, and students, the Executive Order directs agencies and certain sectors to engage in the following activities:
    • Healthcare:  To help advance the responsible use of AI in healthcare and the life sciences sector, including for drug development, the Department of Health and Human Services must establish a safety program to receive reports of, and act to remedy, harms or unsafe AI healthcare practices (see our prior post about FDA’s thinking on the use of AI and digital health technologies in drug development here).
    • Education:  The Executive Order calls for the creation of resources to support educators deploying AI-enabled tools.
    • Workers:  After acknowledging that AI offers the promise of improved productivity, the Executive Order requires the development of principles and best practices to mitigate harms and maximize benefits for workers, including by addressing job displacement and guidance for evaluating job applications fairly.  It also requires the production of a report on AI’s potential labor-market impacts.
  • Innovation and Competition.  The Executive Order sets forth a number of priorities to ensure American leadership in AI innovation, such as the creation of a pilot of the National AI Research Resource, a tool that will provide AI researchers and students with key AI resources and data and grant programs.  The Executive Order also discusses updates to the immigration process for individuals with expertise in critical AI areas.  Importantly, the Executive Order directs the Federal Trade Commission to exercise its legal authority to encourage a competitive AI ecosystem.
  • Leadership on International Frameworks.  The Executive Order directs the State Department and Department of Commerce to lead an effort to establish international frameworks governing AI.  These efforts are intended to support and complement the G-7 Hiroshima Process, the UK Summit on AI Safety, India’s leadership as Chair of the Global Partnership on AI, and ongoing discussions at the United Nations.
  • Government Use of AI.  The Executive Order recognizes the potential for AI systems to drive better results in the delivery of government services to citizens and directs certain actions to ensure the responsible deployment of AI by government entities.  For example, it requires guidance for agencies’ use of AI, improvements to the process for government acquisition of AI, and the hiring of AI talent by governments.

We are continuing to review and digest the content of the Executive Order and expect to supplement this post with additional observations across our blogs.

On September 27, 2023, Governor Newsom signed AB 254 and AB 352, which both amend the California Confidentiality of Medical Information Act (“CMIA”).  Specifically, AB 254 expands the scope of the CMIA to expressly cover reproductive or sexual health services that are delivered through digital health solutions and the associated health information generated from these services.  AB 352 imposes new requirements on how electronic health record (“EHR”) systems must store medical information related to gender affirming care, abortion and related services, and contraception and the ability of providers of health care, health care service plans, contractors, or employers to disclose such information.

Continue Reading California Enacts Amendments to the CMIA

On September 15, the Federal Trade Commission (“FTC”) and U.S. Department of Health and Human Services (“HHS”) announced an updated joint publication describing the privacy and security laws and rules that impact consumer health data.  Specifically, the “Collecting, Using, or Sharing Consumer Health Information? Look to HIPAA, the FTC Act, and the Health Breach Notification Rule” guidance provides an overview of the Health Insurance Portability and Accountability Act, as amended, and the implementing regulations issued by HHS (collectively, “HIPAA”); the FTC Act; and the FTC’s Health Breach Notification Rule (“HBNR”), and how they may apply to businesses.  This joint guidance follows a recent surge of FTC enforcement in the health privacy space.  We offer a high-level summary of the requirements flagged by the guidance.

Continue Reading FTC and HHS Announce Updated Health Privacy Publication

On September 6, Senator Bill Cassidy (R-LA), the Ranking Member of the U.S. Senate Health, Education, Labor, and Pensions (“HELP”) Committee, issued a white paper about the oversight and legislative role of Congress related to the deployment of artificial intelligence (AI) in areas under the HELP Committee’s jurisdiction, including health and life sciences.  In the white paper, Senator Cassidy disfavors a one-size-fits-all approach to the regulation of AI and instead calls for a flexible approach that leverages existing frameworks depending on the particular context of use of AI.  “[O]nly if our current frameworks are unable to accommodate . . . AI, should Congress look to create new ones or modernize existing ones.”  The Senator seeks public feedback on the white paper by September 22, 2023.  Health care and life sciences stakeholders should consider providing comments.

This blog outlines five key takeaways from the white paper from a health care and life sciences perspective. Note that beyond health and life sciences issues, the white paper also addresses considerations for other areas, such as use of AI in educational settings and labor/employment implications created by use of AI.

5 Key Takeaways for AI in Health Care and Life Sciences

The white paper – entitled “Exploring Congress’ Framework for the Future of AI: The Oversight and Legislative Role of Congress Over the Integration of Artificial Intelligence in Health, Education, and Labor” – describes the “enormous good” that AI in health care presents, such as “the potential to help create new cures, improve care, and reduce administrative burdens and overall health care spending.”  At the same time, Senator Cassidy notes that AI presents risks that legal frameworks should seek to minimize.  Five key takeaways from the white paper include:

  1. Senator Cassidy emphasizes that a one-size-fits-all approach will not work for AI and effectively grounds many of the broader, ongoing AI policy considerations in the HELP Committee’s core expertise and existing regulatory frameworks for health-related AI.  Many of the emerging frameworks for defining trustworthy or responsible AI and establishing AI risk management practices are drafted broadly for all uses of AI and may not reflect that AI applications in different sectors present unique challenges.  Leveraging the HELP Committee’s expertise around health regulatory frameworks (as well as other sectors within the Committee’s jurisdiction and expertise), the white paper concludes that the context of use greatly affects how policymakers should think about AI’s benefits and risks.  In other words, the white paper recognizes that AI deployed in healthcare settings requires a different regulatory approach compared to AI deployed in educational or employment settings.  Senator Cassidy expresses that a “sweeping, one-size-fits-all approach for regulating AI will not work and will stifle, not foster, innovation.”
  2. Changes to FDA’s device framework may be on the horizon.  The white paper expresses that Congress should only look to modernize existing frameworks (or create new ones) if current frameworks are unable to accommodate AI.  For example, the white paper acknowledges that the existing framework for preclinical and clinical investigation of new drugs is “generally well-suited to adapt to the use of AI to research and develop new drugs.”  In contrast, Senator Cassidy specifically notes that FDA’s medical device framework was not designed to accommodate AI that may improve over time, signaling potential future action by the HELP Committee to amend relevant provisions of the Federal Food, Drug, and Cosmetic Act (FDCA) to clarify how FDA will treat medical devices that integrate AI. 
  3. There are a variety of applications of AI that will benefit the healthcare and life sciences sector and, ultimately, public health, including:
    • Pharmaceutical research and development, such as with disease target and drug candidate identification and/or design;
    • Diagnostic and treatment applications, from early disease detection to AI applications intended to help identify and reduce medical errors;
    • Patient- and provider-facing support, including internally developed clinical decision support (CDS) algorithms and AI interfaces that engage directly with patients;
    • Health care administration and coverage, including claims management, surgical scheduling, generation of replies to patient messages, summarization of patient medical histories, and translation between languages and reading levels for patient materials; and
    • Use of AI to increase the speed and efficiency of FDA’s review processes.
  4. The acknowledgement of these important use cases in the health and life sciences sector leaves open several FDA regulatory questions.  For example:
    • As noted above, the white paper is fairly explicit on the point that changes to FDA’s regulatory framework may be required to address AI, but Senator Cassidy leaves open for comment what specific types of changes might need to be made.
    • For AI that does not meet the definition of a medical device (or is subject to enforcement discretion by FDA), Senator Cassidy leaves open for comment how health-related AI should be regulated (e.g., who is responsible for training clinicians before use of certain AI tools described in the white paper, and what standards does such training need to meet).
    • FDA expertise will be critical as AI plays a larger role in health and life sciences, and Senator Cassidy leaves open for comment how Congress should help FDA address these challenges. 
    • Where FDA incorporates AI into its own work, including premarket review processes, the white paper leaves open how sponsors and the public will know what review elements are being performed by AI and whether a unique process will be needed to appeal AI-based decisions within the Agency.
  5. Bias and transparency continue to be front-burner issues.  The discussion of bias and transparency in the white paper confirms that Congress is still focused on how to manage these issues in AI regulation.  The white paper states that AI tools should be developed in a transparent way that provides an understanding about how any given algorithm was designed, but leaves open for comment what specific guidelines and steps should satisfy this need.  The white paper also notes how any framework must build in a “clear method to measure effectiveness” and that Congress may need to consider how to best ensure that AI-enabled products do not give undue weight to potential biases. 

Bonus Takeaway: Healthcare applications for AI may create ambiguities about liability.  The white paper states that stakeholders need a clear understanding of potential liability around the use of AI.  Specifically, the white paper highlights open questions about how liability should be assigned among the original developer, the most recent developer, clinicians, and others.

Request for Stakeholder Feedback

Recognizing that the “insights of stakeholders that can describe the advantages and drawbacks of AI in our health care system . . . are critical as policy makers grapple with this topic,” Senator Cassidy requests “feedback and comments for ways to improve the framework in which these technologies are developed, reviewed, and used” by Friday, September 22.  Although feedback is not confined to these topics, the white paper poses the following questions for consideration specific to health care:

Supporting Medical Innovation:

  • How can FDA support the use of AI to design and develop new drugs and biologics?
  • What updates to the regulatory frameworks for drugs and biologics should Congress consider to facilitate innovation in AI applications?
  • How can FDA improve the use of AI in medical devices?
  • What updates to the regulatory frameworks for medical devices should Congress consider to facilitate innovation in AI applications while also ensuring that products are safe and effective for patients?
  • How can Congress help FDA ensure that it has access to the expertise required to review products that are developed using AI or that incorporate AI?
  • How can FDA better leverage AI to review product submissions?
  • How can FDA harness external expertise to support review of products that are developed using AI or that incorporate AI?
  • What are the potential consequences of regulating AI in the United States if it remains unregulated in other countries?

Medical Ethics and Protecting Patients:

  • What existing standards are in place to demonstrate clinical validity when leveraging AI?  What gaps exist in those standards?
  • What practices are in place to mitigate bias in AI decision-making?
  • What should be the federal role, if any, in addressing social and/or political bias?
  • How can AI be best adopted to not inappropriately deny patients care?
  • Is the current HIPAA framework equipped to safeguard patient privacy with regard to AI in clinical settings?  If not, where does it fall short, and how could the framework be better equipped?
  • What standards are in place to ensure that AI maintains respect and dignity for human life from conception to natural death?
  • Who should be responsible for determining safe and appropriate applications of AI algorithms?
  • Who should be liable for unsafe or inappropriate applications of AI algorithms?  The developer?  A regulating body?  A third party or private entity?