On September 6, 2023, Senator Bill Cassidy (R-LA), Ranking Member of the U.S. Senate Health, Education, Labor, and Pensions (HELP) Committee, issued a white paper on Congress's oversight and legislative role in the deployment of artificial intelligence (AI) in areas under the HELP Committee's jurisdiction, including health and life sciences.  In the white paper, Senator Cassidy disfavors a one-size-fits-all approach to the regulation of AI and instead calls for a flexible approach that leverages existing frameworks depending on the particular context in which AI is used.  “[O]nly if our current frameworks are unable to accommodate . . . AI, should Congress look to create new ones or modernize existing ones.”  The Senator seeks public feedback on the white paper by September 22, 2023.  Health care and life sciences stakeholders should consider providing comments. 

This blog post outlines, from a health care and life sciences perspective, five key takeaways from the white paper. Beyond health and life sciences issues, the white paper also addresses other areas within the Committee's jurisdiction, such as the use of AI in educational settings and the labor and employment implications of AI.


5 Key Takeaways for AI in Health Care and Life Sciences

The white paper – entitled “Exploring Congress’ Framework for the Future of AI: The Oversight and Legislative Role of Congress Over the Integration of Artificial Intelligence in Health, Education, and Labor” – describes the “enormous good” that AI in health care presents, such as “the potential to help create new cures, improve care, and reduce administrative burdens and overall health care spending.”  At the same time, Senator Cassidy notes that AI presents risks that legal frameworks should seek to minimize.  Five key takeaways from the white paper include:

  1. Senator Cassidy emphasizes that a one-size-fits-all approach will not work for AI and grounds many of the broader, ongoing AI policy considerations in the HELP Committee’s core expertise and existing regulatory frameworks for health-related AI.  Many of the emerging frameworks for defining trustworthy or responsible AI and establishing AI risk management practices are drafted broadly for all uses of AI and may not reflect that AI applications in different sectors present unique challenges.  Leveraging the HELP Committee’s expertise in health regulatory frameworks (as well as in other sectors within the Committee’s jurisdiction), the white paper concludes that the context of use greatly affects how policymakers should think about AI’s benefits and risks.  In other words, the white paper recognizes that AI deployed in health care settings requires a different regulatory approach than AI deployed in educational or employment settings.  As Senator Cassidy puts it, a “sweeping, one-size-fits-all approach for regulating AI will not work and will stifle, not foster, innovation.” 
  2. Changes to FDA’s device framework may be on the horizon.  The white paper maintains that Congress should look to modernize existing frameworks (or create new ones) only if current frameworks are unable to accommodate AI.  For example, the white paper acknowledges that the existing framework for preclinical and clinical investigation of new drugs is “generally well-suited to adapt to the use of AI to research and develop new drugs.”  In contrast, Senator Cassidy specifically notes that FDA’s medical device framework was not designed to accommodate AI that may improve over time, signaling potential future action by the HELP Committee to amend relevant provisions of the Federal Food, Drug, and Cosmetic Act (FDCA) to clarify how FDA will treat medical devices that integrate AI. 
  3. There are a variety of applications of AI that will benefit the healthcare and life sciences sector and, ultimately, the public health, including:
    • Pharmaceutical research and development, such as with disease target and drug candidate identification and/or design;
    • Diagnostic and treatment applications, from early disease detection to AI applications intended to help identify and reduce medical errors;
    • Patient- and provider-facing support, including internally developed clinical decision support (CDS) algorithms and AI interfaces that engage directly with patients;
    • Health care administration and coverage, including claims management, surgical scheduling, generation of replies to patient messages, summarization of patient medical histories, and translation between languages and reading levels for patient materials; and
    • Use of AI to increase the speed and efficiency of FDA’s review processes.
  4. The white paper’s acknowledgement of these important use cases in the health and life sciences sector leaves open several FDA regulatory questions.  For example:
    • As noted above, the white paper is fairly explicit on the point that changes to FDA’s regulatory framework may be required to address AI, but Senator Cassidy leaves open for comment what specific types of changes might need to be made.
    • For AI that does not meet the definition of a medical device (or is subject to enforcement discretion by FDA), Senator Cassidy leaves open for comment how health-related AI should be regulated (e.g., who is responsible for training clinicians before use of certain AI tools described in the white paper, and what standards does such training need to meet).
    • FDA expertise will be critical as AI plays a larger role in health and life sciences, and Senator Cassidy leaves open for comment how Congress should help FDA address these challenges. 
    • Where FDA incorporates AI into its own work, including premarket review processes, the white paper leaves open how sponsors and the public will know what review elements are being performed by AI and whether a unique process will be needed to appeal AI-based decisions within the Agency.
  5. Bias and transparency continue to be front-burner issues.  The discussion of bias and transparency in the white paper confirms that Congress is still focused on how to manage these issues in AI regulation.  The white paper states that AI tools should be developed in a transparent way that provides an understanding about how any given algorithm was designed, but leaves open for comment what specific guidelines and steps should satisfy this need.  The white paper also notes how any framework must build in a “clear method to measure effectiveness” and that Congress may need to consider how to best ensure that AI-enabled products do not give undue weight to potential biases. 

Bonus Takeaway: Health care applications of AI may create ambiguities about liability. The white paper states that stakeholders need a clear understanding of potential liability around the use of AI. Specifically, the white paper highlights open questions about how liability should be allocated among the original developer, the most recent developer, clinicians, and others.

Request for Stakeholder Feedback

Recognizing that the “insights of stakeholders that can describe the advantages and drawbacks of AI in our health care system . . . are critical as policy makers grapple with this topic,” Senator Cassidy requests “feedback and comments for ways to improve the framework in which these technologies are developed, reviewed, and used” by Friday, September 22.  Although feedback is not confined to these topics, the white paper poses the following questions for consideration specific to health care:

Supporting Medical Innovation:

  • How can FDA support the use of AI to design and develop new drugs and biologics?
  • What updates to the regulatory frameworks for drugs and biologics should Congress consider to facilitate innovation in AI applications?
  • How can FDA improve the use of AI in medical devices?
  • What updates to the regulatory frameworks for medical devices should Congress consider to facilitate innovation in AI applications while also ensuring that products are safe and effective for patients?
  • How can Congress help FDA ensure that it has access to the expertise required to review products that are developed using AI or that incorporate AI?
  • How can FDA better leverage AI to review product submissions?
  • How can FDA harness external expertise to support review of products that are developed using AI or that incorporate AI?
  • What are the potential consequences of regulating AI in the United States if it remains unregulated in other countries?

Medical Ethics and Protecting Patients:

  • What existing standards are in place to demonstrate clinical validity when leveraging AI?  What gaps exist in those standards?
  • What practices are in place to mitigate bias in AI decision-making?
  • What should be the federal role, if any, in addressing social and/or political bias?
  • How can AI be best adopted to not inappropriately deny patients care?
  • Is the current HIPAA framework equipped to safeguard patient privacy with regards to AI in clinical settings?  If not, how not or how to better equip the framework?
  • What standards are in place to ensure that AI maintains respect and dignity for human life from conception to natural death?
  • Who should be responsible for determining safe and appropriate applications of AI algorithms?
  • Who should be liable for unsafe or inappropriate applications of AI algorithms?  The developer?  A regulating body?  A third party or private entity?
Wade Ackerman

Wade Ackerman advises companies and trade associations on complex and novel FDA regulatory issues that require coordinated legal, regulatory, and public policy strategies.

Through more than 19 years of experience in private practice and positions within the FDA and on Capitol Hill, Wade has acquired unique insights into the evolving legal and regulatory landscape facing companies marketing FDA-regulated products. He co-leads Covington’s multidisciplinary Digital Health Initiative, which brings together the firm’s considerable global resources to advise life sciences and health technology clients harnessing the power of information technology and data to create new and cutting-edge innovations to improve health and achieve better outcomes for patients.

Until June 2016, Wade served as Senior FDA Counsel to the U.S. Senate Health, Education, Labor & Pensions (HELP) Committee Ranking Member Patty Murray (D-WA) and, prior to that, Chairman Tom Harkin (D-IA). While at the HELP Committee, Wade was involved in all major FDA legislative initiatives, oversight hearings, and other Senate HELP Committee activities concerning the FDA and the Federal Food, Drug, and Cosmetic Act. From January 2015 through June 2016, he helped negotiate many of the FDA-related provisions in the 21st Century Cures Act, which included reforms to FDA’s review and approval of new drugs, devices, combination products, and digital health software. He also worked closely with the FDA and other stakeholders as Congress examined legislative reforms in other key areas, including diagnostics and laboratory developed tests, cosmetics, and over-the-counter drugs.

Before taking his Senate role, Wade served for more than five years as Associate Chief Counsel within the FDA’s Office of Chief Counsel. He was responsible for providing legal advice to the FDA’s Center for Drug Evaluation and Research (CDER) and the Office of Commissioner (OC) on a wide range of issues. While at FDA, he also helped to develop and implement the Food and Drug Administration Safety and Innovation Act (FDASIA) of 2012 and the Drug Quality and Security Act (DQSA) of 2013—both significant reforms to FDA’s regulatory authorities.

Scott Danzis

Scott Danzis is a partner in Covington’s Food, Drug, and Device Practice Group and chairs the Firm’s Medical Device Industry Group. Scott is a leading expert on the regulation of medical devices, diagnostics, and digital health. He regularly helps clients navigate their most complex regulatory challenges, including strategies for premarket review, postmarket compliance, and enforcement actions. Scott counsels many of the world’s preeminent medical device companies on a range of matters, including advertising and promotion, recalls, quality system issues, medical device reporting, clinical and non-clinical testing, FDA inspections, and other regulatory matters.

Scott previously served as Special Assistant to the Chief Counsel in FDA’s Office of the Chief Counsel. At FDA, Scott was involved in a wide range of legal and regulatory matters, including significant rulemaking, enforcement actions, and legislative initiatives.

Scott speaks regularly at conferences regarding FDA regulation of devices and diagnostics, and since 2010 serves as an Adjunct Professor of Law at the Georgetown University Law Center, where he teaches a course on FDA law.

Scott is a graduate of the University of Virginia School of Law, where he was Editor-in-Chief of the Virginia Law Review and elected to the Order of the Coif. He also holds a Master’s Degree in Health Care Management and Policy from George Washington University and a Bachelor of Science from Cornell University.

While at FDA from 2006 to 2008, Scott was involved in a wide range of legal and regulatory matters related to medical devices and drugs, and worked on implementing key provisions of the Food and Drug Administration Amendments Act of 2007.

Scott has significant experience in the following areas:

  • FDA regulatory strategies, including strategies for the premarket review (510(k)s, PMAs) of medical devices;
  • Appeals and dispute resolution within FDA;
  • IDEs, INDs, and clinical trial regulation;
  • Advertising, promotion, and scientific exchange, including responding to enforcement actions and investigations;
  • Imports and exports of FDA regulated products;
  • QSR and cGMP requirements, including responding to FDA 483s and enforcement actions;
  • Product recalls;
  • Adverse event and MDR reporting;
  • FDA consent decrees and OIG corporate integrity agreements;
  • Regulatory due diligence;
  • Compliance with antifraud statutes, including the anti-kickback statute and the False Claims Act.

Scott developed and edited a book on the regulation of in vitro diagnostic products and laboratory testing, In Vitro Diagnostics: The Complete Regulatory Guide (FDLI, 2010).

Scott clerked for the Honorable Chester J. Straub on the U.S. Court of Appeals for the Second Circuit.

Christina Kuhn

Christina Kuhn advises medical device, pharmaceutical, and biotech companies on a broad range of FDA regulatory strategy and compliance matters. She has experience with cutting-edge and complex medical technologies, including software and digital health products, oncology products, next-generation sequencing, diagnostics, and combination products.

Christina frequently helps multinational device manufacturers as well as start-up device companies navigate the premarket regulatory process, advising companies on regulatory classification, clinical development strategy, and agency interactions. She also has significant experience counseling medical device companies on postmarket compliance requirements, including those related to advertising and promotion, quality systems and manufacturing, medical device reporting, registration and listing, and recalls. She advises clients on responding to and resolving enforcement actions, such as FDA inspections and Warning Letters as well as Department of Justice investigations.

Christina advises clients on, and performs regulatory due diligence for, corporate transactions, including acquisitions, public offerings, co-development agreements, and clinical trial agreements.

Christina also regularly assists industry associations and medical device and pharmaceutical companies in commenting on FDA guidance documents and rulemaking as well as drafting and analyzing federal legislation.

Christina is a frequent contributor to Covington’s Digital Health and InsideMedicalDevices blogs.

Olivia Dworkin

Olivia Dworkin minimizes regulatory and litigation risks for clients in the medical device, pharmaceutical, biotechnology, eCommerce, and digital health industries through strategic advice on complex FDA issues, helping to bring innovative products to market while ensuring regulatory compliance.

With a focus on cutting-edge medical technologies and digital health products and services, Olivia regularly helps new and established companies navigate a variety of state and federal regulatory, legislative, and compliance matters throughout the total product lifecycle. She has experience counseling clients on the development, FDA regulatory classification, and commercialization of digital health tools, including clinical decision support software, mobile medical applications, general wellness products, medical device data systems, administrative support software, and products that incorporate artificial intelligence, machine learning, and other emerging technologies.

Olivia also assists clients in advocating for legislative and regulatory policies that will support innovation and the safe deployment of digital health tools, including by drafting comments on proposed legislation, frameworks, whitepapers, and guidance documents. Olivia keeps close to the evolving regulatory landscape and is a frequent contributor to Covington’s Digital Health blog. Her work also has been featured in the Journal of Robotics, Artificial Intelligence & Law, Law360, and the Michigan Journal of Law and Mobility.

Prior to joining Covington, Olivia was a fellow at the University of Michigan Veterans Legal Clinic, where she gained valuable experience as the lead attorney successfully representing clients at case evaluations, mediations, and motion hearings. At Michigan Law, Olivia served as Online Editor of the Michigan Journal of Gender and Law, president of the Trial Advocacy Society, and president of the Michigan Law Mock Trial Team. She excelled in national mock trial competitions, earning two Medals for Excellence in Advocacy from the American College of Trial Lawyers and being selected as one of the top sixteen advocates in the country for an elite, invitation-only mock trial tournament.