On February 16, 2024, the U.S. Department of Health and Human Services (“HHS”) published a final rule to amend the Confidentiality of Substance Use Disorder (“SUD”) Patient Records regulations (“Part 2”) to more closely align Part 2 with the Health Insurance Portability and Accountability Act of 1996, as amended, and its implementing regulations (collectively, “HIPAA”) as required by Section 3221 of the 2020 Coronavirus Aid, Relief, and Economic Security Act (“CARES Act”).  We previously covered the proposed rule (hereinafter, “the NPRM”), which was issued on December 2, 2022.

The final rule, issued through the Office for Civil Rights (“OCR”) and the Substance Abuse and Mental Health Services Administration (“SAMHSA”), increases alignment between certain Part 2 requirements and HIPAA and clarifies certain existing Part 2 permissions and restrictions to improve the ability of entities to use and disclose Part 2 records. According to HHS, this final rule will decrease burdens on patients and providers, improve coordination of care and access to care and treatment, and protect the confidentiality of treatment records.

Continue Reading HHS Publishes Final Rule to Align Part 2 and HIPAA

On December 5, 2023, the Spanish presidency of the Council of the EU issued a declaration to strengthen collaboration with Member States and the European Commission to develop a leading quantum technology ecosystem in Europe.

The declaration acknowledges the revolutionary potential of quantum computing, which uses quantum mechanics principles and quantum bits known as “qubits” to solve complex mathematical problems exponentially faster than classical computers.

The declaration was launched with eight Member State signatories (Denmark, Finland, Germany, Greece, Hungary, Italy, Slovenia, and Sweden), and invites other Member States to sign. By doing so, they agree to recognize the “strategic importance of quantum technologies for the scientific and industrial competitiveness of the EU” and commit to collaborating to make Europe the “‘quantum valley’ of the world, the leading region globally for quantum excellence and innovation.”

Continue Reading Quantum Computing: Action in the EU and Potential Impacts

Earlier today, the White House issued a Fact Sheet summarizing its Executive Order on a comprehensive strategy to support the development of safe and secure artificial intelligence (“AI”).  The Executive Order follows a number of actions by the Biden Administration on AI, including its Blueprint for an AI Bill of Rights and voluntary commitments from certain developers of AI systems.  According to the Administration, the Executive Order establishes new AI safety and security standards, protects privacy, advances equity and civil rights, protects workers, consumers, and patients, promotes innovation and competition, and advances American leadership.  This blog post summarizes these key components.

  • Safety & Security StandardsThe Executive Order sets out several required actions for developers of AI systems.  Notably, the White House, “in accordance with the Defense Production Action,” will require companies developing any foundation model “that poses a serious risk to national security, national economic security, or national public health and safety” to notify the federal government when training the model and provide results of all red-team safety tests to the government. Relatedly, the Executive Order directs certain federal agencies to undertake the following actions and initiatives:
    • National Institute of Standards and Technology:  establish standards for red-teaming required before the public release of an AI system. 
    • Department of Homeland Security:  apply the NIST standards to use of AI in critical infrastructure sectors and establish an AI Safety and Security Board. 
    • Departments of Energy and Homeland Security:  address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks; it also calls for the creation of standards for biological synthesis screening.
    • Department of Commerce:  develop guidance for content authentication and watermarking to label content generated by AI and received by the government; it also suggests that federal agencies would be required to use these tools.
    • National Security Council & White House Chief of Staff:  develop a National Security Memorandum that ensures that the United States military and intelligence community use AI safely, ethically, and effectively.
  • Privacy.  The Executive Order sets forth a number of requirements for the use of personal data for AI systems, including the prioritization of federal support for privacy-preserving techniques; strengthening privacy-preserving research and technologies (e.g., cryptographic tools); evaluating how agencies collect and use commercially available information containing personal data (including information procured from data brokers); and developing guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques.
  • Equity and Civil Rights.  To address concerns around equity and civil rights, the Executive Order sets forth a number of new requirements for documentation, coordination, and best practices.  Specifically, the Executive Order calls for clear guidance to landlords, Federal benefits programs, and Federal contractors to keep AI systems from being used to exacerbate discrimination.  The Executive Order also requires training, technical assistance, and coordination between the Department of Justice and Federal civil rights offices on best practices for investigating and prosecuting civil rights violations.  Additionally, the Executive Order calls for the creation of best practices on the use of AI in sentencing, parole, probation, and related topics for the criminal justice system.
  • Consumers, Patients, and Students.  Recognizing that AI can have unique impacts on consumers, patients, and students, the Executive Order directs agencies and certain sectors to engage in the following activities:
    • Healthcare:  To help advance the responsible use of AI in healthcare and the life sciences sector, including for drug development, the Department of Health and Human Services must establish a safety program to receive reports of, and act to remedy, harms or unsafe AI healthcare practices (see our prior post about FDA’s thinking on the use of AI and digital health technologies in drug development here).
    • Education:  The Executive Order calls for the creation of resources to support educators deploying AI-enabled tools.
    • Workers:  After acknowledging that AI offers the promise of improved productivity, the Executive Order requires the development of principles and best practices to mitigate harms and maximize benefits for workers, including by addressing job displacement and guidance for evaluating job applications fairly.  It also requires the production of a report on AI’s potential labor-market impacts.
  • Innovation and Competition.  The Executive Order sets forth a number of priorities to ensure American leadership in AI innovation, such as the creation of a pilot of the National AI Research Resource, a tool that will provide AI researchers and students with key AI resources and data, as well as grant programs.  The Executive Order also discusses updates to the immigration process for individuals with expertise in critical AI areas.  Importantly, the Executive Order directs the Federal Trade Commission to exercise its legal authority to encourage a competitive AI ecosystem.
  • Leadership on International Frameworks.  The Executive Order directs the State Department and Department of Commerce to lead an effort to establish international frameworks governing AI.  These efforts are intended to support and complement the G-7 Hiroshima Process, the UK Summit on AI Safety, India’s leadership as Chair of the Global Partnership on AI, and ongoing discussions at the United Nations.
  • Government Use of AI.  The Executive Order recognizes the potential for AI systems to drive better results in the delivery of government services to citizens and directs certain actions to ensure the responsible deployment of AI by government entities.  For example, it requires guidance for agencies’ use of AI, improvements to the process for government acquisition of AI, and the hiring of AI talent by governments.

We are continuing to review and digest the content of the Executive Order and expect to supplement this post with additional observations across our blogs.

On September 27, 2023, Governor Newsom signed AB 254 and AB 352, which both amend the California Confidentiality of Medical Information Act (“CMIA”).  Specifically, AB 254 expands the scope of the CMIA to expressly cover reproductive or sexual health services that are delivered through digital health solutions and the associated health information generated from these services.  AB 352 imposes new requirements on how electronic health record (“EHR”) systems must store medical information related to gender affirming care, abortion and related services, and contraception, as well as on the ability of providers of health care, health care service plans, contractors, or employers to disclose such information.

Continue Reading California Enacts Amendments to the CMIA

On September 15, the Federal Trade Commission (“FTC”) and U.S. Department of Health and Human Services (“HHS”) announced an updated joint publication describing the privacy and security laws and rules that impact consumer health data.  Specifically, the “Collecting, Using, or Sharing Consumer Health Information? Look to HIPAA, the FTC Act, and the Health Breach Notification Rule” guidance provides an overview of the Health Insurance Portability and Accountability Act, as amended, and the implementing regulations issued by HHS (collectively “HIPAA”); the FTC Act; and the FTC’s Health Breach Notification Rule (“HBNR”) and how they may apply to businesses.  This joint guidance follows a recent surge of FTC enforcement in the health privacy space.  We offer a high-level summary of the requirements flagged by the guidance.

Continue Reading FTC and HHS Announce Updated Health Privacy Publication

On September 6, Senator Bill Cassidy (R-LA), the Ranking Member of the U.S. Senate Health, Education, Labor and Pensions (HELP) Committee, issued a white paper about the oversight and legislative role of Congress related to the deployment of artificial intelligence (AI) in areas under the HELP Committee’s jurisdiction, including health and life sciences.  In the white paper, Senator Cassidy disfavors a one-size-fits-all approach to the regulation of AI and instead calls for a flexible approach that leverages existing frameworks depending on the particular context of use of AI.  “[O]nly if our current frameworks are unable to accommodate . . . AI, should Congress look to create new ones or modernize existing ones.”  The Senator seeks public feedback on the white paper by September 22, 2023.  Health care and life sciences stakeholders should consider providing comments. 

This blog outlines five key takeaways from the white paper from a health care and life sciences perspective. Note that beyond health and life sciences issues, the white paper also addresses considerations for other areas, such as use of AI in educational settings and labor/employment implications created by use of AI.


5 Key Takeaways for AI in Health Care and Life Sciences

The white paper – entitled “Exploring Congress’ Framework for the Future of AI: The Oversight and Legislative Role of Congress Over the Integration of Artificial Intelligence in Health, Education, and Labor” – describes the “enormous good” that AI in health care presents, such as “the potential to help create new cures, improve care, and reduce administrative burdens and overall health care spending.”  At the same time, Senator Cassidy notes that AI presents risks that legal frameworks should seek to minimize.  Five key takeaways from the white paper include:

  1. Senator Cassidy emphasizes that a one-size-fits-all approach will not work for AI and effectively grounds many of the broader, ongoing AI policy considerations into the HELP Committee’s core expertise and existing regulatory frameworks for health-related AI.  Many of the emerging frameworks for defining trustworthy or responsible AI and establishing AI risk management practices are drafted broadly for all uses of AI and may not reflect that AI applications in different sectors present unique challenges.  Leveraging the HELP Committee’s expertise around health regulatory frameworks (as well as other sectors within the Committee’s jurisdiction and expertise), the white paper concludes that the context of use greatly affects how policymakers should think about AI’s benefits and risks.  In other words, the white paper recognizes that AI deployed in healthcare settings requires a different regulatory approach compared to AI deployed in educational or employment settings.  Senator Cassidy expresses that a “sweeping, one-size-fits-all approach for regulating AI will not work and will stifle, not foster, innovation.”
  2. Changes to FDA’s device framework may be on the horizon.  The white paper expresses that Congress should only look to modernize existing frameworks (or create new ones) if current frameworks are unable to accommodate AI.  For example, the white paper acknowledges that the existing framework for preclinical and clinical investigation of new drugs is “generally well-suited to adapt to the use of AI to research and develop new drugs.”  In contrast, Senator Cassidy specifically notes that FDA’s medical device framework was not designed to accommodate AI that may improve over time, signaling potential future action by the HELP Committee to amend relevant provisions of the Federal Food, Drug, and Cosmetic Act (FDCA) to clarify how FDA will treat medical devices that integrate AI. 
  3. There are a variety of applications of AI that will benefit the healthcare and life sciences sector and, ultimately, the public health, including:
    • Pharmaceutical research and development, such as with disease target and drug candidate identification and/or design;
    • Diagnostic and treatment applications, from early disease detection to AI applications intended to help identify and reduce medical errors;
    • Patient- and provider-facing support, including internally-developed clinical decision support (CDS) algorithms and AI interfaces that engage directly with patients;
    • Health care administration and coverage, including claims management, surgical scheduling, generation of replies to patient messages, summarization of patient medical histories, and translation between languages and reading levels for patient materials; and
    • Use of AI to increase the speed and efficiency of FDA’s review processes.
  4. The acknowledgement of these important use cases in the health and life sciences sector leaves open FDA regulatory questions.  For example:
    • As noted above, the white paper is fairly explicit on the point that changes to FDA’s regulatory framework may be required to address AI, but Senator Cassidy leaves open for comment what specific types of changes might need to be made.
    • For AI that does not meet the definition of a medical device (or is subject to enforcement discretion by FDA), Senator Cassidy leaves open for comment how health-related AI should be regulated (e.g., who is responsible for training clinicians before they use certain AI tools described in the white paper, and what standards such training must meet).
    • FDA expertise will be critical as AI plays a larger role in health and life sciences, and Senator Cassidy leaves open for comment how Congress should help FDA address these challenges. 
    • Where FDA incorporates AI into its own work, including premarket review processes, the white paper leaves open how sponsors and the public will know what review elements are being performed by AI and whether a unique process will be needed to appeal AI-based decisions within the Agency.
  5. Bias and transparency continue to be front-burner issues.  The discussion of bias and transparency in the white paper confirms that Congress is still focused on how to manage these issues in AI regulation.  The white paper states that AI tools should be developed in a transparent way that provides an understanding about how any given algorithm was designed, but leaves open for comment what specific guidelines and steps should satisfy this need.  The white paper also notes how any framework must build in a “clear method to measure effectiveness” and that Congress may need to consider how to best ensure that AI-enabled products do not give undue weight to potential biases. 

Bonus Takeaway: Healthcare applications for AI may create ambiguities about liability. The white paper states that stakeholders need a clear understanding of potential liability around the use of AI. Specifically, the white paper highlights open questions about liability assignment between the original developer, the most recent developer, clinicians, or others.

Request for Stakeholder Feedback

Recognizing that the “insights of stakeholders that can describe the advantages and drawbacks of AI in our health care system . . . are critical as policy makers grapple with this topic,” Senator Cassidy requests “feedback and comments for ways to improve the framework in which these technologies are developed, reviewed, and used” by Friday, September 22.  Although feedback is not confined to these topics, the white paper poses the following questions for consideration specific to health care:

Supporting Medical Innovation:

  • How can FDA support the use of AI to design and develop new drugs and biologics?
  • What updates to the regulatory frameworks for drugs and biologics should Congress consider to facilitate innovation in AI applications?
  • How can FDA improve the use of AI in medical devices?
  • What updates to the regulatory frameworks for medical devices should Congress consider to facilitate innovation in AI applications while also ensuring that products are safe and effective for patients?
  • How can Congress help FDA ensure that it has access to the expertise required to review products that are developed using AI or that incorporate AI?
  • How can FDA better leverage AI to review product submissions?
  • How can FDA harness external expertise to support review of products that are developed using AI or that incorporate AI?
  • What are the potential consequences of regulating AI in the United States if it remains unregulated in other countries?

Medical Ethics and Protecting Patients:

  • What existing standards are in place to demonstrate clinical validity when leveraging AI?  What gaps exist in those standards?
  • What practices are in place to mitigate bias in AI decision-making?
  • What should be the federal role, if any, in addressing social and/or political bias?
  • How can AI be best adopted to not inappropriately deny patients care?
  • Is the current HIPAA framework equipped to safeguard patient privacy with regards to AI in clinical settings?  If not, how not or how to better equip the framework?
  • What standards are in place to ensure that AI maintains respect and dignity for human life from conception to natural death?
  • Who should be responsible for determining safe and appropriate applications of AI algorithms?
  • Who should be liable for unsafe or inappropriate applications of AI algorithms?  The developer?  A regulating body?  A third party or private entity?

Academic and commercial researchers using patient data are increasingly turning to real-world data to gain broader insights into patient care and treatment. The UK is uniquely positioned to unlock the potential research, innovation and social value of health data generated by the NHS, but, until recently, there has not been a unified approach to academic and commercial collaborations involving patient data.

The newly released guide to effective NHS data partnerships from NHS England (the “Guidance”) recognises the significant social and economic value that data partnerships can offer, and aims to provide guidance to NHS organisations on establishing data partnerships in order to promote consistency and streamline approaches across the NHS.

The Guidance covers several types of data partnerships, including third-party requests to access NHS data assets for research and development purposes, validation of a solution to a healthcare problem requiring data access, or an NHS organisation reaching out to third parties to use data to solve a particular challenge.

Continue Reading NHS Data Partnerships Guide – unlocking the benefits of data partnerships

Hot on the heels of recent announcements from the U.S. Food and Drug Administration (see our prior blogs here), the European Medicines Agency (“EMA”) has joined the conversation on the use of Artificial Intelligence (“AI”) and Machine Learning (“ML”) technologies in the medicinal product lifecycle.

AI and ML have the potential to enhance every stage of the medicinal product lifecycle, from drug discovery, through to clinical development, manufacturing and post-market pharmacovigilance.  These technologies can display intelligent behaviour and can analyse huge amounts of data.  They are also extremely flexible as they can be trained using data, rather than explicit programming.  When used correctly, AI and ML can “effectively support the acquisition, transformation, analysis, and interpretation of data within the medicinal product lifecycle.”

However, the nature of these technologies also leads to certain risks.  Importantly, there can be a lack of transparency in the models.  Also, the data-driven approach means they can be prone to bias.  The EMA has therefore published a draft “Reflection paper on use of Artificial Intelligence (AI) in medicinal product lifecycle” (the “Draft Reflection Paper”), which is open to consultation until 31 December 2023.  The EMA sees the Draft Reflection Paper as a way to open “a dialogue with developers, academics, and other regulators.” 

What does the Draft Reflection Paper cover?

The Draft Reflection Paper sets out the EMA’s current thinking on the use of AI to “support the safe and effective development, regulation and use of … medicines.”  It applies primarily to human medicines, noting that while similar principles apply to veterinary medicines, specific reflections/guidance are needed for the veterinary space.

The purpose of the Draft Reflection Paper is to identify uses of AI/ML that fall within the EMA’s/National Competent Authorities’ remit.  This obviously includes the use of AI in the medicinal product lifecycle but also extends to the use of medical devices with AI/ML technology that are used to generate evidence to support an EU marketing authorisation (i.e., used within the context of clinical trials or combined with the use of a medicinal product).

Use of AI/ML in the medicines lifecycle

The EMA highlights as a “key principle” that marketing authorisation applicants (“Applicants”) and marketing authorisation holders (“MAHs”) will bear responsibility for ensuring that the AI/ML technologies they use are “fit for purpose and are in line with ethical, technical, scientific, and regulatory standards as described in GxP standards and current EMA scientific guidelines.”

In summary, the Draft Reflection Paper requires that Applicants take a “risk-based approach for development, deployment and performance monitoring of AI and ML tools.”  The degree of risk will be determined by a number of factors, including: the AI technology itself; the context of use; the degree of influence of the AI/ML technology; and the stage of lifecycle of the medicinal product. 

The Draft Reflection Paper considers use of AI/ML at different stages along the product lifecycle and sets out principles and an indication of risk of applying AI/ML at each such stage:

  • Drug discovery — the EMA acknowledges that the use of AI/ML in drug discovery may be low risk from a regulatory perspective, “as the risk of non-optimal performance often mainly affects the sponsor.”  However, if results contribute to the total body of evidence presented for regulatory review then the regulatory risk increases.
  • Non-clinical development — AI/ML (e.g., “AI/ML modelling approaches to replace, reduce, and refine the use of animals”) should follow Good Laboratory Practice (“GLP”), where applicable.  Applicants should consider Application of GLP Principles to Computerised Systems and GLP Data Integrity, and their SOPs should cover AI/ML.
  • Clinical trials — AI/ML models (for example, that support selection of patients based on disease characteristics or clinical parameters) must comply with ICH GCP.  The regulatory risk for use of AI/ML increases from early stage to pivotal clinical trials.  Where models are generated for clinical trials, it is likely they will be considered part of the clinical trial data or trial protocol dossier and the models must be made available for regulators to assess at the time of marketing authorisation or clinical trial application. Where data collected/generated with AI/ML may impact the regulatory assessment of a medicine, the EMA recommends early regulatory interaction.
  • Precision medicine — the EMA considers the use of AI/ML in individualizing treatment (e.g., patient selection, dosing, de novo design of product variants) as high-risk from a medicines regulation perspective. The EMA recommends “special care … in defining what constitutes a change in posology (requiring a regulatory evaluation before implementation), to provide guidance that the prescribers can critically apprehend, and include fall-back treatment strategies in cases of technical failure.”
  • Product information — AI/ML might be used to draft, compile, translate or review information documents.  Recognizing the risk of hallucinations (plausible but erroneous output) by generative language models, the EMA expects use of such technologies only under “close human supervision.”
  • Manufacturing — use of AI/ML in drug manufacturing is expected to increase in the future and the EMA notes that this must comply with relevant quality management principles.
  • Post-authorization phase — AI/ML is likely to have potential to support post-authorization safety and efficacy studies in human medicines, plus pharmacovigilance activities, such as adverse event report management and signal detection.  The MAH must “validate, monitor and document model performance and include AI/ML operations in the pharmacovigilance system, to mitigate risks related to all algorithms and models used.”

Considerations for use of AI/ML

The Draft Reflection Paper sets out detailed measures that Applicants can take when using AI/ML technologies.  Some key points include:

  • Interacting with regulators: Applicants should carry out a regulatory impact and risk analysis.  The higher the regulatory impact or risk associated with the use of AI/ML technologies, the sooner the EMA recommends the Applicant engages with regulators to seek scientific advice.
  • Technical considerations:
    • Data acquisition: Applicants should use all efforts and active measures to avoid integration of bias in AI/ML and should document the source of data and the process of acquisition in a traceable manner in line with GxP. 
    • Training, validation and test data: the EMA discusses validation of models, which, importantly, differs from the concept of validation in the field of medicines.
    • Model development: the EMA encourages development and use of generalizable and robust models.
    • Performance Assessments: the Paper highlights the importance of selecting the correct metrics for performance assessments.
    • Interpretability and explainability: although transparent models are preferred, the EMA states that a “black box” model may be acceptable if developers can substantiate why transparent models are unsatisfactory.  The EMA encourages use of methods within the field of explainable AI wherever possible.
    • Model deployment: a risk-based approach is required for model deployment.
  • Ethical Principles: developers should follow basic ethical principles defined in the guidelines for trustworthy AI and presented in the Assessment List for Trustworthy Artificial Intelligence for self-assessment (ALTAI). They should also take a “human-centric” approach to all development and deployment of AI/ML.
  • Governance, Data Protection and Integrity: Applicants and MAHs also need to consider and reflect governance, data protection and integrity principles.

Next Steps

The EMA will finalize the Draft Reflection Paper following the end of the consultation period.  It also intends to provide additional guidance on risk management and may update existing guidance to take into account the specific issues that AI/ML pose.

Given that the Draft Reflection Paper puts the onus on Applicants and MAHs to ensure that the algorithms, models, datasets, etc. they use are compliant, biopharma companies considering the use of AI/ML should watch this space and keep up to date with upcoming developments.

On Thursday, July 13, 2023, the Centers for Medicare & Medicaid Services (CMS) released the Calendar Year (CY) 2024 Physician Fee Schedule (PFS) Proposed Rule (CY 2024 PFS Proposed Rule), which proposes policy changes to the PFS and other Medicare Part B issues, effective on or after January 1, 2024.  As explained in our article, the CY 2024 PFS Proposed Rule includes several policy proposals that would change coverage of and access to telehealth services for Medicare beneficiaries.  Among other policy proposals, the CY 2024 PFS Proposed Rule would:

  • Implement the telehealth provisions in the Consolidated Appropriations Act, 2023;
  • Update the process for adding, removing, or changing services on the Medicare Telehealth Services List;
  • Change the payment methodology for Medicare telehealth services furnished to beneficiaries at home; and
  • Extend direct supervision via use of two-way audio/video communications technology.

Read the full article to learn more about CMS’ proposed changes to Medicare telehealth policy.

Last week, the U.S. Food & Drug Administration (“FDA” or the “Agency”) issued a second discussion paper on the use of artificial intelligence (“AI”) and machine learning (“ML”) with respect to drug and biological products, this time focusing on the use of AI/ML in the drug and biologic development process, “Using Artificial Intelligence & Machine Learning in the Development of Drug & Biological Products” (“Second Discussion Paper”).[1] The Second Discussion Paper was issued by the Center for Drug Evaluation and Research (“CDER”), Center for Biologics Evaluation and Research (“CBER”), and Center for Devices and Radiological Health (“CDRH”). In terms of scope, it addresses the use of AI/ML in drug and biologic development, as well as in devices intended to be used in combination with drugs or biologics (including, but not limited to, combination products, companion devices, and complementary devices).

In the Second Discussion Paper and associated press release, FDA recognizes the significance of AI/ML in drug[2] development, citing the more than 100 drug and biological product applications—submitted in 2021 alone—that included AI/ML components, and the areas of drug development where AI/ML efforts are already active, including clinical trial design, use of digital health technologies (“DHTs”), and real-world data (“RWD”) analytics. The Second Discussion Paper does not endorse any specific approaches for the use of AI/ML in drug development, but rather seeks feedback from stakeholders that can help inform the Agency’s future regulatory activities.

This client alert provides a high-level overview of the Second Discussion Paper, as well as areas for potential engagement with the Agency on the use of AI/ML in drug development. Comments on the Second Discussion Paper must be submitted to FDA by August 9, 2023.

Current and Potential Uses of AI/ML in Drug Development

In the Second Discussion Paper, FDA highlights the many ways AI/ML is currently or could potentially be used in the drug development process, including:

  • Drug Discovery: FDA notes that early drug discovery is one of the areas in which sponsors have significant interest in utilizing AI/ML. In particular, FDA discusses the ways in which AI/ML has or can be used in the drug identification, selection, and prioritization process, as well as in the compound screening and design process.
  • Nonclinical Research: FDA discusses the ways in which AI/ML could be leveraged to support nonclinical research. FDA notes, for example, that a recurrent neural network, an ML algorithm, may be used to complement traditional pharmacokinetic / pharmacodynamic models in areas of highly complex data analysis.
  • Clinical Research: FDA observes that one of the “most significant applications of AI/ML” is in efforts to streamline and advance clinical research. For instance, FDA discusses AI/ML’s ability to analyze vast amounts of data and the potential to inform the design and efficiency of non-traditional trials, such as decentralized clinical trials. FDA specifically notes AI/ML’s use in a number of areas related to the conduct of clinical research, including recruitment, dose/dosing regimen optimization, adherence, retention, and site selection.
  • Clinical Trial Data Collection, Management, and Analysis, and Clinical Endpoint Assessment: FDA discusses the ways in which AI/ML could be used to collect, manage, and analyze clinical trial data, including the potential role of DHTs to enable the use of AI/ML in clinical trials, the use of AI/ML to enhance data integration and perform data quality assessments, and the use of AI/ML to analyze complex RWD or to build digital twins of patients to analyze how a patient may have progressed on a placebo versus an investigational treatment. FDA also notes the potential use of AI/ML to detect a possible safety signal, or to assess outcomes captured from diverse sources (e.g., DHTs, social media) during a clinical trial.
  • Postmarketing Safety Surveillance: FDA notes the ways in which post-approval pharmacovigilance can be supported by AI/ML, for instance by case processing (e.g., detecting information from source documents to help identify adverse events for individual case safety report (“ICSR”) submission), case evaluation (e.g., assessing the possibility of a causal relationship between the drug and the adverse event), and case submission (e.g., automating reporting rules for submission of ICSRs).
  • Advanced Pharmaceutical Manufacturing: As noted above, CDER previously issued a discussion paper in March 2023 focused on AI/ML in drug manufacturing. Now, in the Second Discussion Paper, FDA elaborates on the ways in which advanced analytics leveraging AI/ML has already been deployed or has potential to support pharmaceutical manufacturing efforts, including enhancing process controls, increasing equipment reliability, monitoring early warnings that a manufacturing process is not in a state of control, detecting recurring problems, and preventing batch losses. FDA specifically notes the potential for AI/ML, in concert with other advanced manufacturing technologies (such as process analytical technology (“PAT”) and continuous manufacturing) to enhance and modernize pharmaceutical manufacturing, and alleviate supply chain and shortage issues. FDA identifies four specific areas in which AI/ML could be applied throughout the entire product manufacturing lifecycle: (1) optimization of process design (e.g., use of digital twins in process design optimization); (2) advanced process control implementation; (3) smart monitoring and maintenance; and (4) trending activities (such as trending of deviations, root causes, and CAPA effectiveness).

Considerations for the Use of AI/ML in Drug Development and Opportunities for Engagement with FDA

FDA acknowledges the potential for AI/ML to accelerate the drug development process and make clinical trials safer and more efficient. The Second Discussion Paper also acknowledges the need for the Agency to assess whether the use of AI/ML in these contexts introduces unique risks and harms, including the potential for limited explainability due to the complexity or proprietary nature of an AI/ML system, questions about reliability, and the potential for bias.

Accordingly, FDA notes a focus on “developing standards for trustworthy AI that address specific characteristics in areas such as explainability, reliability, privacy, safety, security, and bias mitigation.” To help address these issues, FDA intends to consider the applicability of certain overarching standards and practices for the general application of AI/ML, and to seek feedback from stakeholders to help identify specific good practices with respect to AI/ML in the context of drug development.

Overarching Standards and Practices for the Use of AI/ML

FDA intends to explore the potential utility and applicability of overarching standards and practices for the use of AI/ML that are not specific to the drug development context. These include AI/ML principles outlined in federal executive orders, the AI Plan developed by the National Institute of Standards and Technology, and AI/ML standards established by standards organizations. The Second Discussion Paper also acknowledges the potential usefulness of the Agency’s frameworks for software as a medical device (“SaMD”), such as an April 2019 discussion paper that proposed a regulatory framework for modifications to AI-based SaMD, a January 2021 AI “Action Plan” for SaMD, and October 2021 guiding principles to inform the development of Good Machine Learning Practice for AI/ML-based medical devices. It seems likely that the Agency will leverage some principles from these sources in developing AI/ML standards for drug development and the development of devices intended to be used with drugs.

Opportunity for Engagement: Request for Feedback

Although the above-referenced, overarching standards may serve as a useful starting point, FDA seeks feedback from stakeholders that highlights additional or unique considerations for AI/ML deployed in the drug development context. Specifically, FDA solicits feedback on three key areas: (1) human-led governance, accountability, and transparency; (2) quality, reliability, and representativeness of data; and (3) model development, performance, monitoring, and validation. The Agency outlines specific questions within each of these areas in the Second Discussion Paper.

  • With respect to human-led governance, accountability, and transparency, FDA emphasizes the value of governance and accountability in developing trustworthy AI. The Agency seeks feedback about specific use cases in drug development that have the greatest need for regulatory clarity, what transparency means in the use of AI/ML in drug development, the barriers and facilitators of transparency in these contexts, and good practices for providing risk-based, meaningful human involvement.
  • With respect to quality, reliability, and representativeness of data, FDA acknowledges that ensuring “data quality, reliability, and that the data are fit for use (i.e., relevant for the specific intended use and population) can be critical,” and highlights data-related issues such as bias, completeness and accuracy of data, privacy and security, record trails, relevance, replicability, reproducibility, and representativeness. FDA solicits feedback on key practices utilized by stakeholders to help address these issues.
  • Finally, with respect to model development, performance, monitoring, and validation, FDA highlights the importance of evaluating AI/ML models over time to consider the model risk and credibility. For example, FDA acknowledges that there may be overall advantages to selecting a more traditional and parsimonious (i.e., fewer parameters) model over complex models where the models perform similarly. Additionally, the Second Discussion Paper states it may be important to examine corrective actions and real-world performance, conduct postmarket surveillance, verify the software code and calculations, and evaluate the applicability of validation assessments to the context of use. FDA solicits feedback on examples of tools, processes, approaches, and best practices being used by stakeholders to monitor and develop AI/ML models.

Submitting feedback on these questions is an important opportunity to help develop the standards that govern the use of AI/ML in drug development. The comment period closes on August 9, 2023.

Other Opportunities for Engagement

FDA also is coordinating a number of mechanisms for stakeholders to engage with the Agency on AI/ML in drug development, such as a workshop with stakeholders, public meetings, and further Critical Path Innovation, ISTAND Pilot Program, Emerging Technology Program, and Real-World Evidence Program meetings. FDA views these efforts and collaborations as providing “a foundation for a future framework or guidance.” Stakeholders should watch closely for these opportunities.


[1] For a summary and analysis of FDA’s first discussion paper, which focused on the use of AI in drug manufacturing, please see our prior blog post, “FDA Seeks Comments on Agency Actions to Advance Use of AI and Digital Health Technologies in Drug Development.” The first discussion paper, “Artificial Intelligence in Drug Manufacturing,” was issued by the Center for Drug Evaluation and Research (CDER), and is available at https://www.fda.gov/media/165743/download.

[2] For purposes of the Second Discussion Paper, FDA states that all references to “drug” or “drugs” include both human drugs and biological products.