Earlier today, the White House issued a Fact Sheet summarizing its Executive Order on a comprehensive strategy to support the development of safe and secure artificial intelligence (“AI”).  The Executive Order follows a number of actions by the Biden Administration on AI, including its Blueprint for an AI Bill of Rights and voluntary commitments from certain developers of AI systems.  According to the Administration, the Executive Order establishes new AI safety and security standards, protects privacy, advances equity and civil rights, protects workers, consumers, and patients, promotes innovation and competition, and advances American leadership.  This blog post summarizes these key components.

  • Safety & Security Standards.  The Executive Order sets out several required actions for developers of AI systems.  Notably, the White House, “in accordance with the Defense Production Act,” will require companies developing any foundation model “that poses a serious risk to national security, national economic security, or national public health and safety” to notify the federal government when training the model and provide results of all red-team safety tests to the government.  Relatedly, the Executive Order directs certain federal agencies to undertake the following actions and initiatives:
    • National Institute of Standards and Technology:  establish standards for red-teaming required before the public release of an AI system. 
    • Department of Homeland Security:  apply the NIST standards to use of AI in critical infrastructure sectors and establish an AI Safety and Security Board. 
    • Departments of Energy and Homeland Security:  address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks; it also calls for the creation of standards for biological synthesis screening.
    • Department of Commerce:  develop guidance for content authentication and watermarking to label content generated by AI and received by the government; it also suggests that federal agencies would be required to use these tools.
    • National Security Council & White House Chief of Staff:  develop a National Security Memorandum that ensures that the United States military and intelligence community use AI safely, ethically, and effectively.
  • Privacy.  The Executive Order sets forth a number of requirements for the use of personal data for AI systems, including the prioritization of federal support for privacy-preserving techniques; strengthening privacy-preserving research and technologies (e.g., cryptographic tools); evaluating how agencies collect and use commercially available information containing personal data (including information procured from data brokers); and developing guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques.
  • Equity and Civil Rights.  To address concerns around equity and civil rights, the Executive Order sets forth a number of new requirements for documentation, coordination, and best practices.  Specifically, the Executive Order calls for clear guidance to landlords, Federal benefits programs, and Federal contractors to keep AI systems from being used to exacerbate discrimination.  The Executive Order also requires training, technical assistance, and coordination between the Department of Justice and Federal civil rights offices on best practices for investigating and prosecuting civil rights violations.  Additionally, the Executive Order calls for the creation of best practices on the use of AI in sentencing, parole, probation, and related topics for the criminal justice system.
  • Consumers, Patients, and Students.  Recognizing that AI can have unique impacts on consumers, patients, and students, the Executive Order directs agencies and certain sectors to engage in the following activities: 
    • Healthcare:  To help advance the responsible use of AI in healthcare and the life sciences sector, including for drug development, the Department of Health and Human Services must establish a safety program to receive reports of, and act to remedy, harms or unsafe AI healthcare practices (see our prior post about FDA’s thinking on the use of AI and digital health technologies in drug development here).
    • Education:  The Executive Order calls for the creation of resources to support educators deploying AI-enabled tools.
    • Workers:  After acknowledging that AI offers the promise of improved productivity, the Executive Order requires the development of principles and best practices to mitigate harms and maximize benefits for workers, including by addressing job displacement and providing guidance for evaluating job applications fairly.  It also requires the production of a report on AI’s potential labor-market impacts.
  • Innovation and Competition.  The Executive Order sets forth a number of priorities to ensure American leadership in AI innovation, such as the creation of a pilot of the National AI Research Resource, a tool that will provide AI researchers and students with access to key AI resources and data, as well as grant programs.  The Executive Order also discusses updates to the immigration process for individuals with expertise in critical AI areas.  Importantly, the Executive Order directs the Federal Trade Commission to exercise its legal authority to encourage a competitive AI ecosystem.
  • Leadership on International Frameworks.  The Executive Order directs the State Department and Department of Commerce to lead an effort to establish international frameworks governing AI.  These efforts are intended to support and complement the G-7 Hiroshima Process, the UK Summit on AI Safety, India’s leadership as Chair of the Global Partnership on AI, and ongoing discussions at the United Nations.
  • Government Use of AI.  The Executive Order recognizes the potential for AI systems to drive better results in the delivery of government services to citizens and directs certain actions to ensure the responsible deployment of AI by government entities.  For example, it requires guidance for agencies’ use of AI, improvements to the process for government acquisition of AI, and the hiring of AI talent by government agencies.

We are continuing to review and digest the content of the Executive Order and expect to supplement this post with additional observations across our blogs.

On September 27, 2023, Governor Newsom signed AB 254 and AB 352, which both amend the California Confidentiality of Medical Information Act (“CMIA”).  Specifically, AB 254 expands the scope of the CMIA to expressly cover reproductive or sexual health services that are delivered through digital health solutions and the associated health information generated from these services.  AB 352 imposes new requirements on how electronic health record (“EHR”) systems must store medical information related to gender-affirming care, abortion and related services, and contraception, as well as on the ability of providers of health care, health care service plans, contractors, or employers to disclose such information.

Continue Reading California Enacts Amendments to the CMIA

On September 15, the Federal Trade Commission (“FTC”) and U.S. Department of Health and Human Services (“HHS”) announced an updated joint publication describing the privacy and security laws and rules that impact consumer health data.  Specifically, the “Collecting, Using, or Sharing Consumer Health Information? Look to HIPAA, the FTC Act, and the Health Breach Notification Rule” guidance provides an overview of the Health Insurance Portability and Accountability Act, as amended, and the implementing regulations issued by HHS (collectively “HIPAA”); the FTC Act; and the FTC’s Health Breach Notification Rule (“HBNR”) and how they may apply to businesses.  This joint guidance follows a recent surge of FTC enforcement in the health privacy space.  We offer a high-level summary of the requirements flagged by the guidance.

Continue Reading FTC and HHS Announce Updated Health Privacy Publication

On September 6, Senator Bill Cassidy (R-LA), the Ranking Member of the U.S. Senate Health, Education, Labor and Pensions (HELP) Committee, issued a white paper about the oversight and legislative role of Congress related to the deployment of artificial intelligence (AI) in areas under the HELP Committee’s jurisdiction, including health and life sciences.  In the white paper, Senator Cassidy disfavors a one-size-fits-all approach to the regulation of AI and instead calls for a flexible approach that leverages existing frameworks depending on the particular context of use of AI.  “[O]nly if our current frameworks are unable to accommodate . . . AI, should Congress look to create new ones or modernize existing ones.”  The Senator seeks public feedback on the white paper by September 22, 2023.  Health care and life sciences stakeholders should consider providing comments. 

This blog outlines five key takeaways from the white paper from a health care and life sciences perspective. Note that beyond health and life sciences issues, the white paper also addresses considerations for other areas, such as use of AI in educational settings and labor/employment implications created by use of AI.


5 Key Takeaways for AI in Health Care and Life Sciences

The white paper – entitled “Exploring Congress’ Framework for the Future of AI: The Oversight and Legislative Role of Congress Over the Integration of Artificial Intelligence in Health, Education, and Labor” – describes the “enormous good” that AI in health care presents, such as “the potential to help create new cures, improve care, and reduce administrative burdens and overall health care spending.”  At the same time, Senator Cassidy notes that AI presents risks that legal frameworks should seek to minimize.  Five key takeaways from the white paper include:

  1. Senator Cassidy emphasizes that a one-size-fits-all approach will not work for AI and effectively grounds many of the broader, ongoing AI policy considerations in the HELP Committee’s core expertise and existing regulatory frameworks for health-related AI.  Many of the emerging frameworks for defining trustworthy or responsible AI and establishing AI risk management practices are drafted broadly for all uses of AI and may not reflect that AI applications in different sectors present unique challenges.  Leveraging the HELP Committee’s expertise around health regulatory frameworks (as well as other sectors within the Committee’s jurisdiction and expertise), the white paper concludes that the context of use greatly affects how policymakers should think about AI’s benefits and risks.  In other words, the white paper recognizes that AI deployed in healthcare settings requires a different regulatory approach compared to AI deployed in educational or employment settings.  Senator Cassidy expresses that a “sweeping, one-size-fits-all approach for regulating AI will not work and will stifle, not foster, innovation.” 
  2. Changes to FDA’s device framework may be on the horizon.  The white paper expresses that Congress should only look to modernize existing frameworks (or create new ones) if current frameworks are unable to accommodate AI.  For example, the white paper acknowledges that the existing framework for preclinical and clinical investigation of new drugs is “generally well-suited to adapt to the use of AI to research and develop new drugs.”  In contrast, Senator Cassidy specifically notes that FDA’s medical device framework was not designed to accommodate AI that may improve over time, signaling potential future action by the HELP Committee to amend relevant provisions of the Federal Food, Drug, and Cosmetic Act (FDCA) to clarify how FDA will treat medical devices that integrate AI. 
  3. There are a variety of applications of AI that will benefit the healthcare and life sciences sector and, ultimately, the public health, including:
    • Pharmaceutical research and development, such as with disease target and drug candidate identification and/or design;
    • Diagnostic and treatment applications, from early disease detection to AI applications intended to help identify and reduce medical errors;
    • Patient- and provider-facing support, including internally-developed clinical decision support (CDS) algorithms and AI interfaces that engage directly with patients;
    • Health care administration and coverage, including claims management, surgical scheduling, generation of replies to patient messages, summarization of patient medical histories, and translation between languages and reading levels for patient materials; and
    • Use of AI to increase the speed and efficiency of FDA’s review processes.
  4. The acknowledgement of these important use cases in the health and life sciences sector leaves open FDA regulatory questions.  For example:
    • As noted above, the white paper is fairly explicit on the point that changes to FDA’s regulatory framework may be required to address AI, but Senator Cassidy leaves open for comment what specific types of changes might need to be made.
    • For AI that does not meet the definition of a medical device (or is subject to enforcement discretion by FDA), Senator Cassidy leaves open for comment how health-related AI should be regulated (e.g., who is responsible for training clinicians before use of certain AI tools described in the white paper, and what standards such training must meet).
    • FDA expertise will be critical as AI plays a larger role in health and life sciences, and Senator Cassidy leaves open for comment how Congress should help FDA address these challenges. 
    • Where FDA incorporates AI into its own work, including premarket review processes, the white paper leaves open how sponsors and the public will know what review elements are being performed by AI and whether a unique process will be needed to appeal AI-based decisions within the Agency.
  5. Bias and transparency continue to be front-burner issues.  The discussion of bias and transparency in the white paper confirms that Congress is still focused on how to manage these issues in AI regulation.  The white paper states that AI tools should be developed in a transparent way that provides an understanding about how any given algorithm was designed, but leaves open for comment what specific guidelines and steps should satisfy this need.  The white paper also notes how any framework must build in a “clear method to measure effectiveness” and that Congress may need to consider how to best ensure that AI-enabled products do not give undue weight to potential biases. 

Bonus Takeaway: Healthcare applications for AI may create ambiguities about liability. The white paper states that stakeholders need a clear understanding of potential liability around the use of AI. Specifically, the white paper highlights open questions about liability assignment between the original developer, the most recent developer, clinicians, or others.

Request for Stakeholder Feedback

Recognizing that the “insights of stakeholders that can describe the advantages and drawbacks of AI in our health care system . . . are critical as policy makers grapple with this topic,” Senator Cassidy requests “feedback and comments for ways to improve the framework in which these technologies are developed, reviewed, and used” by Friday, September 22.  Although feedback is not confined to these topics, the white paper poses the following questions for consideration specific to health care:

Supporting Medical Innovation:

  • How can FDA support the use of AI to design and develop new drugs and biologics?
  • What updates to the regulatory frameworks for drugs and biologics should Congress consider to facilitate innovation in AI applications?
  • How can FDA improve the use of AI in medical devices?
  • What updates to the regulatory frameworks for medical devices should Congress consider to facilitate innovation in AI applications while also ensuring that products are safe and effective for patients?
  • How can Congress help FDA ensure that it has access to the expertise required to review products that are developed using AI or that incorporate AI?
  • How can FDA better leverage AI to review product submissions?
  • How can FDA harness external expertise to support review of products that are developed using AI or that incorporate AI?
  • What are the potential consequences of regulating AI in the United States if it remains unregulated in other countries?

Medical Ethics and Protecting Patients:

  • What existing standards are in place to demonstrate clinical validity when leveraging AI?  What gaps exist in those standards?
  • What practices are in place to mitigate bias in AI decision-making?
  • What should be the federal role, if any, in addressing social and/or political bias?
  • How can AI be best adopted to not inappropriately deny patients care?
  • Is the current HIPAA framework equipped to safeguard patient privacy with regard to AI in clinical settings?  If not, where does it fall short, and how can the framework be better equipped?
  • What standards are in place to ensure that AI maintains respect and dignity for human life from conception to natural death?
  • Who should be responsible for determining safe and appropriate applications of AI algorithms?
  • Who should be liable for unsafe or inappropriate applications of AI algorithms?  The developer?  A regulating body?  A third party or private entity?

Academic and commercial researchers using patient data are increasingly turning to real-world data to gain broader insights into patient care and treatment. The UK is uniquely positioned to unlock the potential research, innovation and social value of health data generated by the NHS, but until recently there has been no unified approach to academic and commercial collaborations involving patient data.

The newly released guide to effective NHS data partnerships from NHS England (the “Guidance”) recognises the significant social and economic value that data partnerships can offer, and aims to help NHS organisations establish data partnerships in a way that promotes consistency and streamlines approaches across the NHS.

The Guidance covers multiple types of data partnerships, including third-party requests to access NHS data assets for research and development purposes, validation of a solution to a healthcare problem requiring data access, and an NHS organisation reaching out to third parties to use data to solve a particular challenge.

Continue Reading NHS Data Partnerships Guide – unlocking the benefits of data partnerships

Hot on the heels of recent announcements from the U.S. Food and Drug Administration (see our prior blogs here), the European Medicines Agency (“EMA”) has joined the conversation on the use of Artificial Intelligence (“AI”) and Machine Learning (“ML”) technologies in the medicinal product lifecycle.

AI and ML have the potential to enhance every stage of the medicinal product lifecycle, from drug discovery, through to clinical development, manufacturing and post-market pharmacovigilance.  These technologies can display intelligent behaviour and can analyse huge amounts of data.  They are also extremely flexible as they can be trained using data, rather than explicit programming.  When used correctly, AI and ML can “effectively support the acquisition, transformation, analysis, and interpretation of data within the medicinal product lifecycle.”

However, the nature of these technologies also leads to certain risks.  Importantly, there can be a lack of transparency in the models.  Also, the data-driven approach means they can be prone to bias.  The EMA has therefore published a draft “Reflection paper on use of Artificial Intelligence (AI) in medicinal product lifecycle” (the “Draft Reflection Paper”), which is open to consultation until 31 December 2023.  The EMA sees the Draft Reflection Paper as a way to open “a dialogue with developers, academics, and other regulators.” 

What does the Draft Reflection Paper cover?

The Draft Reflection Paper sets out the EMA’s current thinking on the use of AI to “support the safe and effective development, regulation and use of … medicines.”  It applies primarily to human medicines, noting that, while similar principles apply to veterinary medicines, specific reflections/guidance are needed for the veterinary space. 

The purpose of the Draft Reflection Paper is to identify uses of AI/ML that fall within the EMA’s/National Competent Authorities’ remit.  This obviously includes the use of AI in the medicinal product lifecycle, but also extends to the use of medical devices with AI/ML technology that are used to generate evidence to support an EU marketing authorisation (i.e., used within the context of clinical trials or combined with the use of a medicinal product). 

Use of AI/ML in the medicines lifecycle

The EMA highlights as a “key principle” that marketing authorisation applicants (“Applicants”) and marketing authorisation holders (“MAHs”) will bear responsibility for ensuring that the AI/ML they use is “fit for purpose and are in line with ethical, technical, scientific, and regulatory standards as described in GxP standards and current EMA scientific guidelines.” 

In summary, the Draft Reflection Paper requires that Applicants take a “risk-based approach for development, deployment and performance monitoring of AI and ML tools.”  The degree of risk will be determined by a number of factors, including: the AI technology itself; the context of use; the degree of influence of the AI/ML technology; and the stage of lifecycle of the medicinal product. 

The Draft Reflection Paper considers use of AI/ML at different stages along the product lifecycle and sets out principles and an indication of risk of applying AI/ML at each such stage:

  • Drug discovery — the EMA acknowledges that the use of AI/ML in drug discovery may be low risk from a regulatory perspective, “as the risk of non-optimal performance often mainly affects the sponsor.”  However, if results contribute to the total body of evidence presented for regulatory review then the regulatory risk increases.
  • Non-clinical development — AI/ML (e.g., “AI/ML modelling approaches to replace, reduce, and refine the use of animals”) should follow Good Laboratory Practice (“GLP”), where applicable.  Applicants should consider Application of GLP Principles to Computerised Systems and GLP Data Integrity, and their SOPs should cover AI/ML.
  • Clinical trials — AI/ML models (for example, that support selection of patients based on disease characteristics or clinical parameters) must comply with ICH GCP.  The regulatory risk for use of AI/ML increases from early stage to pivotal clinical trials.  Where models are generated for clinical trials, it is likely they will be considered part of the clinical trial data or trial protocol dossier and the models must be made available for regulators to assess at the time of marketing authorisation or clinical trial application. Where data collected/generated with AI/ML may impact the regulatory assessment of a medicine, the EMA recommends early regulatory interaction.
  • Precision medicine — the EMA considers the use of AI/ML in individualizing treatment (e.g., patient selection, dosing, de novo design of product variants) as high-risk from a medicines regulation perspective.  The EMA recommends “special care … in defining what constitutes a change in posology (requiring a regulatory evaluation before implementation), to provide guidance that the prescribers can critically apprehend, and include fall-back treatment strategies in cases of technical failure.”
  • Product information — AI/ML might be used to draft, compile, translate or review information documents.  Recognizing the risk of hallucinations (plausible but erroneous output) by generative language models, the EMA expects such technologies to be used only under “close human supervision.”
  • Manufacturing — use of AI/ML in drug manufacturing is expected to increase in the future and the EMA notes that this must comply with relevant quality management principles.
  • Post-authorization phase — AI/ML has the potential to support post-authorization safety and efficacy studies in human medicines, as well as pharmacovigilance activities such as adverse event report management and signal detection.  The MAH must “validate, monitor and document model performance and include AI/ML operations in the pharmacovigilance system, to mitigate risks related to all algorithms and models used.”

Considerations for use of AI/ML

The Draft Reflection Paper sets out detailed measures that Applicants can take when using AI/ML technologies.  Some key points include:

  • Interacting with regulators: Applicants should carry out a regulatory impact and risk analysis.  The higher the regulatory impact or risk associated with the use of AI/ML technologies, the sooner the EMA recommends the Applicant engages with regulators to seek scientific advice.
  • Technical considerations:
    • Data acquisition: Applicants should make all efforts and take active measures to avoid the integration of bias into AI/ML, and should document the source of data and the process of acquisition in a traceable manner, in line with GxP. 
    • Training, validation and test data: the EMA discusses the validation of models, a concept that differs in important ways from validation in the field of medicines.
    • Model development: the EMA encourages development and use of generalizable and robust models.
    • Performance assessments: the Draft Reflection Paper highlights the importance of selecting the correct metrics for performance assessments.
    • Interpretability and explainability: although transparent models are preferred, the EMA states that a “black box” model may be acceptable if developers can substantiate why transparent models are unsatisfactory.  The EMA encourages use of methods within the field of explainable AI wherever possible.
    • Model deployment: a risk-based approach is required for model deployment.
  • Ethical Principles: developers should follow basic ethical principles defined in the guidelines for trustworthy AI and presented in the Assessment List for Trustworthy Artificial Intelligence for self-assessment (ALTAI). They should also take a “human-centric” approach to all development and deployment of AI/ML.
  • Governance, Data Protection and Integrity: Applicants and MAHs also need to consider and apply governance, data protection, and data integrity principles.

Next Steps

The EMA will finalize the Draft Reflection Paper following the end of the consultation period.  It also intends to provide additional guidance on risk management and may update existing guidance to take into account the specific issues that AI/ML pose.

Given that the Draft Reflection Paper puts the onus on Applicants and MAHs to ensure that the algorithms, models, datasets, etc. that they use are compliant, biopharma companies considering the use of AI/ML should watch this space and keep up to date with upcoming developments. 

On Thursday, July 13, 2023, the Centers for Medicare & Medicaid Services (CMS) released the Calendar Year (CY) 2024 Physician Fee Schedule (PFS) Proposed Rule (CY 2024 PFS Proposed Rule), which proposes policy changes to the PFS and other Medicare Part B policies, effective on or after January 1, 2024.  As explained in our article, the CY 2024 PFS Proposed Rule includes several policy proposals that would change coverage of and access to telehealth services for Medicare beneficiaries.  Among other policy proposals, the CY 2024 PFS Proposed Rule would:

  • Implement the telehealth provisions in the Consolidated Appropriations Act, 2023;
  • Update the process for adding, removing, or changing services on the Medicare Telehealth Services List;
  • Change the payment methodology for Medicare telehealth services furnished to beneficiaries at home; and
  • Extend direct supervision via use of two-way audio/video communications technology.

Read the full article to learn more about CMS’ proposed changes to Medicare telehealth policy.

Last week, the U.S. Food & Drug Administration (“FDA” or the “Agency”) issued a second discussion paper on the use of artificial intelligence (“AI”) and machine learning (“ML”) with respect to drug and biological products, this time focusing on the use of AI/ML in the drug and biologic development process, “Using Artificial Intelligence & Machine Learning in the Development of Drug & Biological Products” (“Second Discussion Paper”).[1] The Second Discussion Paper was issued by the Center for Drug Evaluation and Research (“CDER”), Center for Biologics Evaluation and Research (“CBER”), and Center for Devices and Radiological Health (“CDRH”), and from a scope perspective, the Second Discussion Paper discusses the use of AI/ML in drug and biologic development, as well as devices intended to be used in combination with drugs or biologics (including, but not limited to, combination products, companion devices, and complementary devices).

In the Second Discussion Paper and associated press release, FDA recognizes the significance of AI/ML in drug[2] development, citing the more than 100 drug and biological product applications—submitted in 2021 alone—that included AI/ML components, and the areas of drug development where AI/ML efforts are already active, including clinical trial design, use of digital health technologies (“DHTs”), and real-world data (“RWD”) analytics. The Second Discussion Paper does not endorse any specific approaches for the use of AI/ML in drug development, but rather seeks feedback from stakeholders that can help inform the Agency’s future regulatory activities.

This client alert provides a high-level overview of the Second Discussion Paper, as well as areas for potential engagement with the Agency on the use of AI/ML in drug development. Comments on the Second Discussion Paper must be submitted to FDA by August 9, 2023.

Current and Potential Uses of AI/ML in Drug Development

In the Second Discussion Paper, FDA highlights the many ways AI/ML is currently or could potentially be used in the drug development process, including:

  • Drug Discovery: FDA notes that early drug discovery is one of the areas in which sponsors have significant interest in utilizing AI/ML. In particular, FDA discusses the ways in which AI/ML has or can be used in the drug identification, selection, and prioritization process, as well as in the compound screening and design process.
  • Nonclinical Research: FDA discusses the ways in which AI/ML could be leveraged to support nonclinical research. FDA notes, for example, that a recurrent neural network, an ML algorithm, may be used to complement traditional pharmacokinetic / pharmacodynamic models in areas of highly complex data analysis.
  • Clinical Research: FDA observes that one of the “most significant applications of AI/ML” is in efforts to streamline and advance clinical research. For instance, FDA discusses AI/ML’s ability to analyze vast amounts of data and the potential to inform the design and efficiency of non-traditional trials, such as decentralized clinical trials. FDA specifically notes AI/ML’s use in a number of areas related to the conduct of clinical research, including recruitment, dose/dosing regimen optimization, adherence, retention, and site selection.
  • Clinical Trial Data Collection, Management, and Analysis, and Clinical Endpoint Assessment: FDA discusses the ways in which AI/ML could be used to collect, manage, and analyze clinical trial data, including the potential role of DHTs to enable the use of AI/ML in clinical trials, the use of AI/ML to enhance data integration and perform data quality assessments, and the use of AI/ML to analyze complex RWD or to build digital twins of patients to analyze how a patient may have progressed on a placebo versus an investigational treatment. FDA also notes the potential use of AI/ML to detect a possible safety signal, or to assess outcomes captured from diverse sources (e.g., DHTs, social media) during a clinical trial.
  • Postmarketing Safety Surveillance: FDA notes the ways in which post-approval pharmacovigilance can be supported by AI/ML, for instance by case processing (e.g., detecting information from source documents to help identify adverse events for individual case safety report (“ICSR”) submission), case evaluation (e.g., assessing the possibility of a causal relationship between the drug and the adverse event), and case submission (e.g., automating reporting rules for submission of ICSRs).
  • Advanced Pharmaceutical Manufacturing: As noted above, CDER previously issued a discussion paper in March 2023 focused on AI/ML in drug manufacturing. Now, in the Second Discussion Paper, FDA elaborates on the ways in which advanced analytics leveraging AI/ML has already been deployed or has potential to support pharmaceutical manufacturing efforts, including enhancing process controls, increasing equipment reliability, monitoring early warnings that a manufacturing process is not in a state of control, detecting recurring problems, and preventing batch losses. FDA specifically notes the potential for AI/ML, in concert with other advanced manufacturing technologies (such as process analytical technology (“PAT”) and continuous manufacturing) to enhance and modernize pharmaceutical manufacturing, and alleviate supply chain and shortage issues. FDA identifies four specific areas in which AI/ML could be applied throughout the entire product manufacturing lifecycle: (1) optimization of process design (e.g., use of digital twins in process design optimization); (2) advanced process control implementation; (3) smart monitoring and maintenance; and (4) trending activities (such as trending of deviations, root causes, and CAPA effectiveness).

Considerations for the Use of AI/ML in Drug Development and Opportunities for Engagement with FDA

FDA acknowledges the potential for AI/ML to accelerate the drug development process and make clinical trials safer and more efficient. The Second Discussion Paper also acknowledges the need for the Agency to assess whether the use of AI/ML in these contexts introduces unique risks and harms, including the potential for limited explainability due to the complexity or proprietary nature of an AI/ML system, questions about reliability, and the potential for bias.

Accordingly, FDA notes a focus on “developing standards for trustworthy AI that address specific characteristics in areas such as explainability, reliability, privacy, safety, security, and bias mitigation.” To help address these issues, FDA intends to consider the applicability of certain overarching standards and practices for the general application of AI/ML, and to seek feedback from stakeholders to help identify specific good practices with respect to AI/ML in the context of drug development.

Overarching Standards and Practices for the Use of AI/ML

FDA intends to explore the potential utility and applicability of overarching standards and practices for the use of AI/ML that are not specific to the drug development context. These include AI/ML principles outlined in federal executive orders, the AI Plan developed by the National Institute of Standards and Technology, and AI/ML standards established by standards organizations. The Second Discussion Paper also acknowledges the potential usefulness of the Agency’s frameworks for software as a medical device (“SaMD”), such as an April 2019 discussion paper that proposed a regulatory framework for modifications to AI-based SaMD, a January 2021 AI “Action Plan” for SaMD, and October 2021 guiding principles to inform the development of Good Machine Learning Practice for AI/ML-based medical devices. It seems likely that the Agency will leverage some principles from these sources in developing AI/ML standards for drug development and the development of devices intended to be used with drugs.

Opportunity for Engagement: Request for Feedback

Although the above-referenced, overarching standards may serve as a useful starting point, FDA seeks feedback from stakeholders that highlights additional or unique considerations for AI/ML deployed in the drug development context. Specifically, FDA solicits feedback on three key areas: (1) human-led governance, accountability, and transparency; (2) quality, reliability, and representativeness of data; and (3) model development, performance, monitoring, and validation. The Agency outlines specific questions within each of these areas in the Second Discussion Paper.

  • With respect to human-led governance, accountability, and transparency, FDA emphasizes the value of governance and accountability in developing trustworthy AI. The Agency seeks feedback about specific use cases in drug development that have the greatest need for regulatory clarity, what transparency means in the use of AI/ML in drug development, the barriers and facilitators of transparency in these contexts, and good practices for providing risk-based, meaningful human involvement.
  • With respect to quality, reliability, and representativeness of data, FDA acknowledges that ensuring “data quality, reliability, and that the data are fit for use (i.e., relevant for the specific intended use and population) can be critical,” and highlights data-related issues such as bias, completeness and accuracy of data, privacy and security, record trails, relevance, replicability, reproducibility, and representativeness. FDA solicits feedback on key practices utilized by stakeholders to help address these issues.
  • Finally, with respect to model development, performance, monitoring, and validation, FDA highlights the importance of evaluating AI/ML models over time to consider the model risk and credibility. For example, FDA acknowledges that there may be overall advantages to selecting a more traditional and parsimonious (i.e., fewer parameters) model over complex models where the models perform similarly. Additionally, the Second Discussion Paper states it may be important to examine corrective actions and real-world performance, conduct postmarket surveillance, verify the software code and calculations, and evaluate the applicability of validation assessments to the context of use. FDA solicits feedback on examples of tools, processes, approaches, and best practices being used by stakeholders to monitor and develop AI/ML models.

Submitting feedback on these questions is an important opportunity to help develop the standards that govern the use of AI/ML in drug development. The comment period closes on August 9, 2023.

Other Opportunities for Engagement

FDA also is coordinating a number of mechanisms for stakeholders to engage with the Agency on AI/ML in drug development, such as a workshop with stakeholders, public meetings, and further Critical Path Innovation, ISTAND Pilot Program, Emerging Technology Program, and Real-World Evidence Program meetings. FDA views these efforts and collaborations as providing “a foundation for a future framework or guidance.” Stakeholders should watch closely for these opportunities.


[1] For a summary and analysis of FDA’s first discussion paper, which focused on the use of AI in drug manufacturing, please see our prior blog post, “FDA Seeks Comments on Agency Actions to Advance Use of AI and Digital Health Technologies in Drug Development.” The first discussion paper, “Artificial Intelligence in Drug Manufacturing,” was issued by the Center for Drug Evaluation and Research (CDER), and is available at https://www.fda.gov/media/165743/download.

[2] For purposes of the Second Discussion Paper, FDA states that all references to “drug” or “drugs” include both human drugs and biological products.

On March 15, 2023, the U.S. Food and Drug Administration (FDA or the Agency) issued a draft guidance entitled Electronic Systems, Electronic Records, and Electronic Signatures in Clinical Investigations: Questions and Answers (2023 Draft Guidance). The 2023 Draft Guidance revises the draft guidance for industry the Agency issued in June 2017 entitled Use of Electronic Records and Electronic Signatures in Clinical Investigations under 21 CFR Part 11—Questions and Answers (2017 Draft Guidance). If finalized, the 2023 Draft Guidance will also supersede the May 2007 FDA guidance for industry entitled Computerized Systems Used in Clinical Investigations.

At a high level, the 2023 Draft Guidance clarifies that 21 CFR part 11 (Part 11) applies to new information technology (IT) services that create, modify, maintain, archive, retrieve, or transmit electronic records.  It also imposes on sponsors the responsibility to ensure that the electronic records they submit through IT vendors conform to FDA’s regulatory requirements.  In addition, the 2023 Draft Guidance introduces recommendations for implementing time-stamped audit trails to verify data.

Compared to the 2017 Draft Guidance, the 2023 Draft Guidance features revised recommendations regarding the risk-based approach for validation of electronic systems[1] and provides new definitions of key terms related to digital and electronic research infrastructure.  It also applies to a broader range of regulated studies.[2]

Comments on the 2023 Draft Guidance must be submitted by May 15, 2023.

Background

Over the last few decades, recordkeeping practices in clinical investigations that were traditionally paper-based have transformed into digital enterprises.  As part of this transformation, sponsors have had to grapple with the regulatory standards and requirements that apply to these digitized processes and records—in particular, the requirements under Part 11, which apply to digital records that are created, modified, maintained, archived, retrieved, or transmitted under any records requirement set forth in FDA regulations.

FDA has addressed the scope and applicability of Part 11 in several guidances over the years, including in the context of clinical trials.[3]  Most recently, in the 2017 Draft Guidance, FDA affirmed a narrow and practical interpretation of Part 11 and encouraged a risk-based approach to the validation of electronic systems, implementation of electronic audit trails, and archiving of electronic records.

As technology has continued to evolve, and as sponsors have begun using and managing electronic systems in novel ways in the conduct of clinical investigations, questions about Part 11’s scope and applicability have continued to surface.  The COVID-19 pandemic brought these issues to the forefront for many sponsors and investigators, as they were forced to rely heavily on digital technologies and remote processes to conduct clinical investigations.  Recognizing that FDA’s ability to assess the authenticity, integrity, and reliability of data submitted in support of marketing applications and other submissions hinges on the industry’s alignment on electronic records, electronic systems, and electronic signatures, the Agency issued the 2023 Draft Guidance.

Major Provisions of the 2023 Draft Guidance

The 2023 Draft Guidance is formatted similarly to the 2017 Draft Guidance, and it includes twenty-eight questions and answers.  FDA has grouped these questions and answers into five sections: (A) electronic records, (B) electronic systems owned or controlled by sponsors or other regulated entities, (C) information technology service providers and services, (D) digital health technologies, and (E) electronic signatures.  FDA continues to take a narrow and practical interpretation of Part 11, but it has augmented its risk-based approach to validation through additional considerations and recommendations.

Section A: Electronic Records

Section A covers electronic records used in clinical investigations that fall under the scope of Part 11 requirements.  In the 2023 Draft Guidance, FDA clarifies that Part 11 applies to electronically formatted records from clinical investigations with non-U.S. sites.  The 2023 Draft Guidance also notes that Part 11 applies to electronic records from real-world data sources submitted to FDA as part of a marketing application, even if FDA regulations do not specifically identify such records.[4]

Like the 2017 Draft Guidance, the 2023 Draft Guidance requires that sponsors retain only “certified copies” of electronic records.  But the 2023 Draft Guidance defines this term with greater specificity than the 2017 Draft Guidance: a certified copy is a copy of an original paper or electronic record that has been verified by a dated signature or a validated process to have the same information, “including data that describe the context, content, and structure,” as the original.[5]  Unlike the 2017 Draft Guidance, the 2023 Draft Guidance does not deem copies that lack important metadata to be incomplete; instead, it merely stresses the importance of metadata for establishing the authenticity or integrity of certain record types.
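Purely for illustration, and not drawn from the guidance: one way a “validated process” might confirm that a copy carries the same information as the original, including the metadata describing context, content, and structure, is to compare cryptographic fingerprints of each record’s content and metadata.  The short sketch below assumes that approach; all function and field names are hypothetical.

```python
import hashlib
import json

def fingerprint(content: bytes, metadata: dict) -> str:
    """Hash the record content together with its metadata (context,
    content, and structure) so that any divergence is detectable."""
    hasher = hashlib.sha256()
    hasher.update(content)
    # Serialize metadata deterministically so key order cannot change the hash.
    hasher.update(json.dumps(metadata, sort_keys=True).encode("utf-8"))
    return hasher.hexdigest()

def is_faithful_copy(original_content: bytes, original_meta: dict,
                     copy_content: bytes, copy_meta: dict) -> bool:
    """Return True when the copy matches the original, metadata included."""
    return (fingerprint(original_content, original_meta)
            == fingerprint(copy_content, copy_meta))
```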

Section B: Electronic Systems Owned or Controlled by Sponsors or Other Regulated Entities

Section B of the 2023 Draft Guidance describes recommendations for electronic systems that are owned or controlled by sponsors or other regulated entities and are used to produce required records in clinical investigations.  Such systems include electronic case report forms (eCRFs), electronic data capture (EDC) systems, electronic trial master files (eTMFs), electronic clinical data and trial management systems (eCDMS, eCTMS), interactive response technology (IRT) systems, and electronic IRB management systems.

The 2023 Draft Guidance expands upon the 2017 Draft Guidance’s risk-based approach to validation of electronic systems used in clinical investigations.  The 2023 Draft Guidance outlines three considerations when applying this risk-based approach based on the nature of the electronic system: (1) the purpose and significance of the record and the criticality of the data, i.e., how the record and data will be used to support the regulatory decision and/or ensure patient safety; (2) the intended use of the electronic system, e.g., whether the electronic system is used to process records that are essential to the clinical investigation; and (3) whether the electronic system is a commercial off-the-shelf (COTS) system or a new, custom-made system.

In conjunction with the third consideration, the 2023 Draft Guidance clarifies that the extent of validation for COTS office utility software should be guided by both the organization’s internal business practices and the intended use of the software in the clinical investigation.  As a general matter, FDA does not anticipate that validation will be necessary for COTS office utility software used as intended by the manufacturer.  Conversely, validation of new custom-made electronic systems should include a review of the vendor’s SOPs, the system and software development life cycle model, validation documentation, change control procedures, and change control tracking logs.  In addition, regulated entities should perform user acceptance testing.  

Although FDA states that it does not intend to review sponsors’ audit reports of IT service providers’ systems, products, and services, it also states that such documentation should be retained as part of the clinical investigation records and made available for inspection by FDA.  The 2023 Draft Guidance broadly defines IT service providers to mean a vendor who provides data hosting and/or computing services—e.g., software as a service, platform as a service, and infrastructure as a service—to sponsors and other regulated entities.

With respect to inspections of sponsors, the 2023 Draft Guidance diverges from and elaborates on the 2017 Draft Guidance.  Whereas the 2017 Draft Guidance outlined only three pieces of information that FDA would review during inspections of sponsors’ electronic systems to determine Part 11 compliance, the 2023 Draft Guidance spends over a full page detailing the precise documentation FDA intends to review.  Such documentation includes audit trail information and interoperable data standards; procedures for data collection, handling, and management; systems for restricting access to electronic systems; change control procedures; vendor contracts; corrective and preventive actions; internal and external audits of electronic systems and of vendors that are performed or provided by the sponsor or independent consultants; and roles and responsibilities of sponsors, clinical sites, and other parties with respect to the use of electronic systems in the clinical investigation.

With respect to inspections of other regulated entities, such as CROs or investigators, the 2023 Draft Guidance encourages such entities to retain electronic systems information demonstrating that those systems comply with the requirements of Part 11.  Such information includes policies and procedures related to the system account setup and management, access controls and user access privileges, system user manuals, and system training materials and records.  During clinical site inspections, FDA will review records related to staff training on the use of electronic systems; procedures and controls in place for system access, data creation, data modification, and data maintenance; and the use of electronic systems at the clinical investigator site to generate, collect, transmit, and archive data.

Section C: Information Technology Service Providers and Services

Section C explores considerations for determining the suitability of information technology service providers and services.  While the 2017 Draft Guidance posited that sponsors and other regulated entities would broadly use mobile technology during the course of clinical investigations, the 2023 Draft Guidance enumerates three types of IT services routinely provided for sponsors: data hosting, cloud computing software, and platform and infrastructure services.

The 2023 Draft Guidance recommends that sponsors enter into written service level agreements with IT service providers that describe, at a minimum, the scope of the work and IT service being provided, the parties’ roles and responsibilities with respect to quality and risk management, and details regarding data access.  In addition, the 2023 Draft Guidance clarifies that sponsors are responsible for any duties and functions related to the clinical investigation not specifically transferred to a service provider via a transfer of regulatory obligation (TORO).

Compared to the 2017 Draft Guidance, the 2023 Draft Guidance offers greater clarity regarding the circumstances under which FDA would choose to inspect IT service providers.  For instance, the Agency notes it can request to conduct focused investigations of IT service providers irrespective of whether a TORO is established.  Such an investigation could be triggered by a specific regulatory concern, such as a concern regarding the integrity of trial data.  The Agency also clarifies that sponsors should have access to all study-related records maintained by IT service providers, since FDA may review those records during a sponsor inspection.

Section D: Digital Health Technologies[6]

Section D discusses the use of digital health technologies (DHTs) for remote data acquisition from participants in clinical investigations evaluating medical products.  DHTs encompass a broader range of technologies than the mobile technologies discussed in the 2017 Draft Guidance.  Mobile technologies included mobile platforms, mobile applications, wearable biosensors, other remote sensors, and portable and implantable electronic devices, while DHTs incorporate all computing platforms, connectivity, software, and sensors for health care and related uses.

In the 2023 Draft Guidance, FDA recommends that data originators transmit data captured from DHTs and any relevant associated metadata into a durable electronic data repository, such as a clinical investigation site database, a vendor database, or an EDC system.  Transmission should occur as soon as possible after data generation, and the audit trail should include the date and time of such transmission.
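As a purely illustrative sketch of the workflow described above (none of these field or function names come from the guidance), a transmission routine might stamp each DHT record with the UTC date and time at which it is written to the durable repository, so that the audit trail captures when transmission occurred:

```python
from datetime import datetime, timezone

def transmit_to_repository(repository: list, originator_id: str,
                           measurement: dict, metadata: dict) -> dict:
    """Append a DHT measurement and its associated metadata to a durable
    repository, stamping the audit trail with the transmission time."""
    record = {
        "originator": originator_id,
        "data": measurement,
        "metadata": metadata,  # e.g., device model, firmware, capture time
        "transmitted_at": datetime.now(timezone.utc).isoformat(),
    }
    repository.append(record)  # stand-in for a site, vendor, or EDC database
    return record
```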

Unlike the 2017 Draft Guidance, the 2023 Draft Guidance affirms that FDA may verify the data sponsors submit in support of applications or submissions against the electronic source data during inspections.  FDA recommends that sponsors allow for the inspection, review, and copying of such records in human readable form.

Section E: Electronic Signatures

Section E examines methods for creating valid electronic signatures in connection with clinical investigations.  The 2023 Draft Guidance’s discussion of electronic signatures largely echoes the 2017 Draft Guidance.  However, the 2023 Draft Guidance acknowledges that a statement of testament suffices in situations where electronic signatures cannot be placed in a specified signature block.  It also clarifies that FDA considers signatures drawn with a finger or an electronic stylus on a mobile platform or other electronic system to be a handwritten electronic signature, and that these types of signatures are valid only if placed on the electronic document exactly as they would appear on a printed document.  

*    *    *

If you have any questions concerning the material discussed in this client alert, please contact the following members of our Food, Drug, and Device practice:

Wade Ackerman, +1 424 332 4763, ackermanw@cov.com
Scott Cunningham, +1 415 591 7089, scunningham@cov.com
Paula Katz, +1 202 662 5050, pkatz@cov.com
Julia Post, +1 202 662 5249, jpost@cov.com
Christina Kuhn, +1 202 662 5653, ckuhn@cov.com
Emily Statham, +1 202 662 5064, estatham@cov.com


[1] The 2023 Draft Guidance does not provide comprehensive detail on how to perform a risk assessment. However, the Agency notes there are many risk assessment methodologies from a variety of industries that can be applied, including the ICH guidance for industry entitled Q9(R1) Quality Risk Management (June 2022) and the International Organization for Standardization’s (ISO’s) standard ISO 31010:2019 Risk management – Risk assessment techniques.

[2] The 2023 Draft Guidance was issued by a broader range of Centers at FDA than prior guidance on this topic. Whereas the 2017 Draft Guidance was issued by the Center for Drug Evaluation and Research (CDER), the Center for Biologics Evaluation and Research (CBER), and the Center for Devices and Radiological Health (CDRH), the 2023 Draft Guidance was also issued by the Centers for Food Safety and Applied Nutrition (CFSAN), Tobacco Products (CTP), Veterinary Medicine (CVM), Office of Regulatory Affairs (ORA), and Office of Clinical Policy (OCLiP).

[3] See, e.g., FDA, Use of Electronic Health Record Data in Clinical Investigations (2018); FDA, Use of Electronic Records and Electronic Signatures in Clinical Investigations under 21 CFR Part 11 — Questions and Answers (2017); FDA, Electronic Source Data in Clinical Investigations (2013); FDA, Computerized Systems Used in Clinical Investigations (2007); FDA, Part 11, Electronic Records; Electronic Signatures — Scope and Application (2003).

[4] See, e.g., FDA 2023 Draft Guidance 4 n. 20 (2023); FDA, Use of Electronic Health Record Data in Clinical Investigations: Guidance for Industry (2018); FDA, Electronic Source Data in Clinical Investigations: Guidance for Industry (2013).

[5] The 2017 Draft Guidance described a certified copy as being a verified copy that has “all of the same attributes and information as the original.”

[6] In the 2023 Draft Guidance, FDA cross-references its January 2022 draft guidance for industry, investigators, and other stakeholders entitled Digital Health Technologies for Remote Data Acquisition in Clinical Investigations (2022 Draft DHT Guidance). The 2023 Draft Guidance reflects many of the DHT principles contained in the 2022 Draft DHT Guidance.

On March 23, 2023, FDA released a Framework for the use of digital health technologies in drug and biological product development (the “DHT Framework”).  The DHT Framework comes on the heels of a Discussion Paper the Agency released earlier this month on the use of artificial intelligence (AI) in drug manufacturing to seek public input on issues of critical focus (the “AI Discussion Paper”).  While both actions are significant, the AI Discussion Paper is one of CDER’s few policy statements related to the deployment of AI around regulated activities (though the Center did establish an AI steering committee in 2020).  CDRH, on the other hand, has issued several policy documents around AI-based software potentially regulated as “software as a medical device” (SaMD), including through an April 2019 discussion paper that proposed a regulatory framework for modifications to AI-based SaMD, an AI “Action Plan” for SaMD in January 2021, and guiding principles to inform the development of Good Machine Learning Practice for AI-based medical devices in October 2021.  CDER has requested public comment on the recent DHT Framework and AI Discussion Paper by May 1 and 23, respectively.

I. The AI Discussion Paper

The AI Discussion Paper recognizes the potential value that AI offers in the pharmaceutical industry, including optimizing process design and control, enabling smart monitoring and maintenance of equipment, and monitoring trends in consumer complaints to drive continuous improvement.  To advance the use of AI for such purposes, the Agency seeks public feedback on specific issues as it considers “the application of its risk-based regulatory framework to the use of AI technologies in drug manufacturing.”  The paper acknowledges that it does not address every consideration that may be important in developing an AI policy and regulatory framework, such as the difficulties that could result from ambiguity about how existing Current Good Manufacturing Practice (CGMP) regulations apply to AI.  Rather, the paper focuses on the following five critical areas:

  1. Cloud applications may affect oversight of pharmaceutical manufacturing data and records.  The AI Discussion Paper notes that use of a third-party AI platform or service may create challenges in ensuring that the third party develops and updates the AI software with appropriate safeguards for data safety and security.  Additionally, FDA posits that ongoing interactions between cloud applications and process controls could complicate efforts to establish data traceability and create potential cybersecurity vulnerabilities.  The AI Discussion Paper does not opine on or ask specific questions regarding the level of diligence a drug sponsor should reasonably perform with respect to third-party vendors.
  2. The Internet of Things (IoT) may increase the amount of data generated during pharmaceutical manufacturing, affecting existing data management practices.  On the one hand, AI solutions offer tremendous value by allowing drug manufacturers to capture a broader array of data than is typically available through non-AI methods.  But FDA acknowledges that if the raw data collected during the manufacturing process increase in both frequency and variety, there may be a need to balance data integrity and retention with the logistics of data management.  For example, applicants may need further clarity from regulators on which data must be stored and/or reviewed and how the loss of these data would affect future quality decisions.  FDA also flagged potential challenges in storing the data in a structured manner that enables retrieval and analysis.
  3. Applicants may need clarity from FDA about whether and how the application of AI in pharmaceutical manufacturing is subject to regulatory oversight, especially as AI could be used in various manufacturing operations that are the focus of FDA oversight, such as monitoring and maintaining equipment, identifying areas for continuous improvement, scheduling and supply chain logistics, and characterizing raw materials.
  4. Applicants may need standards for developing and validating AI models used for process control and release testing.  FDA has published guidance and spoken on the verification, validation, and usability of digital health technologies for remote data acquisition in clinical investigations, but the AI Discussion Paper recognizes the limited availability of industry standards and FDA guidance for the development and validation of AI models that affect product quality, particularly in drug manufacturing.  This lack of guidance may create challenges in establishing the credibility of a model for a specific use, especially as AI methods for drug manufacturing become more complex.  For example, the AI Discussion Paper notes that applicants may need clarity on how the potential to transfer learning from one AI model to another can be factored into model development and validation.  This area for discussion suggests that FDA guidance on this issue may be forthcoming, but the AI Discussion Paper does not detail the content of any future guidance or the positions the Agency may take (e.g., how the Agency would recommend allocating implementation responsibilities across AI developers and deployers).
  5. Continuously learning AI systems that adapt to real-time data may challenge regulatory assessment and oversight.  On the device side, Congress recently amended the FDCA to allow for the submission of “predetermined change control plans,” creating a more streamlined and efficient process for software-based medical device submissions that could undergo a number of foreseeable AI-informed modifications throughout the total product lifecycle, and the Agency has issued a draft guidance on this topic.  This area for consideration in the AI Discussion Paper signals that the Agency is now thinking about these issues on the drug side as well, recognizing that applicants may need clarity on how FDA will examine continuously updated AI control models during a site inspection and on establishing product comparability after changes to manufacturing conditions introduced by the AI model.

Leveraging the above areas for consideration, the AI Discussion Paper asks the following questions:

  1. What types of AI applications do you envision being used in pharmaceutical manufacturing?
  2. Are there additional aspects of the current regulatory framework (e.g., aspects not listed in the “areas for consideration”) that may affect the implementation of AI in drug manufacturing and should be considered by FDA?
  3. Would guidance in the area of AI in drug manufacturing be beneficial?  If so, what aspects of AI technology should be considered?
  4. What are the necessary elements for a manufacturer to implement AI-based models in a CGMP environment?
  5. What are common practices for validating and maintaining self-learning AI models and what steps need to be considered to establish best practices?
  6. What are the necessary mechanisms for managing the data used to generate AI models in pharmaceutical manufacturing?
  7. Are there other aspects of implementing models (including AI-based models) for pharmaceutical manufacturing where further guidance would be helpful?
  8. Are there aspects of the application of AI in pharmaceutical manufacturing not covered in this document that FDA should consider?

II. The DHT Framework

In connection with the commitments FDA made as part of user fee reauthorizations, FDA established a framework to guide the use of data derived from digital health technologies (DHTs) in regulatory decision-making for drug and biological products.  The Agency has emphasized the importance of DHTs in drug development and made advancing their use a front-burner issue, including by releasing draft guidance in December 2021 encouraging drug sponsors to make use of voluntary qualification programs to allow for reliance on DHTs in multiple clinical investigations for different premarket submissions.  The DHT Framework focuses on both internal programs to support DHT-related activities within FDA and external programs to engage industry and other stakeholders.  Specifically, with respect to internal programs:

  1. DHT Steering Committee.  FDA has established a DHT Steering Committee to oversee implementation of Agency commitments related to evaluating DHT-based measurements in regulatory submissions.  We recommend monitoring the Committee’s activities, as it may make policy recommendations that affect the use and evaluation of DHT-based measurements in drug and biological product development.
  2. Technical Expertise and Training.  FDA stated it will develop training within the drugs and biological products programs to enhance internal knowledge regarding the use of DHTs in drug development.  This will be an area to watch, as the Agency may issue guidance on topics such as verification and validation of DHTs, technical and performance specifications associated with using a study participant’s own DHTs or general-purpose computing platforms in clinical investigations, how to incorporate upgrades and updates of DHTs in drug development, and incorporating DHTs using AI algorithms into drug development.
  3. Consistency of Evaluations Across Review Divisions.  The DHT Steering Committee will help to facilitate consistent approaches to the review and evaluation of submissions that use a single DHT measurement for studies of different diseases and different drugs.  This program suggests that the Agency is open to the possibility of using a single measurement for multiple purposes, which could increase efficiency and streamline submission processes.
  4. Statistical Considerations in the Analysis of DHT-Derived Data.  FDA plans to address novel considerations for endpoints derived from DHT data and technical data specifications to facilitate submission of readily analyzable DHT-derived data supporting drug development.  Sponsors should watch for Agency recommendations here, as adhering to FDA’s suggestions may facilitate the Agency’s acceptance of DHT-derived data used in submissions.
  5. IT Capabilities.  FDA plans to enhance its IT capabilities to support the review of DHT-generated data, including by establishing cloud technology to review, aggregate, store, and process large volumes of data and implementing standards to reduce the handling necessary to make DHT data analyzable.  Sponsors should consider adhering to any future data standards for DHT-generated datasets as a best practice and ensure that they have systems in place to effectively use and interact with any new cloud technologies.

With respect to external programs and engagement:

  1. FDA Meetings With Sponsors.  The DHT Framework notes that engagements between sponsors and FDA (including the DHT Steering Committee) may occur at different stages of drug development.  Sponsors deciding to use DHTs in the drug development process should be prepared to discuss the regulatory status of DHTs, development of trial endpoints, selection of DHTs for clinical investigations, and verification and validation of DHTs.
  2. Drug Development Tool Qualification Program.  As noted above and in our previous blog, FDA has qualification programs that are intended to support the development of DHTs for use in assessing medical products.  Sponsors should consider whether to pursue qualification of DHTs as drug development tools for a specific context of use, as the DHT Framework highlights that a qualified DHT may be relied upon in multiple clinical investigations to support premarket submissions for drugs where the context of use is the same.
  3. Guidance.  FDA plans to issue draft guidances that reflect its current thinking on DHT topics, including decentralized clinical trials for drugs, biological products, and devices, as well as regulatory considerations for prescription drug use-related software.  The DHT Framework notes that FDA plans to publish these draft guidances in 2023 and may publish additional draft guidances in 2024 in areas of need identified through stakeholder engagement.  Sponsors should watch for the publication of these draft guidances and consider whether to provide public comment to the Agency to inform final guidance.
  4. Public Meetings.  FDA plans to host a series of five public meetings or workshops to gather input on issues related to the use of DHTs in regulatory decision-making related to drug and biological product development, including priorities for the development of DHTs to support clinical investigations, approaches to DHT verification and validation, DHT data processing and analysis, regulatory acceptance of safety monitoring tools that use AI-/ML-based algorithms for pharmacovigilance purposes, and emerging challenges.  Sponsors should consider attending these public meetings and workshops to monitor Agency thinking on these key topics and ensure they have a seat at the table.
  5. Demonstration Projects.  The DHT Framework states that FDA will identify at least three demonstration projects to inform methodologies for efficient DHT evaluation in drug development, which may include validation methods for DHTs, endpoint development, analytic approaches to missing data, and use of DHTs in decentralized clinical trials.  These projects may inform policy development, so monitoring the projects and project feedback will be important.
  6. External Organizations.  In addition to collaborating with sponsors and hosting public meetings, FDA plans to engage with external organizations and participate in forums organized by professional bodies.  Sponsors should monitor FDA participation in such meetings and take note of any positions the Agency takes, as these meetings could inform actions the Agency takes to meet the objectives outlined in the DHT Framework.

Companies in this space should strongly consider submitting comments on the AI Discussion Paper and the DHT Framework, as industry feedback could inform the Agency’s thinking in future guidance documents or frameworks on the use of AI in drug manufacturing and of DHTs in drug development.  Indeed, at a February 17, 2023 workshop, FDA officials noted that the questions outlined in the AI Discussion Paper are “very important for [the Agency] as [they] think in terms of providing regulatory clarity for the use of AI in drug development,” with a goal of adopting “a flexible, risk-based regulatory framework that promotes innovation but also protects patient safety.”