On February 21, 2024, Senator Bill Cassidy (R-LA), the Ranking Member of the U.S. Senate Health, Education, Labor, and Pensions (“HELP”) Committee, published a white paper with various proposals to update privacy protections for health data. In Part 1 of this blog series (see here), we discussed the first section of the white paper. Specifically, we summarized Senator Cassidy’s proposals on how to update the existing framework of the Health Insurance Portability and Accountability Act, as amended, and its implementing regulations (collectively, “HIPAA”) without disrupting decades of case law and precedent. In this blog post, we discuss the remaining sections of the white paper, namely proposals to protect other sources of health data not currently covered by HIPAA.

Continue Reading Senator Cassidy Issues White Paper with Proposals to Update Health Data Privacy Framework – Part 2: Safeguarding Health Data Not Covered by HIPAA 

On February 21, 2024, Senator Bill Cassidy (R-LA), the Ranking Member of the U.S. Senate Health, Education, Labor, and Pensions (“HELP”) Committee, issued a white paper, “Strengthening Health Data Privacy for Americans: Addressing the Challenges of the Modern Era,” which proposes several updates to the privacy protections for health data. This follows Senator Cassidy’s September 2023 request for information from stakeholders about how to enhance health data privacy protections covered by the Health Insurance Portability and Accountability Act (“HIPAA”) framework and to consider privacy protections for other sources of health data not currently covered by HIPAA. The white paper notes that several entities, including trade associations, hospitals, health technology companies, and think tanks, responded to the RFI.

Continue Reading Senator Cassidy Issues White Paper with Proposals to Update Health Data Privacy Framework – Part 1: Updates to the HIPAA Framework

On February 16, 2024, the U.S. Department of Health and Human Services (“HHS”) published a final rule to amend the Confidentiality of Substance Use Disorder (“SUD”) Patient Records regulations (“Part 2”) to more closely align Part 2 with the Health Insurance Portability and Accountability Act of 1996, as amended, and its implementing regulations (collectively, “HIPAA”) as required by Section 3221 of the 2020 Coronavirus Aid, Relief, and Economic Security Act (“CARES Act”).  We previously covered the proposed rule (hereinafter, “the NPRM”), which was issued on December 2, 2022.

The final rule, issued through the Office for Civil Rights (“OCR”) and the Substance Abuse and Mental Health Services Administration (“SAMHSA”), increases alignment between certain Part 2 requirements and HIPAA, and clarifies certain existing Part 2 permissions and restrictions to improve the ability of entities to use and disclose Part 2 records. According to HHS, this final rule will decrease burdens on patients and providers, improve coordination of care and access to care and treatment, and protect the confidentiality of treatment records.

Continue Reading HHS Publishes Final Rule to Align Part 2 and HIPAA

On December 5, 2023, the Spanish presidency of the Council of the EU issued a declaration to strengthen collaboration with Member States and the European Commission to develop a leading quantum technology ecosystem in Europe.

The declaration acknowledges the revolutionary potential of quantum computing, which uses quantum mechanics principles and quantum bits known as “qubits” to solve complex mathematical problems exponentially faster than classical computers.

The declaration was launched with eight Member State signatories (Denmark, Finland, Germany, Greece, Hungary, Italy, Slovenia, and Sweden), and invites other Member States to sign. By doing so, they agree to recognize the “strategic importance of quantum technologies for the scientific and industrial competitiveness of the EU” and commit to collaborating to make Europe the “‘quantum valley’ of the world, the leading region globally for quantum excellence and innovation.”

Continue Reading Quantum Computing: Action in the EU and Potential Impacts

Earlier today, the White House issued a Fact Sheet summarizing its Executive Order on a comprehensive strategy to support the development of safe and secure artificial intelligence (“AI”).  The Executive Order follows a number of actions by the Biden Administration on AI, including its Blueprint for an AI Bill of Rights and voluntary commitments from certain developers of AI systems.  According to the Administration, the Executive Order establishes new AI safety and security standards, protects privacy, advances equity and civil rights, protects workers, consumers, and patients, promotes innovation and competition, and advances American leadership.  This blog post summarizes these key components.

  • Safety & Security Standards.  The Executive Order sets out several required actions for developers of AI systems.  Notably, the White House, “in accordance with the Defense Production Act,” will require companies developing any foundation model “that poses a serious risk to national security, national economic security, or national public health and safety” to notify the federal government when training the model and provide results of all red-team safety tests to the government. Relatedly, the Executive Order directs certain federal agencies to undertake the following actions and initiatives:
    • National Institute of Standards and Technology:  establish standards for red-teaming required before the public release of an AI system. 
    • Department of Homeland Security:  apply the NIST standards to use of AI in critical infrastructure sectors and establish an AI Safety and Security Board. 
    • Departments of Energy and Homeland Security:  address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks; it also calls for the creation of standards for biological synthesis screening.
    • Department of Commerce:  develop guidance for content authentication and watermarking to label content generated by AI and received by the government; it also suggests that federal agencies would be required to use these tools.
    • National Security Council & White House Chief of Staff:  develop a National Security Memorandum that ensures that the United States military and intelligence community use AI safely, ethically, and effectively.
  • Privacy.  The Executive Order sets forth a number of requirements for the use of personal data for AI systems, including the prioritization of federal support for privacy-preserving techniques; strengthening privacy-preserving research and technologies (e.g., cryptographic tools); evaluating how agencies collect and use commercially available information containing personal data (including information procured from data brokers); and developing guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques.
  • Equity and Civil Rights.  To address concerns around equity and civil rights, the Executive Order sets forth a number of new requirements for documentation, coordination, and best practices.  Specifically, the Executive Order calls for clear guidance to landlords, Federal benefits programs, and Federal contractors to keep AI systems from being used to exacerbate discrimination.  The Executive Order also requires training, technical assistance, and coordination between the Department of Justice and Federal civil rights offices on best practices for investigating and prosecuting civil rights violations.  Additionally, the Executive Order calls for the creation of best practices on the use of AI in sentencing, parole, probation, and related topics for the criminal justice system.
  • Consumers, Patients, and Students.  Recognizing that AI can have unique impacts on consumers, patients, and students, the Executive Order directs agencies and certain sectors to engage in the following activities: 
    • Healthcare:  To help advance the responsible use of AI in healthcare and the life sciences sector, including for drug development, the Department of Health and Human Services must establish a safety program to receive reports of, and act to remedy, harms or unsafe AI healthcare practices (see our prior post about FDA’s thinking on the use of AI and digital health technologies in drug development here).
    • Education:  The Executive Order calls for the creation of resources to support educators deploying AI-enabled tools.
    • Workers:  After acknowledging that AI offers the promise of improved productivity, the Executive Order requires the development of principles and best practices to mitigate harms and maximize benefits for workers, including by addressing job displacement and guidance for evaluating job applications fairly.  It also requires the production of a report on AI’s potential labor-market impacts.
  • Innovation and Competition.  The Executive Order sets forth a number of priorities to ensure American leadership in AI innovation, such as the creation of a pilot of the National AI Research Resource, a tool that will provide AI researchers and students with key AI resources and data and grant programs.  The Executive Order also discusses updates to the immigration process for individuals with expertise in critical AI areas.  Importantly, the Executive Order directs the Federal Trade Commission to exercise its legal authority to encourage a competitive AI ecosystem.
  • Leadership on International Frameworks.  The Executive Order directs the State Department and Department of Commerce to lead an effort to establish international frameworks governing AI.  These efforts are intended to support and complement the G-7 Hiroshima Process, the UK Summit on AI Safety, India’s leadership as Chair of the Global Partnership on AI, and ongoing discussions at the United Nations.
  • Government Use of AI.  The Executive Order recognizes the potential for AI systems to drive better results in the delivery of government services to citizens and directs certain actions to ensure the responsible deployment of AI by government entities.  For example, it requires guidance for agencies’ use of AI, improvements to the process for government acquisition of AI, and the hiring of AI talent by governments.

We are continuing to review and digest the content of the Executive Order and expect to supplement this post with additional observations across our blogs.

On September 27, 2023, Governor Newsom signed AB 254 and AB 352, which both amend the California Confidentiality of Medical Information Act (“CMIA”).  Specifically, AB 254 expands the scope of the CMIA to expressly cover reproductive or sexual health services that are delivered through digital health solutions and the associated health information generated from these services.  AB 352 imposes new requirements on how electronic health record (“EHR”) systems must store medical information related to gender affirming care, abortion and related services, and contraception and the ability of providers of health care, health care service plans, contractors, or employers to disclose such information.

Continue Reading California Enacts Amendments to the CMIA

On September 15, the Federal Trade Commission (“FTC”) and U.S. Department of Health and Human Services (“HHS”) announced an updated joint publication describing the privacy and security laws and rules that impact consumer health data.  Specifically, the “Collecting, Using, or Sharing Consumer Health Information? Look to HIPAA, the FTC Act, and the Health Breach Notification Rule” guidance provides an overview of the Health Insurance Portability and Accountability Act, as amended, and the implementing regulations issued by HHS (collectively “HIPAA”); the FTC Act; and the FTC’s Health Breach Notification Rule (“HBNR”) and how they may apply to businesses.  This joint guidance follows a recent surge of FTC enforcement in the health privacy space.  We offer a high-level summary of the requirements flagged by the guidance.

Continue Reading FTC and HHS Announce Updated Health Privacy Publication

On September 6, Senator Bill Cassidy (R-LA), the Ranking Member of the U.S. Senate Health, Education, Labor and Pensions (HELP) Committee, issued a white paper about the oversight and legislative role of Congress related to the deployment of artificial intelligence (AI) in areas under the HELP Committee’s jurisdiction, including health and life sciences.  In the white paper, Senator Cassidy disfavors a one-size-fits-all approach to the regulation of AI and instead calls for a flexible approach that leverages existing frameworks depending on the particular context of use of AI.  “[O]nly if our current frameworks are unable to accommodate . . . AI, should Congress look to create new ones or modernize existing ones.”  The Senator seeks public feedback on the white paper by September 22, 2023.  Health care and life sciences stakeholders should consider providing comments. 

This blog outlines five key takeaways from the white paper from a health care and life sciences perspective. Note that beyond health and life sciences issues, the white paper also addresses considerations for other areas, such as use of AI in educational settings and labor/employment implications created by use of AI.

5 Key Takeaways for AI in Health Care and Life Sciences

The white paper – entitled “Exploring Congress’ Framework for the Future of AI: The Oversight and Legislative Role of Congress Over the Integration of Artificial Intelligence in Health, Education, and Labor” – describes the “enormous good” that AI in health care presents, such as “the potential to help create new cures, improve care, and reduce administrative burdens and overall health care spending.”  At the same time, Senator Cassidy notes that AI presents risks that legal frameworks should seek to minimize.  Five key takeaways from the white paper include:

  1. Senator Cassidy emphasizes that a one-size-fits-all approach will not work for AI and effectively grounds many of the broader, ongoing AI policy considerations in the HELP Committee’s core expertise and existing regulatory frameworks for health-related AI.  Many of the emerging frameworks for defining trustworthy or responsible AI and establishing AI risk management practices are drafted broadly for all uses of AI and may not reflect that AI applications in different sectors present unique challenges.  Leveraging the HELP Committee’s expertise around health regulatory frameworks (as well as other sectors within the Committee’s jurisdiction and expertise), the white paper concludes that the context of use greatly affects how policymakers should think about AI’s benefits and risks.  In other words, the white paper recognizes that AI deployed in healthcare settings requires a different regulatory approach compared to AI deployed in educational or employment settings.  Senator Cassidy expresses that a “sweeping, one-size-fits-all approach for regulating AI will not work and will stifle, not foster, innovation.” 
  2. Changes to FDA’s device framework may be on the horizon.  The white paper expresses that Congress should only look to modernize existing frameworks (or create new ones) if current frameworks are unable to accommodate AI.  For example, the white paper acknowledges that the existing framework for preclinical and clinical investigation of new drugs is “generally well-suited to adapt to the use of AI to research and develop new drugs.”  In contrast, Senator Cassidy specifically notes that FDA’s medical device framework was not designed to accommodate AI that may improve over time, signaling potential future action by the HELP Committee to amend relevant provisions of the Federal Food, Drug, and Cosmetic Act (FDCA) to clarify how FDA will treat medical devices that integrate AI. 
  3. There are a variety of applications of AI that will benefit the healthcare and life sciences sector and, ultimately, the public health, including:
    • Pharmaceutical research and development, such as with disease target and drug candidate identification and/or design;
    • Diagnostic and treatment applications, from early disease detection to AI applications intended to help identify and reduce medical errors;
    • Patient- and provider-facing support, including internally-developed clinical decision support (CDS) algorithms and AI interfaces that engage directly with patients;
    • Health care administration and coverage, including claims management, surgical scheduling, generation of replies to patient messages, summarization of patient medical histories, and translation between languages and reading levels for patient materials; and
    • Use of AI to increase the speed and efficiency of FDA’s review processes.
  4. The acknowledgement of these important use cases in the health and life sciences sector leaves open FDA regulatory questions.  For example:
    • As noted above, the white paper is fairly explicit on the point that changes to FDA’s regulatory framework may be required to address AI, but Senator Cassidy leaves open for comment what specific types of changes might need to be made.
    • For AI that does not meet the definition of a medical device (or is subject to enforcement discretion by FDA), Senator Cassidy leaves open for comment how health-related AI should be regulated (e.g., who is responsible for training clinicians before use of certain AI tools described in the white paper, and what standards does such training need to meet).
    • FDA expertise will be critical as AI plays a larger role in health and life sciences, and Senator Cassidy leaves open for comment how Congress should help FDA address these challenges. 
    • Where FDA incorporates AI into its own work, including premarket review processes, the white paper leaves open how sponsors and the public will know what review elements are being performed by AI and whether a unique process will be needed to appeal AI-based decisions within the Agency.
  5. Bias and transparency continue to be front-burner issues.  The discussion of bias and transparency in the white paper confirms that Congress is still focused on how to manage these issues in AI regulation.  The white paper states that AI tools should be developed in a transparent way that provides an understanding about how any given algorithm was designed, but leaves open for comment what specific guidelines and steps should satisfy this need.  The white paper also notes how any framework must build in a “clear method to measure effectiveness” and that Congress may need to consider how to best ensure that AI-enabled products do not give undue weight to potential biases. 

Bonus Takeaway: Healthcare applications for AI may create ambiguities about liability. The white paper states that stakeholders need a clear understanding of potential liability around the use of AI. Specifically, the white paper highlights open questions about liability assignment between the original developer, the most recent developer, clinicians, or others.

Request for Stakeholder Feedback

Recognizing that the “insights of stakeholders that can describe the advantages and drawbacks of AI in our health care system . . . are critical as policy makers grapple with this topic,” Senator Cassidy requests “feedback and comments for ways to improve the framework in which these technologies are developed, reviewed, and used” by Friday, September 22.  Although feedback is not confined to these topics, the white paper poses the following questions for consideration specific to health care:

Supporting Medical Innovation:

  • How can FDA support the use of AI to design and develop new drugs and biologics?
  • What updates to the regulatory frameworks for drugs and biologics should Congress consider to facilitate innovation in AI applications?
  • How can FDA improve the use of AI in medical devices?
  • What updates to the regulatory frameworks for medical devices should Congress consider to facilitate innovation in AI applications while also ensuring that products are safe and effective for patients?
  • How can Congress help FDA ensure that it has access to the expertise required to review products that are developed using AI or that incorporate AI?
  • How can FDA better leverage AI to review product submissions?
  • How can FDA harness external expertise to support review of products that are developed using AI or that incorporate AI?
  • What are the potential consequences of regulating AI in the United States if it remains unregulated in other countries?

Medical Ethics and Protecting Patients:

  • What existing standards are in place to demonstrate clinical validity when leveraging AI?  What gaps exist in those standards?
  • What practices are in place to mitigate bias in AI decision-making?
  • What should be the federal role, if any, in addressing social and/or political bias?
  • How can AI be best adopted to not inappropriately deny patients care?
  • Is the current HIPAA framework equipped to safeguard patient privacy with regards to AI in clinical settings?  If not, how not or how to better equip the framework?
  • What standards are in place to ensure that AI maintains respect and dignity for human life from conception to natural death?
  • Who should be responsible for determining safe and appropriate applications of AI algorithms?
  • Who should be liable for unsafe or inappropriate applications of AI algorithms?  The developer?  A regulating body?  A third party or private entity?

Academic and commercial researchers using patient data are increasingly turning to real-world data to gain broader insights into patient care and treatment. The UK is uniquely positioned to unlock the potential research, innovation and social value of health data generated by the NHS, but, until recently, there has been no unified approach to academic and commercial collaborations involving patient data.

The newly released guide to effective NHS data partnerships from NHS England (the “Guidance”) recognises the significant social and economic value that data partnerships can offer, and aims to provide guidance to NHS organisations on establishing data partnerships in order to promote consistency and streamline approaches across the NHS.

The Guidance covers several types of data partnership, including third-party requests to access NHS data assets for research and development purposes, validation of a solution to a healthcare problem requiring data access, or an NHS organisation reaching out to third parties to use data to solve a particular challenge.

Continue Reading NHS Data Partnerships Guide – unlocking the benefits of data partnerships

Hot on the heels of recent announcements from the U.S. Food and Drug Administration (see our prior blogs here), the European Medicines Agency (“EMA”) has joined the conversation on the use of Artificial Intelligence (“AI”) and Machine Learning (“ML”) technologies in the medicinal product lifecycle.

AI and ML have the potential to enhance every stage of the medicinal product lifecycle, from drug discovery, through to clinical development, manufacturing and post-market pharmacovigilance.  These technologies can display intelligent behaviour and can analyse huge amounts of data.  They are also extremely flexible as they can be trained using data, rather than explicit programming.  When used correctly, AI and ML can “effectively support the acquisition, transformation, analysis, and interpretation of data within the medicinal product lifecycle.”

However, the nature of these technologies also leads to certain risks.  Importantly, there can be a lack of transparency in the models.  Also, the data-driven approach means they can be prone to bias.  The EMA has therefore published a draft “Reflection paper on use of Artificial Intelligence (AI) in medicinal product lifecycle” (the “Draft Reflection Paper”), which is open to consultation until 31 December 2023.  The EMA sees the Draft Reflection Paper as a way to open “a dialogue with developers, academics, and other regulators.” 

What does the Draft Reflection Paper cover?

The Draft Reflection Paper sets out the EMA’s current thinking on the use of AI to “support the safe and effective development, regulation and use of … medicines.”  It applies primarily to human medicines, noting that, while similar principles apply to veterinary medicines, specific reflections/guidance are needed for the veterinary space. 

The purpose of the Draft Reflection Paper is to identify uses of AI/ML that fall within the remit of the EMA and National Competent Authorities.  This includes the use of AI in the medicinal product lifecycle, but also extends to the use of medical devices with AI/ML technology that are used to generate evidence to support an EU marketing authorisation (i.e., used within the context of clinical trials or combined with the use of a medicinal product). 

Use of AI/ML in the medicines lifecycle

The EMA highlights as a “key principle” that marketing authorisation applicants (“Applicants”) and marketing authorisation holders (“MAHs”) will bear responsibility for ensuring that the AI/ML technologies they use are “fit for purpose and are in line with ethical, technical, scientific, and regulatory standards as described in GxP standards and current EMA scientific guidelines.” 

In summary, the Draft Reflection Paper requires that Applicants take a “risk-based approach for development, deployment and performance monitoring of AI and ML tools.”  The degree of risk will be determined by a number of factors, including: the AI technology itself; the context of use; the degree of influence of the AI/ML technology; and the stage of lifecycle of the medicinal product. 

The Draft Reflection Paper considers use of AI/ML at different stages along the product lifecycle and sets out principles and an indication of risk of applying AI/ML at each such stage:

  • Drug discovery — the EMA acknowledges that the use of AI/ML in drug discovery may be low risk from a regulatory perspective, “as the risk of non-optimal performance often mainly affects the sponsor.”  However, if results contribute to the total body of evidence presented for regulatory review then the regulatory risk increases.
  • Non-clinical development — AI/ML (e.g., “AI/ML modelling approaches to replace, reduce, and refine the use of animals”) should follow Good Laboratory Practice (“GLP”), where applicable.  Applicants should consider Application of GLP Principles to Computerised Systems and GLP Data Integrity, and their SOPs should cover AI/ML.
  • Clinical trials — AI/ML models (for example, that support selection of patients based on disease characteristics or clinical parameters) must comply with ICH GCP.  The regulatory risk for use of AI/ML increases from early stage to pivotal clinical trials.  Where models are generated for clinical trials, it is likely they will be considered part of the clinical trial data or trial protocol dossier and the models must be made available for regulators to assess at the time of marketing authorisation or clinical trial application. Where data collected/generated with AI/ML may impact the regulatory assessment of a medicine, the EMA recommends early regulatory interaction.
  • Precision medicine — the EMA considers the use of AI/ML in individualizing treatment (e.g., patient selection, dosing, de novo design of product variants) as high-risk from a medicines regulation perspective. The EMA recommends “special care … in defining what constitutes a change in posology (requiring a regulatory evaluation before implementation), to provide guidance that the prescribers can critically apprehend, and include fall-back treatment strategies in cases of technical failure.”
  • Product information — AI/ML might be used to draft, compile, translate or review information documents.  Recognizing the risk of hallucinations (plausible but erroneous output) by generative language models, the EMA expects use of such technologies only under “close human supervision.”
  • Manufacturing — use of AI/ML in drug manufacturing is expected to increase in the future and the EMA notes that this must comply with relevant quality management principles.
  • Post-authorization phase — AI/ML is likely to have potential to support post-authorization safety and efficacy studies in human medicines, as well as pharmacovigilance activities such as adverse event report management and signal detection.  The MAH must “validate, monitor and document model performance and include AI/ML operations in the pharmacovigilance system, to mitigate risks related to all algorithms and models used.”

Considerations for use of AI/ML

The Draft Reflection Paper sets out detailed measures that Applicants can take when using AI/ML technologies.  Some key points include:

  • Interacting with regulators: Applicants should carry out a regulatory impact and risk analysis.  The higher the regulatory impact or risk associated with the use of AI/ML technologies, the sooner the EMA recommends the Applicant engages with regulators to seek scientific advice.
  • Technical considerations:
    • Data acquisition: Applicants should use all efforts and active measures to avoid integration of bias in AI/ML and should document the source of data and the process of acquisition in a traceable manner in line with GxP. 
    • Training, validation and test data: the EMA discusses validation of models, which is importantly different from the concept of validation in the field of medicines.
    • Model development: the EMA encourages development and use of generalizable and robust models.
    • Performance Assessments: the Paper highlights the importance of selecting the correct metrics for performance assessments.
    • Interpretability and explainability: although transparent models are preferred, the EMA states that a “black box” model may be acceptable if developers can substantiate why transparent models are unsatisfactory.  The EMA encourages use of methods within the field of explainable AI wherever possible.
    • Model deployment: a risk-based approach is required for model deployment.
  • Ethical Principles: developers should follow basic ethical principles defined in the guidelines for trustworthy AI and presented in the Assessment List for Trustworthy Artificial Intelligence for self-assessment (ALTAI). They should also take a “human-centric” approach to all development and deployment of AI/ML.
  • Governance, Data Protection and Integrity: Applicants and MAHs also need to consider and reflect governance, data protection and integrity principles.

Next Steps

The EMA will finalize the Draft Reflection Paper following the end of the consultation period.  It also intends to provide additional guidance on risk-management and may update existing guidance to take into account the specific issues that AI/ML pose.   

Given that the Draft Reflection Paper puts the onus on Applicants and MAHs to ensure that the algorithms, models, datasets, etc. they use are compliant, biopharma companies considering the use of AI/ML should watch this space and keep up to date with upcoming developments.