On January 14, 2021, the United States Court of Appeals for the Fifth Circuit vacated a $4.3 million civil monetary penalty that the Office for Civil Rights (“OCR”) of the Department of Health and Human Services (“HHS”) imposed against the University of Texas M.D. Anderson Cancer Center (“M.D. Anderson”).  OCR ordered the penalty in 2017 following an investigation into three data breaches suffered by M.D. Anderson in 2012 and 2013, finding that M.D. Anderson had violated the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) and the Health Information Technology for Economic and Clinical Health Act of 2009 (“HITECH Act”).  The Court, however, held that the penalty was “arbitrary, capricious, and otherwise unlawful,” in part based on its interpretation of the HIPAA Rules.

Continue Reading M.D. Anderson Wins Appeal Over $4.3 Million HIPAA Penalty

On January 6, 2021, the UK’s AI Council (an independent government advisory body) published its AI Roadmap (“Roadmap”). In addition to calling for a Public Interest Data Bill to ‘protect against automation and collective harms’, the Roadmap acknowledges the need to counteract public suspicion of AI and makes 16 recommendations, based on three main pillars, to guide the UK Government’s AI strategy.

Continue Reading AI Update: The Future of AI Policy in the UK

On January 5, 2021, an amendment to the Health Information Technology for Economic and Clinical Health (“HITECH”) Act was signed into law.  The amendment requires the U.S. Department of Health and Human Services (“HHS”) to “consider certain recognized security practices of covered entities and business associates when making certain determinations” regarding fines, audit results, or other remedies for resolving potential violations of the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”).  For organizations subject to HIPAA, the amendment provides substantial incentives to establish or improve their cybersecurity programs.  While it does not establish a complete safe harbor from HIPAA enforcement, the amendment does offer organizations a chance to mitigate financial penalties and other negative regulatory actions that may result from a data breach.

Continue Reading HITECH Amendment Provides Some Protection For Covered Entities and Business Associates that Adopt Recognized Security Standards

On December 23, 2020, the European Commission (the “Commission”) published its inception impact assessment (“Inception Impact Assessment”) of policy options for establishing a European Health Data Space (“EHDS”).  The Inception Impact Assessment is open for consultation until February 3, 2021, encouraging “citizens and stakeholders” to “provide views on the Commission’s understanding of the current situation, problem and possible solutions”.

Continue Reading European Commission Conducts Open Consultation on the European Health Data Space Initiative

On 18 January 2021, the UK Parliamentary Office of Science and Technology (“POST”) published its AI and Healthcare Research Briefing about the use of artificial intelligence (“AI”) in the UK healthcare system (the “Briefing”).  The Briefing considers the potential impacts of AI on the cost and quality of healthcare, and the challenges posed by the wider adoption of AI, including safety, privacy and health inequalities.

The Briefing summarises the different possible applications of AI in healthcare settings, each of which raises unique considerations for healthcare providers.  It notes that AI, developed through machine learning algorithms, is not yet widely used within the NHS, but some AI products are at various stages of trial and evaluation.  The areas of healthcare identified by the Briefing as having the potential for AI to be incorporated include (among others): interpretation of medical imaging, planning patients’ treatment, and patient-facing applications such as voice assistants, smartphone apps and wearable devices.

Continue Reading AI Update: UK Parliament Research Briefing on AI in the UK Healthcare System

FDA has long recognized the significant potential of artificial intelligence- and machine learning- (AI/ML-) based software as a medical device (SaMD) to transform health care as well as the unique challenges presented by AI/ML-based software under the Agency’s traditional medical device regulatory framework.  On January 12, 2021, FDA issued the Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan describing the Agency’s multi-pronged approach to facilitating innovation in AI/ML-based SaMD while advancing the Agency’s regulatory oversight.  The Action Plan builds on and responds to stakeholder feedback on the Agency’s April 2019 proposed regulatory framework for FDA review of postmarket modifications to AI/ML-based devices that continually evolve based on real-world learning and adaptation.  Some key action items outlined in the Action Plan and associated takeaways are:

  1. FDA will update and issue guidance for a proposed framework for postmarket changes to AI/ML-based software.  The Agency’s April 2019 proposed framework described an approach that relied on a pre-determined change control plan that included SaMD Pre-Specifications (SPS) (i.e., what aspects the manufacturer intends to change through learning) and the Algorithm Change Protocol (ACP) (i.e., how the algorithm will learn and change while remaining safe and effective on the market).  While industry generally supported the framework, many details related to its implementation remained open.  In the Action Plan, FDA states that it intends to issue draft guidance in 2021 about what should be included in an SPS and ACP to support the safety and effectiveness of AI/ML-based SaMD.  This draft guidance will be an important step in the practical implementation of the framework proposed in April 2019 and in facilitating FDA review and marketing of AI/ML-based SaMD that continually evolves on the market after FDA clearance or approval.
  2. Harmonize good machine learning practices (GMLP).  The April 2019 draft framework also relied on the concept of GMLP, i.e., well-established best practices and standards for AI/ML-based software, including practices related to data management, feature extraction, training, interpretability, evaluation, and documentation.  Adherence to GMLP is therefore important not only for the development of quality AI/ML-based software but also for facilitating FDA regulatory oversight.  While numerous groups are engaged in efforts to develop and define GMLP, including several with FDA participation, FDA outlined the need for a well-established GMLP standard, which in turn would guide software developers in implementing practices that satisfy FDA expectations.  In the Action Plan, FDA recognizes that it needs to take a more active role in developing standardized GMLPs and commits to deepening its work with the communities engaged in GMLP development to encourage consensus outcomes.
  3. Support transparency and trust through patient-centered approaches.  AI/ML-based software raises unique considerations regarding transparency, accountability, equity and trust, which FDA describes as necessitating a proactive patient-centered approach to device development.  In particular, stakeholders have raised challenges regarding transparency and how AI/ML-based devices are described to users in device labeling.  As part of the Action Plan, FDA proposes to host a public workshop to elicit input on patient-centered approaches in device labeling to support transparency and inform the Agency’s thinking on the types of information that should be included in the labeling for AI/ML-based SaMD.
  4. Support development of methodologies to identify and eliminate bias and promote robustness.  While bias and generalizability are challenges across medical device development and clinical trials, these challenges are heightened for AI/ML-based systems given their potential for mirroring biases present in the datasets used for development and training, as well as the opacity of the algorithms.  In recognition of these challenges and the need for SaMD to be suited for racially and ethnically diverse patients, FDA plans to support numerous regulatory research efforts for the identification and evaluation of bias.  Concerns related to bias may receive additional attention in the Biden-Harris Administration given an overarching priority of addressing health disparities.  The Administration has shown dedication to addressing racial disparities, especially in health care, and created a COVID-19 Racial and Ethnic Disparities Task Force to confront the racial and ethnic disparities of the pandemic.  Moreover, the Chair of the Senate HELP Committee with jurisdiction over FDA, Senator Patty Murray (D-WA), released a report in September 2020 that outlined inequalities in the American health care system and provided recommendations on Congressional action to effectively address them.
  5. Enhance clarity of real-world performance monitoring processes.  The April 2019 proposed framework contemplated a total product lifecycle approach to the oversight of AI/ML-based SaMD, including collection and monitoring of real-world data to support software modifications.  While stakeholders supported a total product lifecycle approach and the role of real-world performance monitoring, many questions remained open as to how this approach should be implemented.  As part of the Action Plan, FDA will support piloting of real-world performance monitoring by working with stakeholders on a voluntary basis, in coordination with other ongoing FDA programs focused on the use of real-world data.  The goal of the pilots is to develop frameworks for gathering and utilizing real-world performance metrics, as well as thresholds and performance evaluations for those metrics.

In addition to releasing the new EU Cybersecurity Strategy before the holidays (see our post here), the Commission published a revised Directive on measures for a high common level of cybersecurity across the Union (“NIS2”) and a Directive on the resilience of critical entities (“Critical Entities Resilience Directive”). In this blog post, we summarize key points relating to NIS2, including more onerous security and incident reporting requirements; the extension of requirements to companies in the food, pharma, medical device, and chemical sectors, among others; and increased powers for regulators, including the ability to impose multi-million Euro fines.

The Commission is seeking feedback on NIS2 and the Critical Entities Resilience Directive, and recently extended its original deadline of early February to March 11, 2021 (responses can be submitted here and here).

Continue Reading Proposed New EU Cyber Rules Introduce More Onerous Requirements and Extend to More Sectors

On 17 December 2020, the Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI) published a Feasibility Study (the “Study”) on Artificial Intelligence (AI) legal standards. The Study examines the feasibility and potential elements of a legal framework for the development and deployment of AI, based on the Council of Europe’s human rights standards. Its main conclusion is that current regulations do not suffice to create the legal certainty, trust, and level playing field needed to guide the development of AI. Accordingly, it proposes the development of a new legal framework for AI consisting of both binding and non-binding Council of Europe instruments.

The Study recognizes the major opportunities of AI systems to promote societal development and human rights. Alongside these opportunities, it also identifies the risks that AI could endanger rights protected by the European Convention on Human Rights (ECHR), as well as democracy and the rule of law. Examples of the risks to human rights cited in the Study include AI systems that undermine the right to equality and non-discrimination by perpetuating biases and stereotypes (e.g., in employment), and AI-driven surveillance and tracking applications that jeopardise individuals’ right to freedom of assembly and expression.

Continue Reading The Council of Europe Publishes Feasibility Study on Developing a Legal Instrument for Ethical AI

On January 6, 2021, the UK’s Department of Health and Social Care (“DHSC”) published “A Guide to Good Practice for Digital and Data-Driven Health Technologies” (the “Guidance”).  The Guidance updates the DHSC’s “Code of Conduct for Data-Driven Health and Care Technologies” (the “Code”) (for further information on the Code see our earlier blog, here).

As with the Code, the Guidance is a valuable resource to help parties understand what the National Health Service (“NHS”) looks for when acquiring digital and data-driven technologies for use in health and care.

Continue Reading UK’s Department of Health and Social Care Publishes Updated Guidance on Good Practice for Digital and Data-Driven Health Technologies

On December 10, 2020, the Office for Civil Rights (“OCR”) of the U.S. Department of Health and Human Services (“HHS”) issued a proposed rule to modify the Standards for the Privacy of Individually Identifiable Health Information (the “Privacy Rule”) promulgated under the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) and the Health Information Technology for Economic and Clinical Health Act of 2009 (“HITECH Act”).  According to HHS’s announcement, the proposed rule would amend the Privacy Rule to “support individuals’ engagement in their care, remove barriers to coordinated care, and reduce regulatory burdens on the health care industry.”  Public comments on the proposed rule are currently being accepted through February 12, 2021.

The proposed rule is part of HHS’s Regulatory Sprint to Coordinated Care, initiated pursuant to Secretary Alex Azar’s value-based transformation agenda, which seeks to “promote value-based care by examining federal regulations that impede efforts among health care providers and health plans to better coordinate care for patients.”  In the Privacy Rule, HHS sought to protect health information while also permitting information sharing for certain beneficial purposes.  However, stakeholders have questioned whether the Privacy Rule strikes the appropriate balance in certain situations.

Proposed modifications to the HIPAA Privacy Rule include strengthening individuals’ right to access their protected health information (“PHI”), including electronic PHI; facilitating greater family involvement in care for individuals dealing with health crises or emergencies; and allowing providers more flexibility to disclose PHI when harm to a patient is “serious and reasonably foreseeable,” such as during the opioid crisis or COVID-19 public health emergency.  Importantly, multiple provisions of the proposed rule, discussed in greater detail below, address electronic health records (“EHRs”) and personal health applications.

Continue Reading HHS Announces Proposed Changes to HIPAA’s Privacy Rule