FDA Announces Public Workshop on Using Randomized Clinical Trials to Generate Real-World Evidence

Today, FDA published a notice in the Federal Register announcing a public workshop on July 11-12 entitled “Leveraging Randomized Clinical Trials to Generate Real-World Evidence for Regulatory Purposes.”  This workshop builds on FDA’s ongoing efforts to implement the Real-World Evidence (RWE) Framework published in December 2018, which we previously discussed in this post.

According to the Federal Register notice, the public workshop is being convened by Duke University’s Robert J. Margolis, MD, Center for Health Policy (Duke Margolis) and supported by a cooperative agreement with FDA.  The workshop will cover considerations for using randomized clinical trial designs and real-world data (RWD) to generate RWE, including the following:

  • the selection of interventions appropriate in clinical care settings;
  • study design elements and study populations;
  • capturing outcomes in clinical care settings;
  • addressing potential challenges around blinding, randomization, and bias; and
  • regulatory considerations for randomized clinical trials using RWD (e.g., safety and product monitoring and maintaining data integrity).

This public workshop will follow on the heels of a meeting held by the Department of Health and Human Services (HHS) to “seek public input and comment on opportunities to leverage departmental resources, increase collaboration, and to partner with private stakeholders in the service of accelerating the process for clinical innovation in the United States.”  The HHS meeting also was announced recently in the Federal Register and will take place on June 20-21.

ICO’s Interim Report on Explaining AI

On June 3, 2019, the UK Information Commissioner’s Office (“ICO”) released an Interim Report on a collaboration project with The Alan Turing Institute (“Institute”) called “Project ExplAIn.”  The purpose of this project, according to the ICO, is to develop “practical guidance” for organizations on complying with UK data protection law when using artificial intelligence (“AI”) decision-making systems, and in particular on explaining the impact AI decisions may have on individuals.  The Interim Report may be of particular relevance to organizations considering how to meet transparency obligations when deploying AI systems that make automated decisions falling within the scope of Article 22 of the GDPR.

HHS Updates Maximum Annual Penalty Limits for Some HIPAA Violations

On April 30, 2019, the Department of Health and Human Services (HHS) published in the Federal Register a notification of enforcement discretion indicating that it will lower the annual Civil Money Penalty (CMP) limits for three of the four penalty tiers in the Health Information Technology for Economic and Clinical Health Act (HITECH Act).  The HITECH Act categorizes violations of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) into four tiers based on the violator’s level of culpability:

  • Tier 1: the person did not know (and, by exercising reasonable diligence, would not have known) that the person violated the provision;
  • Tier 2: the violation was due to reasonable cause, and not willful neglect;
  • Tier 3: the violation was due to willful neglect that is timely corrected; and
  • Tier 4: the violation was due to willful neglect that is not timely corrected.

The annual penalty limit for all four tiers was previously $1.5 million.  Under HHS’s new policy, the annual penalty limit for Tier 1 violations has been decreased from $1.5 million to $25,000.  The new annual penalty limits for Tier 2 and Tier 3 violations are $100,000 and $250,000, respectively.  The annual penalty limit for Tier 4 violations remains at $1.5 million.
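For quick reference, the revised caps can be captured in a minimal illustrative sketch in Python (the tier descriptions and dollar amounts are taken from the HHS notification summarized above; the lookup table and helper function are our own hypothetical constructs, not anything published by HHS):

```python
# Revised annual CMP limits per HHS's April 2019 notification of
# enforcement discretion (tier labels and amounts from the notice).
ANNUAL_CMP_LIMITS = {
    1: 25_000,     # Tier 1: did not know (and could not reasonably have known)
    2: 100_000,    # Tier 2: reasonable cause, not willful neglect
    3: 250_000,    # Tier 3: willful neglect, timely corrected
    4: 1_500_000,  # Tier 4: willful neglect, not timely corrected
}

def capped_annual_penalty(tier: int, assessed: int) -> int:
    """Cap one year's assessed penalties at the tier's annual limit
    (a simplification; actual CMP calculations involve more factors)."""
    return min(assessed, ANNUAL_CMP_LIMITS[tier])

# Example: $40,000 of Tier 1 penalties in a year is capped at $25,000.
print(capped_annual_penalty(1, 40_000))  # 25000
```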

When HHS reviewed the penalty limits in 2013 in response to public comments, the agency indicated that setting the same annual limit for all four violation tiers “reflect[ed] the most logical reading of the HITECH Act, which provides the Secretary with discretion to impose penalties for each category of culpability up to the maximum amount described in the highest penalty tier.”  In explaining the agency’s recent policy change, HHS said that “[u]pon further review of the statute by the HHS Office of the General Counsel, HHS has determined that the better reading of the HITECH Act is to apply annual limits [in accordance with the new policy].”

HHS will penalize HIPAA violations in accordance with the new tier limits until further notice.  The agency plans to engage in rulemaking to revise the tiers in the regulation.  This new maximum penalty structure provides even more incentive for covered entities and business associates to implement robust HIPAA policies and procedures, monitor and audit to detect noncompliance, and correct any instances of noncompliance promptly.

HHS Clarifies HIPAA Liability for EHR System Developers that Transfer Data to Health Apps

On Friday, April 19, 2019, the Office for Civil Rights of the U.S. Department of Health and Human Services (HHS) explained in an FAQ the circumstances under which electronic health record (EHR) system developers may be liable under the Health Insurance Portability and Accountability Act of 1996 (HIPAA) for an app’s impermissible use or disclosure of electronic protected health information (ePHI).  As long as the app is independent of the covered entity and its EHR system and is instead controlled by the individual patient, neither the covered entity nor its EHR system developer has HIPAA liability once ePHI is delivered to the app at the patient’s request.

In its FAQ, HHS specified that if, at the request of a patient, a HIPAA covered entity’s EHR system transfers ePHI to an app that was not developed by, or specifically provided to the covered entity by, the EHR system developer, neither the covered entity nor the EHR system developer would face HIPAA liability for the app’s subsequent impermissible use or disclosure of the information.  But if an EHR system developer transfers patient data from a covered entity to an app that it provides “through, or on behalf of, the covered entity (directly or through another business associate)” and either owns the app or has a business relationship with the app developer, the EHR system developer may be subject to HIPAA liability for subsequent impermissible use or disclosure of the ePHI.

This attempt to clarify the boundaries of HIPAA liability will likely be welcomed by a wide range of covered entities, EHR systems, and developers of apps that process ePHI, including apps that connect patients with doctors, pharmacy apps, and apps that focus on fertility, mental health, smoking cessation, and more. Patients, on the other hand, should be aware that the information being collected by an app (which can be substantial and sensitive, depending on the nature of the app) has no protection under HIPAA unless the app was offered to them by a covered entity as part of its overall EHR system.

Update: HHS Extends Comment Period for Interoperability Proposed Rules

On April 19, 2019, the Department of Health and Human Services (HHS) announced a 30-day extension to the comment period for two rules proposed by the HHS Centers for Medicare & Medicaid Services (CMS) and the Office of the National Coordinator for Health Information Technology (ONC), originally published on March 4, 2019. These rules, discussed in greater detail in our March 7, 2019 blog post, seek to increase interoperability and patient access to electronic health information (EHI). The new deadline for comments on both rules is June 3, 2019.

HHS stated that the decision to extend the comment period came after receiving requests from various stakeholders for additional response time. HHS acknowledged that the proposed rules “include a range of issues having major effects on healthcare,” and explained that the extension will allow for more meaningful input from the public.

In addition to the deadline extension, HHS also announced ONC’s release of a second draft of the Trusted Exchange Framework and Common Agreement (TEFCA).  The TEFCA is a proposed set of common principles intended to help develop trust between health information networks (HINs) so that HINs can share EHI more widely across disparate networks.  Ultimately, by establishing a Common Agreement based on these “trust” principles, ONC aims to facilitate access to EHI for patients, providers, payors and other stakeholders.  Comments on the second draft of the TEFCA are due June 17, 2019.

FDA Outlines Proposed Framework for Regulating Artificial Intelligence Software

On April 2, 2019, FDA released a discussion paper entitled “Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)” (the “AI Framework”). The AI Framework is the Agency’s first policy document describing a potential regulatory approach for medical devices that use artificial intelligence (“AI”) and machine learning (“ML”). The AI Framework does not establish new requirements or an official policy, but rather was released by FDA to seek early input prior to the development of a draft guidance. FDA acknowledges that the approach “may require additional statutory authority to implement fully.”

In an accompanying press release, former FDA Commissioner Scott Gottlieb outlined the need for a “more tailored” regulatory paradigm for algorithms that learn and adapt in the real world. FDA’s medical device regulation scheme was not designed for dynamic machine learning algorithms, as the Agency traditionally encounters products that are static at the time of FDA review. The AI Framework is FDA’s attempt to develop “an appropriate framework that allows the software to evolve in ways to improve its performance while ensuring that changes meet [FDA’s] gold standard for safety and effectiveness throughout the product’s lifecycle.”

EU High-Level Working Group Publishes Ethics Guidelines for Trustworthy AI

On 8 April 2019, the EU High-Level Expert Group on Artificial Intelligence (the “AI HLEG”) published its “Ethics Guidelines for Trustworthy AI” (the “guidance”).  This follows a stakeholder consultation on its draft guidelines, published in December 2018 (the “draft guidance”) (see our previous blog post for more information on the draft guidance).  The guidance retains many of the core elements of the draft guidance, but provides a more streamlined conceptual framework and elaborates further on some of the more nuanced aspects, such as the interaction with existing legislation and the reconciliation of tensions between competing ethical requirements.

According to the European Commission’s Communication accompanying the guidance, the Commission will launch a piloting phase starting in June 2019 to collect more detailed feedback from stakeholders on how the guidance can be implemented, with a focus in particular on the assessment list set out in Chapter III.  The Commission plans to evaluate the workability and feasibility of the guidance by the end of 2019, and the AI HLEG will review and update the guidance in early 2020 based on the evaluation of feedback received during the piloting phase.


Council of Europe issues recommendation on health-related data

On March 28, 2019, the Council of Europe* issued a new Recommendation on the protection of health-related data.  The Recommendation calls on all Council of Europe member states to take steps to ensure that the principles for processing health-related data (in both the public and private sector) set out in the Appendix of the Recommendation are reflected in their law and practice.

This Recommendation is likely to be of interest to both public sector and private sector organizations that are seeking to use health-related data in innovative ways, including developing digital health solutions that involve genetic data, scientific research, data sharing or mobile health applications.

The Recommendation builds on Convention 108, an international treaty first ratified in 1981 and the first legally binding international instrument on protecting individuals’ privacy.  Convention 108 has recently been updated to align with the GDPR (see the consolidated text of the modernized Convention 108+), but contains less granular obligations than the GDPR.  The Recommendation complements the modernized Convention 108+ by introducing specific definitions (such as “health-related data” and “genetic data”) and specific principles for processing health data.

Most of the principles on processing health data set out in the Recommendation reiterate the position under the EU General Data Protection Regulation (“GDPR”) and relevant guidance issued by European data protection authorities and the European Data Protection Board (the “EDPB”, previously known as the “Article 29 Working Party”).  The Recommendation does, however, provide some guidance on processing health-related data that is more detailed than, and in some respects goes beyond, the requirements of the GDPR, as described below:

  • Genetic data. The Recommendation provides that genetic data should only be collected subject to appropriate safeguards where collection is either prescribed by law or based on consent (except where such consent is excluded by law).  Genetic data used for preventive health care, diagnosis or treatment of patients, or for scientific research, should only be used for those purposes, or to enable the individuals concerned by the results of the genetic tests to take an informed decision on these matters.  The use of genetic data in the employment context, for insurance purposes, and in judicial procedures or investigations is specifically called out as an area where member states should consider laws providing appropriate safeguards.
  • Sharing health-related data for secondary purposes.  In relation to sharing health-related data for purposes other than providing and administering health care, the Recommendation states that only recipients who are authorized by law should have access to health-related data, with no mention of patients’ consent as a way of legitimizing such access.  This position is potentially more restrictive than the current approach under the GDPR, where third parties not involved in providing health care to patients (such as research or academic institutions or commercial companies) may receive health-related data as long as they do so in compliance with the GDPR.  It remains to be seen whether national laws implementing this Recommendation will treat third parties that lawfully receive health-related data in compliance with the GDPR (for example, with patients’ consent) as meeting this “authorization” requirement.  The Recommendation also states that recipients of health-related data must be subject to the rules of confidentiality incumbent upon a healthcare professional (or equivalent), unless other safeguards are provided by law.
  • Scientific research.  The Recommendation takes a contextual approach to scientific research, providing that the need to process health-related data for scientific research should be weighed against the risks to the data subject (and to their biological family, if genetic data is involved).  Unlike the GDPR, the Recommendation does not automatically qualify scientific research as compatible with the original purposes for which the data was collected.  As a general principle, health-related data should only be processed for research purposes where the data subject has consented, unless the law provides that health-related data can be processed without consent.  Individuals should also be provided with transparent and comprehensible information about the research project.  The Recommendation adds that the conditions in which health-related data are processed for scientific research must be assessed, where necessary, by a competent independent body, such as an ethics committee, and such research projects should be subject to safeguards set out in law.  Fundamentally, the three-part requirements of consent/law, notice and safeguards for using health-related data for research are the same as under the GDPR.  However, in some respects the Recommendation appears to call for a strengthened regime for scientific research using health-related data that goes further than the GDPR.
  • Digital health. Several principles in the Recommendation are clearly relevant for digital health applications, particularly those involving artificial intelligence, machine learning and mobile devices.  The Recommendation provides that systems storing health-related data should be “auditable”, meaning that it should be possible to trace any access to, modification of, and actions carried out on the information system, so that the author can be identified (a minimal code sketch of this idea appears after this list).  The Recommendation also encourages the adoption of “reference frameworks”, i.e., coordinated sets of rules and state-of-the-art processes, adapted to practice and applicable to health information systems, covering areas such as interoperability and security, which should apply to information systems hosting or processing health-related data.  The Recommendation also specifically mentions professionals who are not directly involved in providing individual patient health care but may have access to health-related data to ensure the “smooth operation of information systems” (presumably including cloud system operators).  Such professionals must have full regard for professional secrecy and comply with security requirements laid down by law to guarantee the confidentiality and security of the data.  In relation to mobile devices, the Recommendation makes clear that information collected on mobile devices can constitute health-related data and should therefore receive the same legal protections as other health-related data.
  • Individuals’ rights. The Recommendation provides that individuals should have the right to be informed about, and exercise control over, their health-related data and genetic data, in line with the GDPR.  There are, however, three areas of deviation: (1) individuals should have the right not to be informed of medical diagnoses or the results of genetic tests, as they may have their own reasons for not wishing to know, subject to limited exceptions where they must be informed by law; (2) when individuals withdraw from a scientific research project, they should be informed that their health-related data processed in the context of that research will be destroyed or anonymized in a manner that does not compromise the scientific validity of the research, which appears to be more nuanced than recent guidance from the EDPB; and (3) individuals should have the right to be informed of the reasoning that underlies the processing of their health-related data where the results of such processing are applied to them, particularly if profiling is involved.  This third right is similar to the one in the GDPR (Article 15(1)(h)) but applies more broadly, covering processing beyond the solely automated decision-making with significant effects described in Article 22 of the GDPR.
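By way of illustration only, the “auditability” principle flagged in the digital health bullet above could be approximated as an append-only log recording who did what to which record.  This is a minimal sketch under our own assumptions, not a design prescribed by the Recommendation (the field names, function name, and file format are all hypothetical):

```python
# Hypothetical sketch of an "auditable" health information system in the
# Recommendation's sense: every access, modification, or other action is
# recorded together with its author, so actions can later be traced.
import json
from datetime import datetime, timezone

def log_event(log_path: str, author: str, action: str, record_id: str) -> None:
    """Append one audit entry; an append-only file stands in here for a
    real tamper-evident audit trail."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "author": author,        # who performed the action
        "action": action,        # e.g., "read", "update", "export"
        "record_id": record_id,  # which health record was touched
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: record that a clinician read a patient record.
log_event("audit.log", "dr_smith", "read", "patient-1234")
```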

To the extent that the GDPR does not already impose the same obligations as in the principles of the Recommendation, the Recommendation is not binding on any private sector or public sector organizations.  The member states of the Council of Europe or the European Union, however, are expected to use the Recommendation as guidance when adopting national laws that deal with health data.  These principles also provide some insight into how European data protection authorities are likely to interpret the provisions in the GDPR that apply to health-related data and genetic data, and the direction of future guidance and legislation on the topic.

* The Council of Europe is an international organization, distinct from the European Union, founded in 1949 to promote democracy and protect human rights and the rule of law in Europe.  The Council of Europe consists of 47 member states, including all 28 EU Member States.  Recommendations issued by the Council of Europe are not binding unless the EU or the national governments of member states implement them in legislation, but the EU often builds on Council of Europe standards when drawing up its own legislation.

ICO opens beta phase of privacy “regulatory sandbox”

On 29 March 2019, the ICO opened the beta phase of the “regulatory sandbox” scheme (the “Sandbox”), which is a new service designed to support organizations that are developing innovative and beneficial projects that use personal data.  The application process for participating in the Sandbox is now open, and applications must be submitted to the ICO by noon on Friday 24 May 2019. The ICO has published on its website a Guide to the Sandbox, which explains the scheme in detail.

The purpose of the Sandbox is to support organizations that are developing innovative products and services using personal data, and to develop a shared understanding of what compliance looks like in particular innovative areas.  Organizations participating in the Sandbox are likely to benefit from the opportunity to liaise directly with the regulator on innovative projects raising complex data protection issues.  The Sandbox will also be an opportunity for market leaders in innovative technologies to influence the ICO’s approach to use cases that involve challenging aspects of data protection compliance or where there is uncertainty about what compliance looks like.

The beta phase of the Sandbox is planned to run from July 2019 to September 2020.  Around 10 organizations from private, public and third sectors will be selected to participate. In the beta phase, the ICO is focusing on data processing that falls within the remit of UK data protection law.

In particular, the ICO is seeking applications for products or services that address the following data protection challenges relevant to innovation:

  • use of personal data in emerging or developing technology such as biometrics, internet of things (IoT), facial recognition, wearable tech, cloud-based products;
  • complex data sharing at any and all levels;
  • building good user experience and public trust by ensuring transparency, clarity and explainability of data use;
  • perceived limitations, or lack of understanding of the General Data Protection Regulation and Data Protection Act 2018 provisions on automated decision making, big data, machine learning or AI;
  • utilising existing data (often at scale and in linking data) for new purposes or for longer retention periods;
  • building ‘data protection by design and default’ into product development, taking account of cost issues and difficulties of doing this until testing has been undertaken; or
  • ensuring the security of data and identifying data breaches in complex and innovative environments.

Participating organizations will be asked to sign terms and conditions with the ICO and will receive a statement of ‘comfort from enforcement’, confirming that the ICO will not take immediate enforcement action for any inadvertent breach of data protection law arising from product or service development during the Sandbox.

The ICO will work with participating organizations to design a bespoke plan, and provide informal advice or ‘steers’ on the project.  Participating organizations can also request ‘statements of regulatory comfort’ from the ICO when they exit the Sandbox, in which the ICO will state that on the basis of the information provided whilst in the Sandbox, the ICO did not encounter any indication that the product or service would infringe data protection law.

The ICO conducted a consultation on the Sandbox in September 2018 (see our previous blog post here), and the analysis of the results of the consultation was published in November 2018.  Information about how to apply to the Sandbox can be found here.

UK’s NICE releases newly updated Digital Health Technologies (DHT) Evidence Standards Framework

Following on from the Evidence Standards Framework for DHTs published in December 2018 (the Original Standards, as reported in our previous blog post, here), the UK’s National Institute for Health and Care Excellence (NICE) recently published a newly updated version of the standards (the Updated Standards, available here).

The Updated Standards were produced following feedback received by NICE on the Original Standards.  While the spirit of the latest standards is largely the same, the Updated Standards seek to build out the aims and context for the document, including through a new accompanying “User Guide” (which supersedes the previous “FAQs”).  The User Guide covers the background of the standards, their development and use, and future priorities identified by stakeholders, along with a glossary of key terms, such as the Updated Standards’ interpretation of ‘Artificial Intelligence’ and ‘Real-world Data’.  With respect to the economic impact section of the Updated Standards, a new ‘Basic’ level has been added for ‘low impact’ DHTs undergoing local service evaluation, where a budget impact analysis, rather than an economic analysis, would be appropriate.

NICE has also provided supporting case studies to help demonstrate how DHTs are functionally classified (here), and assessed for effectiveness and economic impact (here), respectively, under the Updated Standards.
