German Government Enacts Digital Care Act

The new Digital Care Act (Digitale-Versorgung-Gesetz) is part of Germany’s broader effort to digitize its healthcare system, a path Germany has pursued since the ‘E-Health Act’ of 2016. The aim of the ‘E-Health Act’ was to establish information and communication technology in healthcare. It focuses in particular on the development of the ‘electronic health card’ and the corresponding ‘electronic patient file’ for people with statutory health insurance (see below for more information on these applications), the protection of the data stored in such files against unauthorized use, the creation of a secure ‘telematics infrastructure’, the improvement of the interoperability of healthcare IT systems, and the provision of telemedical services. The ‘telematics infrastructure’ will be an interoperable and compatible information, communication and security infrastructure for the use of the ‘electronic health card’ and the corresponding ‘electronic patient file’, their applications, and other electronic applications in healthcare and health research.

The new Digital Care Act builds upon the ‘E-Health Act’: medical doctors will be allowed to prescribe not only traditional medicines and treatment methods to their patients, but also health apps. Such health apps may, for example, remind chronically ill people to take their medicine regularly, or provide a diary function where users can note their daily well-being. In the future, German statutory health insurance funds will have to reimburse the costs of health apps under certain conditions. First, the health app must be tested for data security, data protection and functionality by the German Federal Institute for Drugs and Medical Devices (‘BfArM’). After a successful test and launch, statutory health insurance funds will reimburse the costs provisionally for one year. During this period, the manufacturer of the health app must prove to the BfArM that its health app improves patient care. The reimbursement amount will then be negotiated with the German Association of Health Insurance Funds (GKV-Spitzenverband).


ICO’s Call for Input on Bias and Discrimination in AI Systems

On June 25, 2019, as part of its continuing work on the AI Auditing Framework, the UK Information Commissioner’s Office (ICO) published a blog post setting out its views on human bias and discrimination in AI systems. The ICO has also called for input on specific questions relating to human bias and discrimination, set out below.

The ICO explains in the blog post how flaws in training data can result in algorithms that perpetuate or magnify unfair biases. The ICO identifies three broad approaches to mitigate this risk in machine learning models (a brief code sketch illustrating them follows the list):

  1. Anti-classification: making sure that algorithms do not make judgments based on protected characteristics such as sex, race or age, or on proxies for protected characteristics (e.g., occupation or post code).
  2. Outcome and error parity: comparing how the model treats different groups. Outcome parity means all groups should receive positive and negative outcomes at equal rates. Error parity means all groups should experience equal error rates (such as false positive or false negative rates). A model is fair if it achieves outcome parity and error parity across members of different protected groups.
  3. Equal calibration: comparing the model’s estimate of the likelihood of an event with the actual frequency of that event for different groups. A model is fair if it is equally calibrated between members of different protected groups.
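To make the parity and calibration checks concrete, here is a minimal sketch of how they might be computed for a binary classifier. This is our illustration, not code from the ICO’s blog post; the function name, the 0.5 decision threshold, and the synthetic data for two hypothetical groups are all assumptions. (Anti-classification, by contrast, is a pipeline choice: excluding protected characteristics and their proxies from the model’s inputs.)

```python
# Illustrative sketch only: compares outcome rates, error rates, and a
# single-threshold calibration estimate across groups. All names and data
# below are hypothetical, not taken from the ICO blog post.
import numpy as np

def fairness_report(y_true, y_pred, y_score, groups):
    """Per-group outcome rate, error rates, and calibration at one threshold."""
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        yt, yp, ys = y_true[mask], y_pred[mask], y_score[mask]
        report[g] = {
            # Outcome parity: share of the group receiving a positive outcome.
            "positive_rate": yp.mean(),
            # Error parity: false positive and false negative rates.
            "false_positive_rate": ((yp == 1) & (yt == 0)).sum() / max((yt == 0).sum(), 1),
            "false_negative_rate": ((yp == 0) & (yt == 1)).sum() / max((yt == 1).sum(), 1),
            # Equal calibration (crude, single bin): among cases scored at or
            # above the threshold, how often the event actually occurs.
            "calibration_at_threshold": yt[ys >= 0.5].mean() if (ys >= 0.5).any() else None,
        }
    return report

# Synthetic usage with random data for two hypothetical groups, "A" and "B".
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)          # actual outcomes
y_score = rng.random(1000)                 # model scores in [0, 1]
y_pred = (y_score >= 0.5).astype(int)      # assumed 0.5 decision threshold
groups = rng.choice(["A", "B"], 1000)      # hypothetical protected groups
for g, stats in fairness_report(y_true, y_pred, y_score, groups).items():
    print(g, stats)
```

In practice, calibration is usually assessed across several score bins rather than at a single threshold, and gaps between groups should be checked for statistical significance before drawing conclusions.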

The blog post also stresses the importance of appropriate governance measures to manage the risks of discrimination in AI systems. Organizations may take different approaches depending on the purpose of the algorithm, but they should document the approach adopted from start to finish. The ICO also recommends that organizations adopt clear, effective policies and practices for collecting representative training data to reduce discrimination risk; that organizations’ governing bodies be involved in approving anti-discrimination approaches; and that organizations monitor algorithms continually, testing them regularly to identify unfair biases. Organizations should also consider using a diverse team when implementing AI systems, which can provide additional perspectives that may help to spot areas of potential discrimination.

The ICO seeks input from industry stakeholders on two questions:

  • If your organisation is already applying measures to detect and prevent discrimination in AI, what measures are you using or have you considered using?
  • In some cases, if an organisation wishes to test the performance of their ML model on different protected groups, it may need access to test data containing labels for protected characteristics. In these cases, what are the best practices for balancing non-discrimination and privacy requirements?

The ICO also continues to seek input from industry on the development of an auditing framework for AI; organizations should contact the ICO if they wish to provide feedback.

UK Government’s Guide to Using AI in the Public Sector

On June 10, 2019, the UK Government Digital Service and the Office for Artificial Intelligence released guidance on using artificial intelligence in the public sector (the “Guidance”). The Guidance offers public sector organizations practical advice on implementing artificial intelligence (AI) solutions.

The Guidance will be of interest to companies that provide AI solutions to UK public sector organizations, as it will influence what kinds of AI projects public sector organizations will be interested in pursuing, and the processes that they will go through to implement AI systems.  Because the UK’s National Health Service (NHS) is a public sector organization, this Guidance is also likely to be relevant to digital health service providers that are seeking to provide AI technologies to NHS organizations.

The Guidance consists of three sections: (1) understanding AI; (2) assessing, planning and managing AI; and (3) using AI ethically and safely. The Guidance also links to summaries of examples where AI systems have been used in the public sector and elsewhere.


Senators Introduce Legislation to Regulate Privacy and Security of Wearable Health Devices and Genetic Testing Kits

Last week, Senators Amy Klobuchar (D-MN) and Lisa Murkowski (R-AK) introduced the Protecting Personal Health Data Act (S. 1842), which would direct the Department of Health and Human Services (“HHS”) to create new privacy and security rules for technologies that collect personal health data, such as wearable fitness trackers, social media sites focused on health data or conditions, and direct-to-consumer genetic testing services, among other technologies. Specifically, the legislation would direct the HHS Secretary to issue regulations relating to the privacy and security of health-related consumer devices, services, applications, and software. These new regulations would also cover a new category of personal health data that is not otherwise protected health information under HIPAA.

The Protecting Personal Health Data Act is particularly notable for three reasons. First, the bill would incorporate consumer rights concepts from the EU General Data Protection Regulation (“GDPR”) at the U.S. federal level, such as an individual’s right to delete and amend her health data, as well as a right to access a copy of her personal health data. Second, the bill does not contemplate situations where entities are required to retain personal health data under other regulations (though it includes an exception for entities covered under the Health Insurance Portability and Accountability Act). Third, the bill would require HHS to establish a national health task force to report to Congress, and it specifies that any other federal agency guidance or published resources to help protect personal health data must be consistent, to the degree practicable, with the HHS Secretary’s rules under the bill. This may reflect an expansion of HHS’s authority to set rules and standards for health data previously regulated by other federal agencies (such as the Federal Trade Commission (“FTC”)).

The bill would require HHS, in consultation with the FTC and other relevant stakeholders, to promulgate regulations that “strengthen privacy and security protections for consumers’ personal health data” collected, processed, analyzed, or used by health-related consumer devices, services, applications, and software.

The HHS regulations must address:

  • differences in the nature and sensitivity of data collected or stored by different devices, applications, services, and software;
  • the “appropriate uniform standards for consent” for handling of genetic, biometric, and personal health data as well as appropriate exceptions;
  • minimum security standards;
  • the appropriate standard for de-identification of personal health data; and
  • limits on collection, use, and disclosure of data to those “directly relevant and necessary to accomplish a specific purpose.”

In addition, the bill would require the new HHS regulations to provide individuals with the right to delete and amend their personal health data, to the extent practicable. It would also direct HHS to consider developing standards for obtaining user consent to data sharing.

The Act would also create a National Task Force on Health Data Protection to study health data.  The Task Force would be required to:

  • evaluate the long-term effectiveness of de-identification techniques for genetic and biometric data;
  • evaluate the development of security standards, including encryption standards and transfer protocols;
  • offer input for cybersecurity and privacy risks of devices;
  • provide advice for the dissemination of resources to educate consumers about genetics and direct-to-consumer genetic testing; and
  • submit a report to Congress no later than one year after the bill’s enactment.

A companion bill has not yet been introduced in the House of Representatives.  California is also considering a bill that would expand California’s health privacy law to include any information in possession of or derived from a digital health feedback system, which is broadly defined to include sensors, devices, and internet platforms connected to those sensors or devices that receive information about an individual.

FDA Announces Public Workshop on Using Randomized Clinical Trials to Generate Real-World Evidence

Today, FDA published a notice in the Federal Register announcing a public workshop on July 11-12 entitled “Leveraging Randomized Clinical Trials to Generate Real-World Evidence for Regulatory Purposes.”  This workshop builds on FDA’s ongoing efforts to implement the Real-World Evidence (RWE) Framework published in December 2018, which we previously discussed in this post.

According to the Federal Register notice, the public workshop is being convened by Duke University’s Robert J. Margolis, MD, Center for Health Policy (Duke Margolis) and supported by a cooperative agreement with FDA. The workshop will cover considerations for using randomized clinical trial designs and real-world data (RWD) to generate RWE, including the following:

  • the selection of interventions appropriate in clinical care settings;
  • study design elements and study populations;
  • capturing outcomes in clinical care settings;
  • addressing potential challenges around blinding, randomization, and bias; and
  • regulatory considerations for randomized clinical trials using RWD (e.g., safety and product monitoring and maintaining data integrity).

This public workshop will follow on the heels of a meeting held by the Department of Health and Human Services (HHS) to “seek public input and comment on opportunities to leverage departmental resources, increase collaboration, and to partner with private stakeholders in the service of accelerating the process for clinical innovation in the United States.”  The HHS meeting also was announced recently in the Federal Register and will take place on June 20-21.

ICO’s Interim Report on Explaining AI

On June 3, 2019, the UK Information Commissioner’s Office (“ICO”) released an Interim Report on “Project ExplAIn,” a collaboration with The Alan Turing Institute (the “Institute”). The purpose of the project, according to the ICO, is to develop “practical guidance” for organizations on complying with UK data protection law when using artificial intelligence (“AI”) decision-making systems, and in particular on explaining the impact AI decisions may have on individuals. The Interim Report may be of particular relevance to organizations considering how to meet transparency obligations when deploying AI systems that make automated decisions falling within the scope of Article 22 of the GDPR.

HHS Updates Maximum Annual Penalty Limits for Some HIPAA Violations

On April 30, 2019, the Department of Health and Human Services (HHS) published in the Federal Register a notification of enforcement discretion indicating that it will lower the annual Civil Money Penalty (CMP) limits for three of the four penalty tiers in the Health Information Technology for Economic and Clinical Health Act (HITECH Act).  The HITECH Act categorizes violations of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) in four tiers based on the violators’ level of culpability for the violation: the person did not know (and, by exercising reasonable diligence, would not have known) that the person violated the provision (Tier 1); the violation was due to reasonable cause, and not willful neglect (Tier 2); the violation was due to willful neglect that is timely corrected (Tier 3); and the violation was due to willful neglect that is not timely corrected (Tier 4).

The maximum annual penalty for all four tiers was previously $1.5 million. Under HHS’s new policy, the annual penalty limit for Tier 1 violations decreases from $1.5 million to $25,000. The new annual penalty limits for Tier 2 and Tier 3 violations are $100,000 and $250,000, respectively. The annual penalty limit for Tier 4 violations remains at $1.5 million.

When HHS reviewed the penalty limits in 2013 in response to public comments, the agency indicated that setting the same annual limit for all four violation tiers “reflect[ed] the most logical reading of the HITECH Act, which provides the Secretary with discretion to impose penalties for each category of culpability up to the maximum amount described in the highest penalty tier.” In explaining the agency’s recent policy change, HHS said that “[u]pon further review of the statute by the HHS Office of the General Counsel, HHS has determined that the better reading of the HITECH Act is to apply annual limits [in accordance with the new policy].”

HHS will penalize HIPAA violations in accordance with the new tier limits until further notice. The agency plans to engage in rulemaking to revise the tiers in the regulation. This new maximum penalty structure provides even more incentive for covered entities and business associates to implement robust HIPAA policies and procedures, monitor and audit to detect noncompliance, and correct any instances of noncompliance promptly.

HHS Clarifies HIPAA Liability for EHR System Developers that Transfer Data to Health Apps

On Friday, April 19, 2019, the Office for Civil Rights of the U.S. Department of Health and Human Services (HHS) explained in an FAQ the circumstances under which electronic health record (EHR) system developers may face liability under the Health Insurance Portability and Accountability Act of 1996 (HIPAA) for an app’s impermissible use or disclosure of electronic protected health information (ePHI). As long as the app is independent of the covered entity and its EHR system and is instead controlled by the individual patient, neither the covered entity nor the EHR system developer bears HIPAA liability once the ePHI is delivered to the app at the patient’s request.

In its FAQ, HHS specified that if, at the request of a patient, a HIPAA covered entity’s EHR system transfers ePHI to an app that is not developed by or specifically provided to the covered entity by the EHR system, neither the covered entity nor the EHR system developer would face HIPAA liability for the app’s subsequent impermissible use or disclosure of the information. But if an EHR system transfers patient data from a covered entity to an app that the EHR system provides “through, or on behalf of, the covered entity (directly or through another business associate)” and either owns the app or has a business relationship with the app developer, the EHR system developer may be subject to HIPAA liability for subsequent impermissible use or disclosure of the ePHI.

This attempt to clarify the boundaries of HIPAA liability will likely be welcomed by a wide range of covered entities, EHR systems, and developers of apps that process ePHI, including apps that connect patients with doctors, pharmacy apps, and apps that focus on fertility, mental health, smoking cessation, and more. Patients, on the other hand, should be aware that the information being collected by an app (which can be substantial and sensitive, depending on the nature of the app) has no protection under HIPAA unless the app was offered to them by a covered entity as part of its overall EHR system.

Update: HHS Extends Comment Period for Interoperability Proposed Rules

On April 19, 2019, the Department of Health and Human Services (HHS) announced a 30-day extension to the comment period for two rules proposed by the HHS Centers for Medicare & Medicaid Services (CMS) and the Office of the National Coordinator for Health Information Technology (ONC), originally published on March 4, 2019. These rules, discussed in greater detail in our March 7, 2019 blog post, seek to increase interoperability and patient access to electronic health information (EHI). The new deadline for comments on both rules is June 3, 2019.

HHS stated that the decision to extend the comment period came after receiving requests from various stakeholders for additional response time. HHS acknowledged that the proposed rules “include a range of issues having major effects on healthcare,” and explained that the extension will allow for more meaningful input from the public.

In addition to the deadline extension, HHS also announced ONC’s release of a second draft of the Trusted Exchange Framework and Common Agreement (TEFCA). The TEFCA is a proposed set of common principles intended to help develop trust between health information networks (HINs) so that HINs can share EHI more widely across disparate networks. Ultimately, by establishing a Common Agreement based on these “trust” principles, ONC aims to facilitate access to EHI for patients, providers, payors and other stakeholders. Comments on the second draft of the TEFCA are due June 17, 2019.

FDA Outlines Proposed Framework for Regulating Artificial Intelligence Software

On April 2, 2019, FDA released a discussion paper entitled “Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)” (the “AI Framework”). The AI Framework is the Agency’s first policy document describing a potential regulatory approach for medical devices that use artificial intelligence (“AI”) and machine learning (“ML”). The AI Framework does not establish new requirements or an official policy, but rather was released by FDA to seek early input prior to the development of a draft guidance. FDA acknowledges that the approach “may require additional statutory authority to implement fully.”

In an accompanying press release, former FDA Commissioner Scott Gottlieb outlined the need for a “more tailored” regulatory paradigm for algorithms that learn and adapt in the real world. FDA’s medical device regulation scheme was not designed for dynamic machine learning algorithms, as the Agency traditionally encounters products that are static at the time of FDA review. The AI Framework is FDA’s attempt to develop “an appropriate framework that allows the software to evolve in ways to improve its performance while ensuring that changes meet [FDA’s] gold standard for safety and effectiveness throughout the product’s lifecycle.”
