Top 10 Questions During Ideation of Digital Health Solutions

Our clients increasingly apply agile product and business development methodologies when they are developing digital health solutions.  “Ideation” is the part of that process that involves the rapid identification and creation of ideas for digital health solutions, which are then prototyped and tested.  Covington has created a list of Top 10 Questions for Ideation of Digital Health Solutions that can help lawyers contribute to the digital health ideation process.

In today’s video post, we discuss the intended use of a digital health solution and how lawyers can play a key role in discussing this topic.  Over the next nine weeks, we will post a video explaining each of the remaining nine questions.

Commission relaunches eHealth Stakeholder Group

On 13 August 2019, the European Commission opened a call for expression of interest to relaunch the eHealth Stakeholder Group with a view to supporting the “digital transformation of healthcare in the EU”. The eHealth Stakeholder Group was first launched in 2012 and, in its first iteration (2012 to 2015), contributed to the development of the Digital Agenda for Europe on eHealth and the eHealth Action Plan. In 2016, the Commission relaunched the Stakeholder Group, and between 2016 and 2018 the group assisted with the Digital Single Market Strategy and the eHealth Action Plan 2012-2020.

The Commission is now seeking representatives of European umbrella organisations active in the eHealth sector to relaunch the stakeholder group for a term of three years. Selected stakeholders will be expected to provide advice and expertise to support policy development, in particular in relation to the following areas:

  • Health data.
  • Digital health services.
  • Health data protection and privacy issues.
  • Cybersecurity for health and care data.
  • Digital tools for citizen empowerment and person-centred care.
  • Artificial intelligence and health.
  • Other cross-cutting aspects linked to the digital transformation of health and care, such as financing and investment proposals and enabling technologies.

The group will also engage with, and seek input from, representatives and organisations across society, including academics, healthcare professionals, patient groups and the technology industry.

The call is open until 27 September 2019, and the selection criteria can be viewed on the Commission’s website here.

ICO publishes blog post on AI and trade-offs between data protection principles

On July 25, 2019, the UK’s Information Commissioner’s Office (“ICO”) published a blog post on the trade-offs between different data protection principles that can arise when using Artificial Intelligence (“AI”).  The ICO recognizes that AI systems must comply with several data protection principles and requirements, which at times may pull organizations in different directions.  The blog post identifies notable trade-offs that may arise, provides some practical tips for resolving them, and offers worked examples on visualizing and mathematically minimizing trade-offs.

The ICO invites organizations with experience of considering these complex issues to provide their views.  This recent blog post on trade-offs is part of the ICO’s ongoing Call for Input on developing a new framework for auditing AI.  See also our earlier blog post on the ICO’s call for input on bias and discrimination in AI systems here.

ICO Launches Public Consultation on New Data Sharing Code of Practice

On July 16, 2019, the UK’s Information Commissioner’s Office (“ICO”) released a new draft Data sharing code of practice (“draft Code”), which provides practical guidance for organizations on how to share personal data in a manner that complies with data protection laws.  The draft Code focuses on the sharing of personal data between controllers, with a section referring to other ICO guidance on engaging processors.  The draft Code reiterates a number of legal requirements from the GDPR and the Data Protection Act 2018 (“DPA”), while also including good practice recommendations to encourage compliance.  The draft Code is open for public consultation until September 9, 2019; once finalized, it will replace the existing Data sharing code of practice (“existing Code”).

French medicines regulator produces first-in-Europe medical device cybersecurity guidelines

France’s medicines regulator, the Agence Nationale de Sécurité du Médicament et des Produits de Santé (ANSM), has released draft guidelines, currently subject to a public consultation, setting out recommendations designed to help manufacturers prevent cybersecurity attacks on medical devices. Notably, the draft guidelines are the first recommendations released by a national regulator in Europe that apply cybersecurity considerations specifically to medical devices. The full ANSM draft guidelines, ‘Cybersécurité des dispositifs médicaux intégrant du logiciel au cours de leur cycle de vie’ (‘Cybersecurity of medical devices integrating software during their life cycle’), published on 19 July 2019, are available in French here and in English here.

The draft guidelines note that the European regulatory framework (the Medical Devices Regulation 2017/745 and the In Vitro Diagnostic Medical Devices Regulation 2017/746) has been modified “in line with technological developments” (e.g., “data exchange, monitoring, risk prediction and control software”) to include software within the definition of a medical device, together with accompanying security and performance requirements specific to medical devices incorporating software. Nevertheless, the “[medical device and in vitro diagnostic medical device r]egulations do not explicitly refer to or elaborate on the notion of cybersecurity”. For the purposes of the guidelines, ‘cybersecurity’ is described as “the full set of technical or organisational measures set up to ensure the integrity and availability of a [medical device] and the confidentiality of the information held on or output by this [medical device] against the risk of targeted attacks.”

In overview, the draft guidelines require manufacturers to undertake risk assessments using both IT and medical device risk management methodologies, and then to align these approaches as part of their implementation of quality management systems. The recommendations are subdivided into areas representing different parts of the product life cycle, including: software design; initialization (first use); monitoring (post-market management); and end of life of the medical device software.

The draft guidelines also refer to the French ‘General Security Framework’, from which “the criteria of availability, integrity and confidentiality are the baseline objectives to fulfil in terms of security”, and note that “various documents and tools provided by the ANSSI [the French National Security Agency] are also applicable to [medical devices].” Further, the draft guidelines introduce a criterion of ‘auditability’ that medical device manufacturers must additionally address.

ANSM has shared its work in this area with the European Commission in the hope that “the [European] regulations evolve to integrate [ANSM’s work]”, as this is the first time that such recommendations have been drafted in the EU. The draft guidelines are subject to public consultation until 30 September 2019.

German Government Enacts Digital Care Act

The new Digital Care Act (Digitale-Versorgung-Gesetz) is part of Germany’s efforts to expand the digitization of its healthcare system. Germany has been pursuing this path since the so-called ‘E-Health Act’ of 2016, which aimed to establish information and communication technology in healthcare. The E-Health Act focuses in particular on: the development of the ‘electronic health card’ and the corresponding ‘electronic patient file’ for people with statutory health insurance (see below for more information on these applications); the protection of the data stored in such files against unauthorised use; the creation of a secure ‘telematics infrastructure’; the improvement of the interoperability of healthcare IT systems; and the provision of telemedical services. The ‘telematics infrastructure’ will be an interoperable and compatible information, communication and security infrastructure for the use of the ‘electronic health card’ and the corresponding ‘electronic patient file’, its applications and other electronic applications in healthcare and health research.

The new Digital Care Act builds upon the E-Health Act by focusing on the following: medical doctors will be allowed to prescribe not only traditional medicines and treatment methods to their patients, but also health apps. Such health apps may, for example, remind chronically ill people to take their medicine regularly, or provide a diary function where users can note their daily well-being. In the future, German statutory health insurance funds will have to reimburse the costs of health apps under certain conditions. Initially, a health app will be tested for data security, data protection and functionality by the German Federal Institute for Drugs and Medical Devices (‘BfArM’). After a successful test and launch, statutory health insurance funds will reimburse the costs provisionally for one year. During this period, the manufacturer of the health app must prove to the BfArM that the app improves patient care. The reimbursement amount will then be negotiated with the German Association of Health Insurance Funds (GKV-Spitzenverband).

ICO’s Call for Input on Bias and Discrimination in AI Systems

On June 25, 2019, as part of its continuing work on the AI Auditing Framework, the UK Information Commissioner’s Office (ICO) published a blog post setting out its views on human bias and discrimination in AI systems. The ICO has also called for input on specific questions relating to human bias and discrimination, set out below.

The ICO explains in its blog post how flaws in training data can result in algorithms that perpetuate or magnify unfair biases. The ICO identifies three broad approaches to mitigating this risk in machine learning models (a short code sketch after the list illustrates the underlying metrics):

  1. Anti-classification: making sure that algorithms do not make judgments based on protected characteristics such as sex, race or age, or on proxies for protected characteristics (e.g., occupation or postcode);
  2. Outcome and error parity: comparing how the model treats different groups. Outcome parity means all groups should have equal rates of positive and negative outcomes. Error parity means all groups should have equal error rates (such as false positive or false negative rates). A model is fair on this measure if it achieves outcome parity and error parity across members of different protected groups; and
  3. Equal calibration: comparing the model’s estimate of the likelihood of an event against the actual frequency of that event for different groups. A model is fair on this measure if it is equally calibrated between members of different protected groups.
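
To make these criteria concrete, here is a minimal Python sketch (a hypothetical illustration, not taken from the ICO’s materials; all function and variable names are our own) that computes per-group statistics for a binary classifier: the positive-outcome rate (outcome parity), false positive and false negative rates (error parity), and, as a rough proxy for calibration, how often the predicted event actually occurred within each group.

```python
# Hypothetical sketch: per-group fairness statistics for a binary classifier.
# y_true / y_pred are 0/1 actual and predicted labels; groups holds the
# protected-characteristic value for each record. Illustrative only.
from collections import defaultdict

def group_fairness_report(y_true, y_pred, groups):
    stats = defaultdict(lambda: {"n": 0, "actual_pos": 0, "pred_pos": 0,
                                 "tp": 0, "fp": 0, "fn": 0})
    for t, p, g in zip(y_true, y_pred, groups):
        s = stats[g]
        s["n"] += 1
        s["actual_pos"] += t
        s["pred_pos"] += p
        s["tp"] += t and p            # predicted positive, actually positive
        s["fp"] += (1 - t) and p      # predicted positive, actually negative
        s["fn"] += t and (1 - p)      # predicted negative, actually positive
    report = {}
    for g, s in stats.items():
        negatives = s["n"] - s["actual_pos"]
        report[g] = {
            # Outcome parity: compare this rate across groups.
            "positive_rate": s["pred_pos"] / s["n"],
            # Error parity: compare these error rates across groups.
            "false_positive_rate": s["fp"] / negatives if negatives else None,
            "false_negative_rate": s["fn"] / s["actual_pos"] if s["actual_pos"] else None,
            # Rough calibration proxy: among predicted positives, how often
            # the event actually occurred, per group.
            "precision": s["tp"] / s["pred_pos"] if s["pred_pos"] else None,
        }
    return report

# Large gaps between groups on any metric flag a potential fairness issue
# under the corresponding criterion described above.
print(group_fairness_report(
    y_true=[1, 0, 1, 1, 0, 0],
    y_pred=[1, 0, 0, 1, 1, 0],
    groups=["A", "A", "A", "B", "B", "B"],
))
```

Strictly, equal calibration compares predicted probabilities with observed frequencies across score bands; the per-group precision above is only a coarse stand-in for a model that outputs hard labels.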

The blog post stresses the importance of appropriate governance measures to manage the risks of discrimination in AI systems. Organizations may take different approaches depending on the purpose of the algorithm, but they should document the approach adopted from start to finish. The ICO also recommends that organizations adopt clear, effective policies and practices for collecting representative training data to reduce discrimination risk; that organizations’ governing bodies be involved in approving anti-discrimination approaches; and that organizations continually monitor algorithms by testing them regularly to identify unfair biases. Organizations should also consider using a diverse team when implementing AI systems, which can provide additional perspectives that may help to spot areas of potential discrimination.

The ICO seeks input from industry stakeholders on two questions:

  • If your organisation is already applying measures to detect and prevent discrimination in AI, what measures are you using or have you considered using?
  • In some cases, if an organisation wishes to test the performance of their ML model on different protected groups, it may need access to test data containing labels for protected characteristics. In these cases, what are the best practices for balancing non-discrimination and privacy requirements?

The ICO also continues to seek input from industry on the development of an auditing framework for AI; organizations should contact the ICO if they wish to provide feedback.

UK Government’s Guide to Using AI in the Public Sector

On June 10, 2019, the UK Government’s Digital Service and the Office for Artificial Intelligence released guidance on using artificial intelligence (AI) in the public sector (the “Guidance”).  The Guidance aims to give public sector organizations practical advice on implementing AI solutions.

The Guidance will be of interest to companies that provide AI solutions to UK public sector organizations, as it will influence what kinds of AI projects public sector organizations will be interested in pursuing, and the processes that they will go through to implement AI systems.  Because the UK’s National Health Service (NHS) is a public sector organization, this Guidance is also likely to be relevant to digital health service providers that are seeking to provide AI technologies to NHS organizations.

The Guidance consists of three sections: (1) understanding AI; (2) assessing, planning and managing AI; and (3) using AI ethically and safely, each summarized below. The Guidance also links to summaries of examples where AI systems have been used in the public sector and elsewhere.

Senators Introduce Legislation to Regulate Privacy and Security of Wearable Health Devices and Genetic Testing Kits

Last week, Senators Amy Klobuchar (D-MN) and Lisa Murkowski (R-AK) introduced the Protecting Personal Health Data Act (S. 1842), which would require the Department of Health and Human Services (“HHS”) to issue new privacy and security rules for technologies that collect personal health data, such as wearable fitness trackers, social media sites focused on health data or conditions, and direct-to-consumer genetic testing services, among other technologies.  Specifically, the legislation would direct the HHS Secretary to issue regulations relating to the privacy and security of health-related consumer devices, services, applications, and software.  These new regulations would also cover a new category of personal health data that is not otherwise protected health information under HIPAA.

The Protecting Personal Health Data Act is particularly notable for three reasons.  First, the bill would incorporate consumer rights concepts from the EU General Data Protection Regulation (“GDPR”) at the U.S. federal level, such as an individual’s right to delete and amend her health data, as well as a right to access a copy of personal health data.  Second, the bill does not contemplate situations where entities are required to retain personal health data under other regulations (though the bill includes an exception for entities covered under the Health Insurance Portability and Accountability Act).  Third, the bill would require HHS to establish a national health task force to provide reports to Congress, and it specifies that any other federal agency guidance or published resources intended to help protect personal health data must be consistent with the HHS Secretary’s rules under the bill, to the degree practicable.  This may reflect an expansion of HHS’s authority to set rules and standards for health data previously regulated by other federal agencies (such as the Federal Trade Commission (“FTC”)).

The bill would require HHS, in consultation with the FTC and other relevant stakeholders, to promulgate regulations that “strengthen privacy and security protections for consumers’ personal health data” collected, processed, analyzed, or used by health-related consumer devices, services, applications, and software.

The HHS regulations must address:

  • differences in the nature and sensitivity of data collected or stored by different devices, applications, services, and software;
  • the “appropriate uniform standards for consent” for handling of genetic, biometric, and personal health data, as well as appropriate exceptions;
  • minimum security standards;
  • the appropriate standard for de-identification of personal health data; and
  • limits on collection, use, and disclosure of data to those “directly relevant and necessary to accomplish a specific purpose.”

In addition, the bill would require the new HHS regulations to provide individuals with the right to delete and amend their personal health data, to the extent practicable.  It also would direct HHS to consider developing standards for obtaining user consent to data sharing.

The Act would also create a National Task Force on Health Data Protection to study health data.  The Task Force would be required to:

  • evaluate the long-term effectiveness of de-identification techniques for genetic and biometric data;
  • evaluate the development of security standards, including encryption standards and transfer protocols;
  • offer input on the cybersecurity and privacy risks of devices;
  • provide advice on the dissemination of resources to educate consumers about genetics and direct-to-consumer genetic testing; and
  • submit a report to Congress no later than one year after the bill’s enactment.

A companion bill has not yet been introduced in the House of Representatives.  California is also considering a bill that would expand the state’s health privacy law to cover any information in the possession of, or derived from, a digital health feedback system, which is broadly defined to include sensors, devices, and internet platforms connected to those sensors or devices that receive information about an individual.

FDA Announces Public Workshop on Using Randomized Clinical Trials to Generate Real-World Evidence

Today, FDA published a notice in the Federal Register announcing a public workshop on July 11-12 entitled “Leveraging Randomized Clinical Trials to Generate Real-World Evidence for Regulatory Purposes.”  This workshop builds on FDA’s ongoing efforts to implement the Real-World Evidence (RWE) Framework published in December 2018, which we previously discussed in this post.

According to the Federal Register notice, the public workshop is being convened by Duke University’s Robert J. Margolis, MD, Center for Health Policy (Duke Margolis) and supported by a cooperative agreement with FDA.  The workshop will cover considerations for using randomized clinical trial designs and real-world data (RWD) to generate RWE, including the following:

  • the selection of interventions appropriate in clinical care settings;
  • study design elements and study populations;
  • capturing outcomes in clinical care settings;
  • addressing potential challenges around blinding, randomization, and bias; and
  • regulatory considerations for randomized clinical trials using RWD (e.g., safety and product monitoring and maintaining data integrity).

This public workshop will follow on the heels of a meeting to be held by the Department of Health and Human Services (HHS) to “seek public input and comment on opportunities to leverage departmental resources, increase collaboration, and to partner with private stakeholders in the service of accelerating the process for clinical innovation in the United States.”  The HHS meeting was also recently announced in the Federal Register and will take place on June 20-21.
