On 11 November 2020, the European Data Protection Board (“EDPB”) issued two draft recommendations relating to the rules on how organizations may lawfully transfer personal data from the EU to countries outside the EU (“third countries”).  These draft recommendations, which are non-final and open for public consultation until 30 November 2020, follow the EU Court of Justice (“CJEU”) decision in Case C-311/18 (“Schrems II”).  (For a more in-depth summary of the CJEU decision, please see our blog post here and our audiocast here. The EDPB also published FAQs on the Schrems II decision on 24 July 2020, available here.)

The two recommendations adopted by the EDPB are:

  • Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data; and
  • Recommendations 02/2020 on the European Essential Guarantees for surveillance measures.

Continue Reading EDPB adopts recommendations on international data transfers following Schrems II decision

Public-health researchers, officials and medical professionals rely on data to track outbreaks, advance research, and evaluate prospective treatments. One critical source of patient data comes from electronic health records (EHRs).  EHR data in the U.S. has traditionally been siloed within hospital IT systems, but the federal government and key healthcare stakeholders have recently ramped up

On 13 August 2019, the European Commission opened a call for expression of interest to relaunch the eHealth Stakeholder Group with a view to supporting the “digital transformation of healthcare in the EU”. The eHealth Stakeholder Group was first launched in 2012 and in its first iteration (between 2012 and 2015), contributed to the development

On 6 December 2018, the European Economic and Social Committee (EESC) published an opinion (“Opinion”) addressing the European Commission’s recent Communication on the digital transformation of health and care in the Digital Single Market (issued 25 April 2018).

The EESC is an advisory body of the European Union (“EU”) comprising representatives of workers’ and employers’

On 17 October, the UK Government’s Department of Health and Social Care (DHSC) published a policy paper entitled “The future of healthcare: our vision for digital, data and technology in health and care” (the Policy Paper). The Policy Paper outlines the DHSC’s vision to use technology across the health and

On 8 October, the European Medicines Agency (EMA) published a report (available here) setting out the progress it has made towards applying a common data model (CDM) in Europe. The EMA defines a CDM as “a mechanism by which raw data are standardized to a common structure, format and terminology independently from any particular study in order to allow a combined analysis across several databases/datasets”. The report follows an EMA-hosted workshop in December 2017 to examine the opportunities and challenges of developing a CDM.

The report acknowledges that the use of ‘Real World Data’ (RWD) (data relating to patient health status or the delivery of health care that is routinely collected from sources other than clinical trials) has become an increasingly common source of evidence to support drug development and regulatory decision-making for human medicines in Europe. However, Europe currently has no pan-European data network, despite the wealth of data generated through various national healthcare systems that provide access for all. Multi-database studies as currently performed are typically slow and still allow for substantial variability in the conduct of studies. Further, a growing number of innovative products no longer align with customary drug development pathways. This may create uncertainty about the data packages required for authorization, and tension between facilitating earlier access for patients with limited treatment options and the requirement for proactive, robust pharmacovigilance of medicines for wider clinical use across the product life cycle (the existing EMA Patient Registry Initiative addresses this need in part).
Continue Reading EMA publishes “A Common Data Model for Europe? – Why? Which? How?” Workshop Report

Designing data-driven products and services in compliance with privacy requirements can be a challenging process.  Technological innovation enables novel uses of personal data, and companies designing new data-driven products must navigate new, untested, and sometimes unclear requirements of privacy laws, including the General Data Protection Regulation (GDPR).  These challenges are often particularly acute for companies providing products and services leveraging artificial intelligence technologies, or operating with sensitive personal data, such as digital health products and services.

Recognising some of the above challenges, the Information Commissioner’s Office (ICO) has commenced a consultation on establishing a “regulatory sandbox”.  The first stage is a survey to gather market views on how such a regulatory sandbox may work (Survey).  Interested organisations have until 12 October to reply.

The key feature of the regulatory sandbox is to allow companies to test ideas, services and business models without risk of enforcement and in a manner that facilitates greater engagement between industry and the ICO as new products and services are being developed.

The regulatory sandbox model has been deployed in other areas, particularly in the financial services sector (see here), including by the Financial Conduct Authority in the UK (see here).

Potential benefits of the regulatory sandbox include reducing regulatory uncertainty, enabling more products to be brought to market, and reducing the time of doing so, while ensuring appropriate protections are in place (see the FCA’s report on its regulatory sandbox here for the impact it has had on the financial services sector, including lessons learned).

The ICO indicated earlier this year that it intends to launch the regulatory sandbox in 2019 and will focus on AI applications (see here).

Further details on the scope of the Survey are summarised below.


Continue Reading ICO consults on privacy “regulatory sandbox”

On 5 September, in response to the opportunities presented by data-driven innovations, apps, clinician decision support tools, electronic health care records and advances in technology such as artificial intelligence, the UK Government published a draft “Initial code of conduct for data-driven health and care technology” (Code) for consultation.  The Code is designed to be supplementary to the Data Ethics Framework, published by the Department for Digital, Culture, Media and Sport on 30 August, which guides appropriate data use in the public sector.  The Code demonstrates the UK Government’s willingness to support data sharing in order to take advantage of new technologies to improve outcomes for patients and accelerate medical breakthroughs, while balancing key privacy principles enshrined in the GDPR and emerging issues such as the validation and monitoring of algorithm-based technologies.  For parties considering data-driven digital health projects, the Code provides a framework to help conceptualise a commercial strategy before engaging with legal teams.

The Code contains:

  • a set of ten principles for safe and effective digital innovations; and
  • five commitments from Government to ensure the health and care system is ready and able to adopt new technologies at scale,

each of which is listed further below.

While the full text of the Code will be of interest to all those operating in the digital health space, the following points are of particular note:

  • the UK Government recognises the “immense promise” that data sharing has for improving the NHS and social care system as well as for developing new treatments and medical breakthroughs;
  • the UK Government is committed to the safe use of data to improve outcomes of patients;
  • the Code intends to provide the basis for the health and care system and suppliers of digital technology to enter into commercial terms in which the benefits of the partnerships between technology companies and health and care providers are shared fairly (see further below); and
  • given that artificial intelligence requires large datasets to function, two key challenges arise: (i) these datasets must be defined and structured in accordance with interoperable standards; and (ii) from an ethical and legal standpoint, people must be able to trust that data is used appropriately, safely and securely, as the benefits of data sharing rely upon public confidence in the appropriate and effective use of data.

The Code sets out a number of factors to consider before engaging with legal teams, to help define a commercial strategy for a data-driven digital health project.  These factors include: the scope of the project, term, value, compliance obligations and responsibilities, IP, liability and risk allocation, transparency, management of potential bias in algorithms, the ability of the NHS to add value, and the respective roles of the parties (which will require thinking beyond traditional research collaboration models).

Considering how value is created and realised is a key aspect of any data-driven digital health project.  The Code identifies a number of potential options: simple royalties, reduced payments for commercial products, equity shares in businesses, and improved datasets.  There is, however, no simple or single answer.  Members of Covington’s digital health group have advised on numerous data-driven collaborations in the healthcare sector.  Covington recently advised UK healthcare technology company Sensyne Health plc on pioneering strategic research and data processing agreements with three NHS Trust partners.  Financial returns generated by Sensyne Health are shared with its NHS Trust partners via equity ownership in Sensyne Health and a share of royalties (further details are available here).

The UK Government also intends to conduct a formal review of the regulatory framework and to assess the commercial models used in technology partnerships, in order to address issues such as bias, transparency, liability and accountability.

The UK Government is currently consulting on the Code (a questionnaire on the Code is available here) and intends to publish a final version of the Code in December.


Continue Reading UK Government publishes “Initial code of conduct for data-driven health and care technology” for consultation

Digital health solution providers, and users of digital health services, should take note of three recently launched EU public consultations in the digital health space, and may wish to make submissions to help shape the future of digital health initiatives in the EU.  The earliest deadline for submissions is 16 August 2017.

EU Commission

The UK Information Commissioner’s Office (“ICO”), which enforces data protection legislation in the UK, has ruled that the NHS Royal Free Foundation Trust (“Royal Free”), which manages a London hospital, failed to comply with the UK Data Protection Act 1998 when it provided 1.6 million patient records to Google DeepMind (“DeepMind”).  The ICO has required the Royal Free to sign an undertaking committing to changes to ensure it acts in line with the UK Data Protection Act.

On September 30, 2015, the Royal Free entered into an agreement with Google UK Limited (an affiliate of DeepMind) under which DeepMind would process approximately 1.6 million partial patient records, containing identifiable information on persons who had presented for treatment in the previous five years, together with data from the Royal Free’s existing electronic records system.  On November 18, 2015, DeepMind began processing patient records for clinical safety testing of a newly-developed platform to monitor and detect acute kidney injury, formalized into a mobile app called ‘Streams’.
Continue Reading ICO Rules UK Hospital-DeepMind Trial Failed to Comply with UK Data Protection Law