On December 23, 2020, the European Commission (the “Commission”) published its inception impact assessment (“Inception Impact Assessment”) of policy options for establishing a European Health Data Space (“EHDS”). The Inception Impact Assessment is open for consultation until February 3, 2021, encouraging “citizens and stakeholders” to “provide views on the Commission’s understanding of the current situation, problem and possible solutions”.
On 18 January 2021, the UK Parliamentary Office of Science and Technology (“POST”) published its AI and Healthcare Research Briefing about the use of artificial intelligence (“AI”) in the UK healthcare system (the “Briefing”). The Briefing considers the potential impacts of AI on the cost and quality of healthcare, and the challenges posed by its wider adoption, including safety, privacy and health inequalities.
The Briefing summarises the different possible applications of AI in healthcare settings, which raise unique considerations for healthcare providers. It notes that AI, developed through machine learning algorithms, is not yet widely used within the NHS, but that some AI products are at various stages of trial and evaluation. Areas of healthcare in which the Briefing identifies potential for AI include (among others): interpretation of medical imaging, planning patients’ treatment, and patient-facing applications such as voice assistants, smartphone apps and wearable devices.
In addition to releasing the new EU Cybersecurity Strategy before the holidays (see our post here), the Commission published a revised Directive on measures for a high common level of cybersecurity across the Union (“NIS2”) and a Directive on the resilience of critical entities (“Critical Entities Resilience Directive”). In this blog post, we summarize key points relating to NIS2, including more onerous security and incident reporting requirements; the extension of those requirements to companies in the food, pharma, medical device, and chemical sectors, among others; and increased powers for regulators, including the ability to impose multi-million Euro fines.
The Commission is seeking feedback on NIS2 and the Critical Entities Resilience Directive, and recently extended its original deadline of early February to March 11, 2021 (responses can be submitted here and here).
On 17 December 2020, the Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI) published a Feasibility Study (the “Study”) on Artificial Intelligence (AI) legal standards. The Study examines the feasibility and potential elements of a legal framework for the development and deployment of AI, based on the Council of Europe’s human rights standards. Its main conclusion is that current regulations are insufficient to create the legal certainty, trust, and level playing field needed to guide the development of AI. Accordingly, it proposes the development of a new legal framework for AI consisting of both binding and non-binding Council of Europe instruments.
The Study recognizes the major opportunities of AI systems to promote societal development and human rights. Alongside these opportunities, it also identifies the risk that AI could endanger rights protected by the European Convention on Human Rights (ECHR), as well as democracy and the rule of law. Examples of the risks to human rights cited in the Study include AI systems that undermine the right to equality and non-discrimination by perpetuating biases and stereotypes (e.g., in employment), and AI-driven surveillance and tracking applications that jeopardise individuals’ right to freedom of assembly and expression.
On January 6, 2021, the UK’s Department of Health and Social Care (“DHSC”) published “A Guide to Good Practice for Digital and Data-Driven Health Technologies” (the “Guidance”). The Guidance updates the DHSC’s “Code of Conduct for Data-Driven Health and Care Technologies” (the “Code”) (for further information on the Code see our earlier blog, here).
As with the Code, the Guidance is a valuable resource to help parties understand what the National Health Service (“NHS”) looks for when acquiring digital and data-driven technologies for use in health and care.
On December 10, 2020, the Office for Civil Rights (“OCR”) of the U.S. Department of Health and Human Services (“HHS”) issued a proposed rule to modify the Standards for the Privacy of Individually Identifiable Health Information (the “Privacy Rule”) promulgated under the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) and the Health Information Technology for Economic and Clinical Health Act of 2009 (“HITECH Act”). According to HHS’s announcement, the proposed rule would amend the Privacy Rule to “support individuals’ engagement in their care, remove barriers to coordinated care, and reduce regulatory burdens on the health care industry.” Public comments on the proposed rule are currently being accepted through February 12, 2021.
The proposed rule is part of HHS’s Regulatory Sprint to Coordinated Care, initiated pursuant to Secretary Alex Azar’s value-based transformation agenda, which seeks to “promote value-based care by examining federal regulations that impede efforts among health care providers and health plans to better coordinate care for patients.” Throughout the Privacy Rule, HHS sought to protect health information while also permitting information sharing for certain beneficial purposes. However, stakeholders have questioned whether the Privacy Rule strikes the appropriate balance in certain situations.
Proposed modifications to the HIPAA Privacy Rule include strengthening individuals’ right to access their protected health information (“PHI”), including electronic PHI; facilitating greater family involvement in care for individuals dealing with health crises or emergencies; and allowing providers more flexibility to disclose PHI when harm to a patient is “serious and reasonably foreseeable,” such as during the opioid crisis or COVID-19 public health emergency. Importantly, multiple provisions of the proposed rule, discussed in greater detail below, address electronic health records (“EHRs”) and personal health applications.
On 25 November 2020, the European Commission published a proposal for a Regulation on European Data Governance (“Data Governance Act”). The proposed Act aims to facilitate data sharing across the EU and between sectors, and is one of the deliverables included in the European Strategy for Data, adopted in February 2020. (See our previous blog here for a summary of the Commission’s European Strategy for Data.) The press release accompanying the proposed Act states that more specific proposals on European data spaces are expected to follow in 2021, and will be complemented by a Data Act to foster business-to-business and business-to-government data sharing.
The proposed Data Governance Act sets out rules relating to the following:
- Conditions for reuse of public sector data that is subject to existing protections, such as commercial confidentiality, intellectual property, or data protection;
- Obligations on “providers of data sharing services,” defined as entities that provide various types of data intermediary services;
- Introduction of the concept of “data altruism” and the possibility for organisations to register as a “Data Altruism Organisation recognised in the Union”; and
- Establishment of a “European Data Innovation Board,” a new formal expert group chaired by the Commission.
On 11 November 2020, the European Data Protection Board (“EDPB”) issued two draft recommendations relating to the rules on how organizations may lawfully transfer personal data from the EU to countries outside the EU (“third countries”). These draft recommendations, which are non-final and open for public consultation until 30 November 2020, follow the EU Court of Justice (“CJEU”) decision in Case C-311/18 (“Schrems II”). (For a more in-depth summary of the CJEU decision, please see our blog post here and our audiocast here. The EDPB also published on 24 July 2020 FAQs on the Schrems II decision here).
The two recommendations adopted by the EDPB are:
- Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data (“Draft Recommendations on Supplementary Measures”); and
- Recommendations 02/2020 on the European Essential Guarantees for surveillance measures (“Recommendations on EEG”).
On September 30, 2020, the Cybersecurity and Infrastructure Security Agency (“CISA”) and the Multi-State Information Sharing and Analysis Center (“MS-ISAC”) released a joint guide synthesizing best practices to prevent and respond to ransomware. This guide was published the day before OFAC and FinCEN released their coordinated guidance on ransomware attacks that we previously summarized here.
Ransomware is malware that encrypts data on a victim’s device, thus rendering the data inaccessible, until a ransom is paid in exchange for decryption. Both the nature and scope of ransomware incidents have become “more destructive and impactful” in recent years. In particular, tactics of malicious actors include threatening to release stolen data or publicly naming victims as part of the extortion. Accordingly, the guide encourages organizations to take proactive efforts to manage risks posed by ransomware and recommends a coordinated response to mitigate its impact.
The guide is divided into two parts. The first addresses best practices for ransomware prevention, focusing on the common infection vectors—misconfigurations, internet-facing vulnerabilities, phishing, precursor malware infection, third-party sources, and managed service providers. For example, threat actors often gain access to an organization’s network through exposed or insecure remote desktop services. Employing best practices for use of remote desktop protocol (“RDP”), closing unused RDP ports on firewalls, and tracking RDP login attempts are a few of the recommended risk-mitigating exercises. This part also outlines general best practices for cyber hygiene, including employing multi-factor authentication, implementing the principle of least privilege, and retaining and securing logs. These actions mitigate not only the risk of ransomware but other cybersecurity threats as well.
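As a practical illustration of the exposed-RDP point, a simple inventory check can flag hosts that accept TCP connections on the default RDP port. The sketch below is illustrative only (the host addresses are hypothetical placeholders, and a real assessment would use the organization's asset inventory and authorized scanning tools), but it shows the basic idea of verifying that unused RDP ports are actually closed:

```python
import socket

RDP_PORT = 3389  # default Remote Desktop Protocol port


def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception
        return sock.connect_ex((host, port)) == 0


# Hypothetical inventory of hosts to check (TEST-NET addresses, not real systems)
for host in ["203.0.113.10", "203.0.113.11"]:
    if is_port_open(host, RDP_PORT):
        print(f"WARNING: {host} exposes RDP on port {RDP_PORT}")
```

A check like this complements, rather than replaces, firewall rules: the firewall closes the port, and the periodic scan confirms the rule is still in effect.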
The second part of the guide focuses on responding to ransomware in three stages.
- Detection and Analysis. Immediate isolation and triage of impacted systems are the priorities. Because threat actors may monitor the organization’s activity or communications following intrusion, the guide recommends taking steps to avoid alerting the threat actor that it has been detected—such as communicating by phone rather than email. The guide recommends not paying the ransom, because such payments will not ensure that data is decrypted or that the system is no longer compromised.
- Containment and Eradication. Depending on the ransomware variant, consulting with federal law enforcement or other trusted entities may be worthwhile, as security researchers may have already broken the encryption algorithms or have published information on ransomware binaries and associated registry values. Otherwise, a methodical approach to identifying, containing, and removing any compromise to the system will be critical.
- Recovery and Post-Incident Activity. Documenting the lessons learned from the ransomware will help inform future policies and procedures. Sharing this information can also benefit others in the community.
For organizations seeking further information on ransomware, the guide offers a list of resources from CISA and MS-ISAC. These include regional CISA Cybersecurity Advisors, who advise on best practices to manage cyber risk.
Lawyers who support organizations that face ransomware threats should be familiar with these best practices for ransomware prevention and response, and initiate discussions about how their organizations can best prepare for and meet the threats posed by ransomware. This is increasingly critical at a time when governmental regulators are warning private companies that payments to ransomware actors can give rise to legal risks, including sanctions risk and regulatory obligations under the Bank Secrecy Act.
California Attorney General Xavier Becerra (“AG”) announced in September a settlement against Glow, Inc., resolving allegations that the fertility app had “expose[d] millions of women’s personal and medical information.” In the complaint, the AG alleged violations of certain state consumer protection and privacy laws, stemming from privacy and security “failures” in Glow’s mobile application (the “Glow App”). The settlement, which remains subject to court approval, requires Glow to comply with relevant consumer protection and privacy laws (including California’s medical privacy law), mandates “a first-ever injunctive term that requires Glow to consider how privacy or security lapses may uniquely impact women,” and imposes a $250,000 civil penalty.
According to the AG’s announcement, the “settlement is a wake up call not just for Glow, Inc., but for every app maker that handles sensitive private data.” Below is a discussion of the complaint and settlement, as well as takeaways from the case.
As described in the complaint, the Glow App is “marketed as an ovulation and fertility tracker” and “collects and stores deeply sensitive personal and medical information related to a user’s menstruation, sexual activity, and fertility.” The types of information collected include medications, fertility test results, medical appointments, medical records, and ovulation-cycle calculations, as well as “intimate details of sexual experiences and efforts to become pregnant.” One feature of the Glow App is its “Partner Connection” offering, which “allows a Glow App user to link to a partner to share information.”
As alleged, Glow violated multiple laws, including California’s Confidentiality of Medical Information Act (“CMIA”). The CMIA regulates, in relevant part, “providers of health care” that collect and use “medical information,” defined as “individually identifiable information . . . in possession of or derived from a provider of health care, health care service plan, pharmaceutical company, or contractor regarding a patient’s medical history, mental or physical condition, or treatment.” According to the complaint, Glow is a “provider of health care” under CMIA because it “offer[s] software to consumers that is designed to maintain medical information for the purposes of allowing users to manage their information or for the diagnosis, treatment, or management of a medical condition” (citing Cal. Civ. Code 56.06(b)). The complaint also alleges that Glow’s privacy and security practices violated California’s Unfair Competition Law (“UCL”) and False Advertising Law (“FAL”).
The specific activities alleged to have triggered these violations of law from 2013 to 2016 include the following:
- The Partner Connect feature “automatically granted” linking requests and “immediately shared” certain “sensitive information” without obtaining authorization from the Glow user.
- The Partner Connect feature failed to verify the legitimacy of the person with whom the information was being shared.
- The Glow App’s password change functionality asked for “old passwords” without authenticating such passwords on the back-end.
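The password-change allegation illustrates a common server-side flaw: collecting a user's old password in the interface but never verifying it before accepting the new one. The sketch below shows what back-end verification looks like in principle; it is a minimal, hypothetical example (the `UserStore` class and its methods are invented for illustration and have no connection to Glow's actual code):

```python
import hashlib
import hmac
import os


def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256; the iteration count here is illustrative
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)


class UserStore:
    """Minimal in-memory store standing in for a real user database."""

    def __init__(self):
        self._users = {}  # username -> (salt, password_hash)

    def create_user(self, username: str, password: str) -> None:
        salt = os.urandom(16)
        self._users[username] = (salt, hash_password(password, salt))

    def change_password(self, username: str, old_password: str,
                        new_password: str) -> bool:
        """Update the stored hash only if the supplied old password verifies."""
        salt, stored_hash = self._users[username]
        # Constant-time comparison of the stored hash against the claimed old password
        if not hmac.compare_digest(stored_hash,
                                   hash_password(old_password, salt)):
            return False  # old password did not verify; reject the change
        new_salt = os.urandom(16)
        self._users[username] = (new_salt,
                                 hash_password(new_password, new_salt))
        return True
```

The key point is that the check happens on the server against the stored credential; merely prompting for the old password in the app, without this back-end comparison, provides no protection against account takeover.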
The AG’s settlement with Glow (1) requires Glow to comply with relevant consumer protection and privacy laws, (2) obligates Glow to consider how “privacy or security lapses may uniquely impact women,” and (3) imposes a $250,000 civil penalty. The settlement remains subject to court approval. The requirements of the settlement are discussed in turn.
First, the settlement requires Glow to comply with consumer protection and privacy laws, including the CMIA. To do so, Glow must implement an information security program “to protect the security, integrity, availability, and confidentiality” of “personal information,” “medical information,” and “sensitive personal information” that Glow “collects, stores, processes, uses, transmits, and maintains.” “Personal information” has the meaning it is given under California’s Data Security Law (Cal. Civ. Code 1798.81.5), and “medical information” has the meaning it is given under CMIA with the clarification that such information may be “enter[ed] or upload[ed] . . . into a mobile application or online service” by a consumer. “Sensitive personal information” refers to information that is not “medical information” or “personal information” but is individually identifiable information that describes a consumer’s “sexual activity, sexual health, and reproductive health.”
Under the settlement, Glow’s information security program is required to protect the specified categories of information by taking measures such as: (i) preventing unauthorized access, (ii) preventing unauthorized disclosure, (iii) imposing a two-step authentication process for password changes, (iv) providing annual employee training on the information security practices, (v) implementing procedures for vulnerability patching, (vi) incorporating privacy-by-design principles and security-by-design principles when creating new Glow App features, and (vii) establishing a point of contact at Glow to address security issues.
Second, the settlement requires Glow, for two years after implementing its information security program, to complete annual privacy and security risk assessments addressing Glow’s efforts to comply with applicable privacy and security laws. The reports must be submitted to the AG’s office.
Notably, the settlement requires the privacy assessment to “(i) consider online risks that women face, or could face, including gender-based risks, as a result of privacy or security lapses while using GLOW mobile applications or online services; (ii) consider the impact of any such risks, and (iii) document GLOW’s efforts to mitigate any such risks.” As noted, the AG’s announcement of the settlement refers to this requirement as a “first-ever injunctive term” that requires a company to consider the unique impact of privacy and security lapses on women.
Third, the settlement imposes a civil penalty of $250,000.
The settlement highlights the sensitivity of health data, even if that data is not protected under the federal Health Insurance Portability and Accountability Act of 1996 (“HIPAA”). Notably, the AG’s announcement asserts, “[w]hen you meet with your doctor or healthcare provider in person, you know that your sensitive information is protected. It should be no different when you use healthcare apps over the internet.”
The Glow complaint alleges that Glow is a “provider of health care” for the purposes of CMIA because it “offer[s] software to consumers that is designed to maintain medical information for the purposes of allowing its users to manage their information or for the diagnosis, treatment, or management of a medical condition. Specifically, the Glow app is designed for the user to store, email, and print information relating to their reproductive health such as ovulation and menstrual cycles, and/or for the diagnosis, treatment, or management of users seeking to become pregnant or treat infertility.”
The settlement also states that health information may be “medical information” for the purposes of the CMIA “irrespective of how the information is transmitted,” and thus may include information that is “manually enter[ed] or upload[ed] . . . into a mobile application or online service.”
This settlement follows other recent health and medical privacy developments in California. In early September, the California legislature passed AB 173, creating a new healthcare-related exemption under the California Consumer Privacy Act of 2018 (discussed here). Although the legislature also passed SB 980, the Genetic Information Privacy Act (“GIPA”) (discussed here), Governor Gavin Newsom recently vetoed the bill. GIPA would have imposed certain privacy and security obligations on direct-to-consumer genetic testing companies; the Governor’s veto of the bill cited potential implications for research related to COVID-19. Another recent development is the AG’s announcement of an $8.69 million settlement against Anthem Inc., resolving allegations that the health insurer violated state law and HIPAA.