On February 9, 2021, the UK Government’s Department for Health and Social Care (“DHSC”) announced a review into the efficient and safe use of health data for research and analysis for the benefit of patients in the health sector (the “Review”). The DHSC encourages stakeholder feedback on the Review, which will be of particular interest to organisations that have, or seek to have, access to NHS patient data for research purposes.
The Federal Trade Commission (“FTC”) announced this month a proposed settlement against Flo Health, Inc. (“Flo”), the developer of a popular menstrual cycle and fertility-tracking application (the “Flo App”), resolving allegations that “the company shared the health information of users with outside data analytics providers after promising that such information would be kept private.” The proposed settlement requires Flo, among other things, to obtain review by an “independent third-party professional” of its privacy practices, obtain users’ consent before sharing their health information, alert users whose data was disclosed, and require third parties that previously received that data to destroy it.
This settlement marks the first instance in which the FTC has required a company to provide users with a notice of the privacy action brought by the FTC. Specifically, in its proposed settlement, the FTC requires Flo to “clearly and conspicuously” share with users a pre-written notice that explains what information Flo disclosed to third parties and describes the settlement with the FTC. According to the FTC’s announcement, the agency is “looking closely at whether developers of health apps are keeping their promises and handling sensitive health information responsibly.” Below is a discussion of the complaint and settlement, as well as takeaways from the case.
As described in the complaint, the Flo App “functions as an ovulation calendar, period tracker, and pregnancy guide.” The Flo App collects personal information from its users, including “name, email address, date of birth, place of residence, dates of menstrual cycles, when pregnancies started and ended, menstrual and pregnancy-related symptoms, weight, and temperature.” According to the complaint, Flo “repeatedly promised to protect [users’] information and keep it secret” in its privacy policies. The complaint further alleged that Flo disclosed health information to various third parties contrary to those policies. This health information included, without limitation, unique advertising identifiers, users’ intentions to become pregnant, and when users were menstruating. These allegations follow a 2019 report analyzing the data-sharing activity of a number of apps. Upon publication of the report, Flo App users complained to Flo about its privacy practices.
The FTC complaint alleged that Flo violated Section 5 of the FTC Act because its privacy practices amounted to “unfair or deceptive acts or practices in or affecting commerce.” The complaint also alleged that Flo violated the EU-U.S. Privacy Shield and the Swiss-U.S. Privacy Shield, which require notice, choice, and protection of personal data transferred to third parties.
The specific activities amounting to “unfair or deceptive acts or practices” alleged in the complaint include the following:
- Flo represented to users that the Flo App would not disclose, without consumers’ consent, their health information to third parties, but did in fact disclose such information to third parties without consent.
- Flo represented to users that it would disclose only non-personally identifiable information “like device identifiers” to third parties, but in fact also shared users’ identifiable health information.
- Flo represented to users that third parties could not use users’ personal information “for any other purpose except to provide services in connection with” the Flo App, but in fact set no limitations on how third parties could use such information. At least one third party used Flo App users’ information for its own research and development purposes.
- Flo certified to users that it adhered to the principles of the Privacy Shield frameworks, but in fact did not. Specifically, Flo did not adhere to the principles of notice (i.e., Flo did not properly inform users of the parties to whom their information would be disclosed and why), choice (i.e., Flo did not offer individuals the opportunity to choose whether their personal information could be used for purposes other than the ones originally stated), accountability for onward transfers (i.e., Flo did not ensure that third parties only processed user data for limited and specified purposes), and data integrity (i.e., Flo did not process personal information in a way that was compatible with the purposes for which it had been collected).
The Proposed Settlement
The FTC’s settlement with Flo requires, among other things, that the company obtain an independent review of its privacy practices and get Flo App users’ consent before sharing their health information. Key provisions of the settlement are discussed in turn below.
First, Flo must not misrepresent the purposes for which it or relevant third parties collect, maintain, use, or disclose users’ data; the extent to which consumers can control these data uses; its compliance with any privacy, security, or compliance program; and how it collects, maintains, uses, discloses, deletes, or protects users’ personal information.
Second, Flo must instruct any third party with which it has shared health information to destroy such information.
Third, Flo must “clearly and conspicuously” share with its Flo App users the “Notice” attached to the settlement, which informs consumers that Flo shared users’ health information with third parties and reached a settlement with the FTC. The Notice must be posted on Flo’s website and emailed to Flo App users.
The FTC published a description of the proposed settlement agreement in the Federal Register; the proposed settlement will be subject to public comment through March 1. After the comment period closes, the FTC will decide whether to finalize the proposed consent order.
Consumer health apps, in particular menstrual and ovulation trackers, are under growing scrutiny by federal and state regulators. In September 2020, California Attorney General Xavier Becerra, who is now President Joe Biden’s nominee for Secretary of Health and Human Services, announced a $250,000 settlement with Glow, Inc., resolving allegations that the fertility app had “expose[d] millions of women’s personal and medical information” and violated multiple laws, including California’s Confidentiality of Medical Information Act (“CMIA”). The FTC’s settlement with Flo further highlights the sensitivity of health data, even if that data is not protected under the federal Health Insurance Portability and Accountability Act of 1996 (“HIPAA”). While traditional health privacy frameworks, such as HIPAA, do not typically apply to digital health apps, the FTC’s Section 5 authority offers a path to regulation. Other states may pursue enforcement against digital health apps using their own consumer protection laws.
The FTC voted 5-0 in favor of the Flo settlement, but the two Democratic commissioners—Rohit Chopra (recently tapped by President Joe Biden to direct the Consumer Financial Protection Bureau) and Rebecca Kelly Slaughter (now Acting FTC Chair)—partially dissented. The two signaled their disappointment that the Commission did not go further in protecting consumers’ privacy, and argued that Flo’s conduct violated the federal Health Breach Notification Rule. This rule requires vendors of personal health records, which may include certain mobile health apps, to notify users and the FTC if there has been an unauthorized disclosure. “Although the FTC has advised mobile health apps to examine their obligations under the rule,” the dissenters wrote, “the FTC has never brought an action to enforce it.” The dissenters also called on Congress to provide more authority in the privacy space. Commissioner Noah Joshua Phillips issued a separate statement, disagreeing with the dissenters’ view on the Health Breach Notification Rule. He argued that requiring companies to issue notice every time an unauthorized disclosure occurs, particularly when there is no remedial action for consumers to take, “runs the risk of undermining consumer trust and needlessly overwhelming consumers.”
This settlement marks the first time the FTC has ordered a company to issue a specific notice to consumers about a privacy action brought against it. The FTC typically requires consumer notice in cases where consumers’ health or safety is at risk, where consumers would not be able to discover or determine the illegal behavior on their own, or where consumers have a financial or legal interest that needs to be protected. Here, the Flo action, including the requirement to issue the Notice specifically referencing the FTC’s privacy action against it, is another signal that regulators are increasingly focused on safeguarding the sensitive information that digital health apps collect and use.
On January 14, 2021, the United States Court of Appeals for the Fifth Circuit vacated a $4.3 million civil monetary penalty that the Office for Civil Rights (“OCR”) of the Department of Health and Human Services (“HHS”) imposed against the University of Texas M.D. Anderson Cancer Center (“M.D. Anderson”). OCR ordered the penalty in 2017 following an investigation into three data breaches suffered by M.D. Anderson in 2012 and 2013, finding that M.D. Anderson had violated the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) and the Health Information Technology for Economic and Clinical Health Act of 2009 (“HITECH Act”). The Court, however, held that the penalty was “arbitrary, capricious, and otherwise unlawful,” in part based on its interpretation of the HIPAA Rules.
The Court held that the HIPAA Security Rule does not mandate “bulletproof protection” of electronic protected health information (“ePHI”). Instead, the Court found that M.D. Anderson had adopted sufficient “mechanisms” to encrypt ePHI. It also held that the “passive loss of information” did not contravene certain of HIPAA’s restrictions on the disclosure of ePHI. Finally, the Court concluded that HHS’s penalty exceeded the statutory maximum and was inconsistent with other penalties imposed in similar situations. The Fifth Circuit’s decision may provide support for covered entities that seek to challenge penalties associated with HIPAA violations in the future and may invite HHS to consider revisions to its HIPAA enforcement regulations.
On January 6, 2021, the UK’s AI Council (an independent government advisory body) published its AI Roadmap (“Roadmap”). In addition to calling for a Public Interest Data Bill to ‘protect against automation and collective harms’, the Roadmap acknowledges the need to counteract public suspicion of AI and makes 16 recommendations, based on three main pillars, to guide the UK Government’s AI strategy.
On January 5, 2021, an amendment to the Health Information Technology for Economic and Clinical Health (“HITECH”) Act was signed into law. The amendment requires the U.S. Department of Health and Human Services (“HHS”) to “consider certain recognized security practices of covered entities and business associates when making certain determinations” regarding fines, audit results, or other remedies for resolving potential violations of the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”). For organizations subject to HIPAA, the amendment provides substantial incentives to establish or improve their cybersecurity programs. While it does not establish a complete safe harbor from HIPAA enforcement, the amendment does offer organizations a chance to mitigate financial penalties and other negative regulatory actions that may result from a data breach.
Specifically, the amendment instructs HHS to consider “whether the covered entity or business associate has adequately demonstrated that it had, for not less than the previous 12 months, recognized security practices in place that may”:
(1) mitigate fines imposed under section 1176 of the Social Security Act (as previously amended by the HITECH Act);
(2) result in an early and favorable termination of an audit conducted pursuant to section 13411 of the HITECH Act; and
(3) mitigate remedies that would otherwise be included in an agreement between a covered entity or business associate and HHS to resolve potential violations of the HIPAA Security Rule (subparts A and C of 45 C.F.R. part 164).
“Recognized security practices” are defined as “the standards, guidelines, best practices, methodologies, procedures, and processes developed under section 2(c)(15) of the National Institute of Standards and Technology [(“NIST”)] Act, the approaches promulgated under section 405(d) of the Cybersecurity Act of 2015, and other programs and processes that address cybersecurity and that are developed, recognized, or promulgated through regulations under other statutory authorities.” Notably, consistent with the approach of the HIPAA Security Rule, the amendment does not mandate the adoption of any particular standard and expressly allows covered entities and business associates to decide what recognized security practices are best suited for their organization, consistent with the requirements of the HIPAA Security Rule.
Finally, the amendment does not require the adoption of recognized security practices, specifically stating that covered entities and business associates will not face liability “for electing not to engage in the recognized security practices defined by this section.” However, as the healthcare industry is often the target of cyberattacks, organizations subject to HIPAA should strongly consider implementing a robust cybersecurity framework – not only as a defense to possible regulatory enforcement after a data breach occurs, but also to lower the risk of a data breach in the first place.
On December 23, 2020, the European Commission (the “Commission”) published its inception impact assessment (“Inception Impact Assessment”) of policy options for establishing a European Health Data Space (“EHDS”). The Inception Impact Assessment is open for consultation until February 3, 2021, encouraging “citizens and stakeholders” to “provide views on the Commission’s understanding of the current situation, problem and possible solutions”.
On 18 January 2021, the UK Parliamentary Office of Science and Technology (“POST”)* published its AI and Healthcare Research Briefing about the use of artificial intelligence (“AI”) in the UK healthcare system (the “Briefing”). The Briefing considers the potential impacts of AI on the cost and quality of healthcare, and the challenges posed by the wider adoption of AI, including safety, privacy and health inequalities.
The Briefing summarises the different possible applications of AI in healthcare settings, which raise unique considerations for healthcare providers. It notes that AI, developed through machine learning algorithms, is not yet widely used within the NHS, but some AI products are at various stages of trial and evaluation. The areas of healthcare identified by the Briefing as having the potential for AI to be incorporated include (among others): interpretation of medical imaging, planning patients’ treatment, and patient-facing applications such as voice assistants, smartphone apps and wearable devices.
In addition to releasing the new EU Cybersecurity Strategy before the holidays (see our post here), the Commission published a revised Directive on measures for a high common level of cybersecurity across the Union (“NIS2”) and a Directive on the resilience of critical entities (“Critical Entities Resilience Directive”). In this blog post, we summarize key points relating to NIS2, including more onerous security and incident reporting requirements; extending requirements to companies in the food, pharma, medical device, and chemical sectors, among others; and increased powers for regulators, including the ability to impose multi-million Euro fines.
The Commission is seeking feedback on NIS2 and the Critical Entities Resilience Directive, and recently extended its original deadline of early February to March 11, 2021 (responses can be submitted here and here).
On 17 December 2020, the Council of Europe’s* Ad hoc Committee on Artificial Intelligence (CAHAI) published a Feasibility Study (the “Study”) on Artificial Intelligence (AI) legal standards. The Study examines the feasibility and potential elements of a legal framework for the development and deployment of AI, based on the Council of Europe’s human rights standards. Its main conclusion is that current regulations do not suffice in creating the necessary legal certainty, trust, and level playing field needed to guide the development of AI. Accordingly, it proposes the development of a new legal framework for AI consisting of both binding and non-binding Council of Europe instruments.
The Study recognizes the major opportunities of AI systems to promote societal development and human rights. Alongside these opportunities, it also identifies the risks that AI could endanger rights protected by the European Convention on Human Rights (ECHR), as well as democracy and the rule of law. Examples of the risks to human rights cited in the Study include AI systems that undermine the right to equality and non-discrimination by perpetuating biases and stereotypes (e.g., in employment), and AI-driven surveillance and tracking applications that jeopardise individuals’ right to freedom of assembly and expression.
On January 6, 2021, the UK’s Department of Health and Social Care (“DHSC”) published “A Guide to Good Practice for Digital and Data-Driven Health Technologies” (the “Guidance”). The Guidance updates the DHSC’s “Code of Conduct for Data-Driven Health and Care Technologies” (the “Code”) (for further information on the Code see our earlier blog, here).
As with the Code, the Guidance is a valuable resource to help parties understand what the National Health Service (“NHS”) looks for when acquiring digital and data-driven technologies for use in health and care.