On Wednesday, October 6th, Governor Gavin Newsom signed SB 41, the Genetic Information Privacy Act, which expands genetic privacy protections for consumers in California, including those interacting with direct-to-consumer (“DTC”) genetic testing companies. In a recent Covington Digital Health blog post, our colleagues discussed SB 41 and the growing patchwork of state genetic privacy laws across the United States. Read the post here.
Last Friday, October 1, the Protecting DNA Privacy Act (HB 833), a new genetic privacy law, went into effect in Florida, establishing four new crimes related to the unlawful use of another person’s DNA. While the criminal penalties in HB 833 are notable, Florida is not alone in its focus on increased genetic privacy protections. A growing number of states, including Utah, Arizona, and California, have begun developing a web of genetic privacy protections to fill gaps in federal and other state legislation, often focused on the privacy practices of direct-to-consumer (“DTC”) genetic testing companies. While some processing of genetic information is covered by federal law, the existing patchwork of federal genetic privacy protections does not clearly cover all forms of genetic testing, including DTC genetic tests.
Florida’s Protecting DNA Privacy Act
HB 833 was introduced in the Florida House of Representatives in February 2021 and signed by the governor in June. HB 833 applies to DNA samples collected from a person in Florida, and regulates any person’s use, retention, disclosure, or transfer of another person’s DNA samples or analysis. HB 833 amended Florida’s previous genetic privacy law, s. 760.40, F.S., to require that a person from whom DNA is extracted give “express consent” for a specified use of their genetic information. Under the previous law, analyzing a person’s DNA without their informed consent was a first-degree misdemeanor; under HB 833, however, unlawful use may be a felony, depending on the provision of the law violated. Additionally, HB 833 states that the genetic information of the person from whom it is extracted is the “exclusive property” of that person to control. While HB 833 does impose notable criminal penalties for those who violate it, it contains a number of exceptions (e.g., criminal prosecution or other legal processes, medical diagnosis or treatment, or conducting or preparing research subject to federal law, including the Common Rule and the Health Insurance Portability and Accountability Act (“HIPAA”)).
HB 833 is not the only change to genetic privacy protections recently made in Florida. In July 2020, Florida enacted HB 1189, which extended existing protections barring health insurers’ use of genetic information to long-term care and life insurers, including those that issue policies with disability insurance. Specifically, HB 1189 prohibits these insurers from canceling, limiting, or denying coverage, or setting different premium rates, based on genetic information. Further, HB 1189 bars these insurers from requiring or soliciting genetic information or test results, or from using a consumer’s decision as to whether to take any actions related to genetic testing, “for any insurance purpose.”
Additional DTC Genetic Privacy Laws and Bills
Earlier this year, Utah enacted SB 227, the Genetic Information Privacy Act, which imposes restrictions on DTC genetic testing companies, requiring specific privacy notices, security processes to protect consumer data, and the ability of a consumer to access and delete their own personal genetic data. Similar to Florida’s HB 833, Utah’s SB 227 contains a requirement that DTC genetic testing companies obtain express consent for the collection, use, or disclosure of consumer genetic data. Additionally, SB 227 specifically creates data de-identification requirements, including that the company in possession of the data impose specific measures to ensure data cannot be re-identified and “enters into legally enforceable contractual obligation that prohibits a recipient of the data from attempting to reidentify the data.”
Arizona also recently enacted HB 2069, the Genetic Information Privacy Act, which became effective last week on September 29. HB 2069 also focuses on DTC genetic testing companies and is similar to Utah’s SB 227 in many respects (e.g., initial consent must be obtained to collect and use genetic data, followed by certain separate express consents for purposes beyond the initial use), but not all (e.g., the standard for de-identifying genetic data).
The California state legislature has passed SB 41, its own Genetic Information Privacy Act, which contains many of the same consent, privacy, and security mechanisms present in the Utah and Arizona laws. The bill is currently sitting on the governor’s desk for signature. SB 41 creates its own de-identification standard similar to that created in Utah’s SB 227. Additionally, SB 41 requires a DTC genetic testing company to comply with a consumer’s revocation of consent and to destroy the consumer’s biological sample within 30 days of that revocation. SB 41 is almost identical to a bill vetoed by the governor last year due to concerns over interference with COVID-19 test result reporting to public health authorities. However, SB 41 attempts to address the governor’s concerns by providing a carve-out for tests to diagnose a specific disease, as long as genetic information obtained through the diagnostic test is treated as medical or protected health information.
Federal Genetic Privacy Landscape and Efforts
Current federal genetic privacy protections stem from several laws, including HIPAA, the Genetic Information Nondiscrimination Act of 2008, and the Federal Trade Commission’s ability to bring actions against “unfair” or “deceptive” business practices. However, these laws do not cover all forms of genetic testing that a consumer may engage with, including DTC genetic tests. There have been recent attempts to pass federal legislation to protect Americans’ personal health data. In January 2021, Senators Amy Klobuchar and Lisa Murkowski introduced S.24, the Protecting Personal Health Data Act, which aims to broadly protect personal health data not covered by HIPAA. Under S.24, “personal health data” includes “genetic information . . . that relates to past, present, or future physical or mental health or condition of an individual that identifies the individual or with respect to which there is a reasonable basis to believe that the information can be used to identify the individual,” and the bill states that DTC genetic testing services are covered as “services” under the bill. To date, however, S.24 has been referred to the U.S. Senate Committee on Health, Education, Labor, and Pensions but has not otherwise advanced.
On September 15, 2021, CMS published a proposed rule to repeal the Medicare Coverage of Innovative Technology (MCIT) and Definition of “Reasonable and Necessary” Final Rule (“MCIT/RN Rule”), which was published on January 14, 2021 and was set to take effect on December 15, 2021. The MCIT/RN Rule would have created a pathway to provide nationwide Medicare coverage for medical devices simultaneously with a device’s receipt of market authorization under FDA’s Breakthrough Devices Program; Medicare coverage would have lasted for a period of four years, after which a breakthrough device would either be covered through a National Coverage Determination or at the local level at Medicare Administrative Contractor discretion. The MCIT/RN Rule was intended to address concerns that delay or uncertainty around Medicare coverage hampered beneficiary access to innovative technologies. Digital health technologies that are eligible for breakthrough designation and fall within a Medicare benefit category would have been eligible for the MCIT pathway.
In deciding to repeal the MCIT/RN Rule, CMS identified that FDA and CMS are guided by different statutory standards: FDA must determine whether a device is safe and effective, while CMS must determine whether a device is reasonable and necessary for the diagnosis or treatment of illness or injury. CMS explained that accelerated coverage for breakthrough devices could result in coverage of devices without adequate evidence that they are reasonable and necessary. In particular, CMS noted its concern that FDA regulations do not require clinical studies to include Medicare beneficiaries and, as a result, MCIT might result in coverage of devices that do not have data demonstrating reasonableness and necessity for Medicare patients.
More information about the proposed repeal can be found here: LINK
The Medicines & Healthcare products Regulatory Agency (“MHRA”) has published a “Consultation on the future regulation of medical devices in the United Kingdom” (the “Consultation”), which will run until 25 November 2021. The Consultation sets out proposed changes to the UK medical device regulatory framework with the aim to “develop a world-leading future regime for medical devices that prioritises patient safety while fostering innovation.”
Separately, the MHRA has published a work programme on software and AI as a medical device to deliver a regulatory framework that ensures the UK is the home of responsible innovation for medical device software. Any legislative change proposed by the work programme will build upon the wider reforms to medical device regulation being consulted upon as part of the Consultation.
The MHRA intends that any amendments to the UK medical device framework will come into force in July 2023. This aligns with the date when UKCA marking will become mandatory in the UK and when EU CE marks will no longer be recognized. The MHRA has made clear that it will provide adequate transition periods before adopting any new requirements.
All interested parties are encouraged to contribute to shaping the future regulation of medical devices in the UK by responding to the MHRA’s consultation before the deadline (25 November 2021).
The MHRA’s overarching objectives are to develop a regime for medical devices that enables:
- “Improved patient and public safety;
- Greater transparency of regulatory decision making and medical device information;
- Close alignment with international best practice; and
- More flexible, responsive and proportionate regulation of medical devices.” (emphasis added)
The Consultation sets out a proposal for a future UK-wide regime to regulate medical devices, which would run in parallel to existing or future EU rules. However, the consultation acknowledges that the MHRA seeks “…greater alignment with … international regimes rather than bringing in higher regulatory burdens.”
The Consultation sets out four “significant areas” that the new regime will focus on, namely:
- “creating new access pathways to support innovations
- a unique, innovative, and ambitious framework for regulating software and artificial intelligence as medical devices
- reforming IVD regulation
- becoming a sustainability pioneer – through safe reuse and remanufacture”
The MHRA has further split these areas into 15 Chapters, which cover: (1) the scope of the regulation; (2) classification; (3) economic operators; (4) registration and UDI; (5) approved bodies; (6) conformity assessments; (7) clinical investigations/performance studies; (8) post-market surveillance, vigilance and market surveillance; (9) in vitro diagnostic medical devices; (10) software medical devices; (11) implantable devices; (12) other product specific changes; (13) environmental sustainability and public health impacts; (14) routes to market; and (15) transitional arrangements.
For each of these chapters, the Consultation indicates how the MHRA proposes to update the UK regulatory framework in line with the overarching objectives detailed above. Many of the principles appear to align with those in the EU Medical Devices Regulation 2017/745 (“MDR”) and EU IVD Medical Devices Regulation (EU) 2017/746 (“IVDR”), which is not surprising given the UK was a key player in the development of the MDR and IVDR. It is clear, however, that the UK intends for broader alignment with international standards (e.g. the IMDRF) rather than simply aligning with the EU regulatory framework.
Impact on Software and AI
The MHRA acknowledges that software and AI are developing fast and play an “increasingly prominent role within health systems,” and that UK device regulation needs to be updated both to protect patient safety and to keep pace with technological advances.
Chapter 10 of the Consultation sets out proposed changes for “Software as a Medical Device (SaMD), including AI as a medical device (AIaMD).” The Consultation provides a detailed overview of changes, including defining software, introducing requirements for persons selling SaMD via electronic means, adopting the IMDRF Risk Categorization for SaMD and defining specific requirements for AIaMD (amongst others).
Separately, the MHRA published details of an extensive work programme to inform regulatory changes for software and AI devices. These aim to ensure that:
1. “The requirements for software and AI as a medical device provide a high degree of assurance that these devices are acceptably safe and function as intended, thereby protecting patients and public
2. That the requirements are clear, supported by both clarificatory guidance and streamlined processes that work for software, as well as bolstered with the tools to demonstrate compliance, for instance, via the designation of standards
3. That friction is taken out of the market by working with key partners such as the National Institute for Health and Care Excellence and NHSX to align, de-duplicate, and combine requirements, ultimately providing a joined-up offer for digital health within the UK.” (emphasis added)
The programme includes 11 work packages over two workstreams. The work packages relate to: (1) qualification; (2) classification; (3) pre-market; (4) post-market; (5) cyber secure medical devices; (6) innovative access; (7) SaMD airlock; (8) mobile health and apps; (9) AI rigour; (10) AI interpretability; and (11) AI adaptivity.
The MHRA plans to deliver these work packages between autumn 2021 and summer 2023. It is anticipated that much of the reform from the work packages will be in the form of clarificatory guidance, standards, or processes rather than secondary legislation. Any legislative change proposed by any work package for software/AI will build upon wider reforms to medical device regulation being consulted upon as a part of the Consultation.
On September 15, the Federal Trade Commission (“FTC”) adopted, on a 3-2 party-line vote, a policy statement that takes a broad view of which health apps and connected devices are subject to the FTC’s Health Breach Notification Rule (the “Rule”) and what triggers the Rule’s notification requirement.
The Rule was promulgated in 2009 under the Health Information Technology for Economic and Clinical Health (“HITECH”) Act. Under the Rule, vendors of personal health records that are not otherwise regulated under the Health Insurance Portability and Accountability Act (“HIPAA”) are required to notify individuals, the FTC, and, in some cases, the media following a breach involving unsecured identifiable health information. 16 C.F.R. §§ 318.3, 318.5. Third-party service providers also are required to notify covered vendors of any breach. 16 C.F.R. § 318.3.
On August 23, 2021, the UK Government published its report entitled “Harnessing technology for the long-term sustainability of the UK’s healthcare system” (the “Report”). The Report calls for system-wide adoption of technology in the UK health system to enable transformative change that will benefit the health and wellbeing of the UK and promote economic growth. However, the Report cautions that technology alone cannot overcome the inequalities that lead to disparities in health outcomes, and that digital tools for health should be accessible to all or risk exacerbating health inequalities as a result of a “digital divide.” The Report notes how the COVID-19 pandemic has both exposed the limitations of the current system and highlighted the capability of the UK National Health Service (“NHS”) to respond with flexibility and agility. The Report also makes several recommendations to the UK Government, including to set up “Demonstrators” to test the system-wide application of healthcare technologies.
The Report arrives ahead of the expected publication of the UK Government’s review into the use of health data for research and analysis (see our earlier blog here), and outlines the opportunities presented by technology in the context of public healthcare systems.
The International Coalition of Medicines Regulatory Authorities (“ICMRA”) has published a report on the use of artificial intelligence (“AI”) to develop medicines (the “AI Report”) that provides a series of recommendations on how regulators and stakeholders can address challenges posed by AI. The ICMRA notes that there are numerous opportunities to apply AI to medicines development, but that AI poses a number of challenges to existing regulatory frameworks. The AI Report discusses these opportunities and challenges in detail based on several case studies, and provides a set of recommendations for implementation by the ICMRA and its member authorities, which include the European Medicines Agency (the “EMA”), the U.S. Food and Drug Administration, and the World Health Organisation. Based on the AI Report, we expect to see an increased focus on adapting regulatory frameworks to deal with AI products going forward, at both the international and national level.
Legislation that would amend California’s Confidentiality of Medical Information Act (“CMIA”) is working its way through California’s Senate and passed in the Senate Health Committee earlier this week. The proposed bill passed in the state’s Assembly back in April. Introduced by Democratic California Assemblymember Edwin Chau, who sits on the Privacy and Consumer Protection Committee, the proposed legislation (AB 1436) expands the definition of “provider of health care.” Under the CMIA, providers of health care are subject to various obligations, including provisions that restrict the disclosure of medical information without a prior valid authorization, subject to certain exceptions.
On June 22, 2021, Congressional leaders Diana DeGette (D-CO) and Fred Upton (R-MI) released a bipartisan follow-up to the 2016 21st Century Cures Act. “Cures 2.0”—a “discussion draft” at this stage—is intended to build upon the Cures Act. The draft lays out several notable policies related to digital health, real-world data/evidence (RWD/E), and telehealth, among other provisions:
- Sec. 301: Report on Collaboration and Alignment in Regulating Digital Health Technologies. This provision requires FDA to submit a report to Congress on the efforts to ensure collaboration and alignment across FDA offices with respect to the regulation of digital health technologies. The report must include recommendations on topics such as the use of digital endpoints for regulatory review, the use of digital health technologies in patient-focused development of products, and the use and validation of digital health technology tools (e.g., wearable devices, virtual reality headsets, artificial intelligence-/machine learning-based analytics, cloud services, etc.). The report also must describe how FDA coordinates with foreign regulators to ensure harmonized regulation and use of such digital health technologies.
- Sec. 302: Grants for Novel Trial Designs and Other Innovations in Drug Development. Sec. 302 directs FDA to award grants for “incorporating complex adaptive and other novel trial designs into clinical protocols and applications for drugs pursuant to an exemption for investigational use under section 505(i) of the Federal Food, Drug, and Cosmetic Act . . . or section 351(a) of the Public Health Service Act” and “the collection of patient experience data with respect to drugs and the use of such data and related information in drug development.” In awarding grants, the provision directs FDA to prioritize the incorporation of digital health technologies and RWE in drug development.
- Sec. 304: Increasing Use of Real-World Evidence. This section builds on FDA’s mandate in the Cures Act to establish an RWE Program by requiring (1) FDA to issue guidance on the use of RWE in evaluating the safety and effectiveness of drugs approved pursuant to certain expedited pathways; (2) HHS to identify and implement approaches for using RWE; and (3) HHS to establish an RWE Task Force to coordinate the programs and activities of the Department with regard to the collection and use of RWE. The RWE Task Force is required to develop and periodically update recommendations on ways to encourage patients to engage in the generation of RWE and to participate in post-approval clinical trials for the collection of RWE.
- Sec. 402: Strategies to Increase Access to Telehealth Under Medicaid and Children’s Health Insurance Program. This provision requires HHS to provide guidance to states on strategies for facilitating access to telehealth under the Medicaid and Children’s Health Insurance programs. This provision also requires studies to be conducted evaluating the impact of telehealth and collaboration among agencies with respect to telehealth services.
Cures 2.0 demonstrates a recognition of the importance of evolving digital health and health data analytics in medical innovation and the delivery of healthcare. Interested stakeholders should consider providing comments to Reps. DeGette and Upton. Enactment of Cures 2.0, whether as standalone legislation or as part of next year’s FDA medical product user fee reauthorization, would create new opportunities for FDA and HHS to take additional steps to advance digital health policies and initiatives.
In April 2021, the European Commission released its proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the “Regulation”), which would establish rules on the development, placing on the market, and use of artificial intelligence systems (“AI systems”) across the EU. The proposal, comprising 85 articles and nine annexes, is part of a wider package of Commission initiatives aimed at positioning the EU as a world leader in trustworthy and ethical AI and technological innovation.
The Commission’s objectives with the Regulation are twofold: to promote the development of AI technologies and harness their potential benefits, while also protecting individuals against potential threats to their health, safety, and fundamental rights posed by AI systems. To that end, the Commission proposal focuses primarily on AI systems identified as “high-risk,” but it also prohibits three AI practices and imposes transparency obligations on providers of certain non-high-risk AI systems. Notably, the Regulation would impose significant administrative costs on providers of high-risk AI systems, estimated at around 10 percent of the systems’ underlying value, reflecting compliance, oversight, and verification costs. This blog highlights several key aspects of the proposal.