As we kick off 2022, several recent developments from FDA suggest that this year could be pivotal for the Agency’s digital health priorities.  From new FDA offices and artificial intelligence guidance, to FDA’s user fee commitments and must-pass legislation in Congress, this post outlines five key issues to watch in 2022 related to FDA and digital health.  For all of these issues, stakeholders should be mindful of areas where digital health can help address some of the health disparities highlighted by the pandemic (e.g., ability to reach more clinical trial participants using wearables and other technologies, use of real-world evidence to better understand treatment effects in underrepresented populations, at-home software and diagnostic solutions).

1.         FDA’s Digital Transformation

Significant work continues within the Agency to advance digital health priorities, including both organizational and regulatory policy changes.

As an organization, FDA continues to evolve in an effort to keep pace with trends in digital health and data analytics.  For example, on September 15, 2021, FDA announced a new Office of Digital Transformation (ODT), which is tasked with advancing FDA’s overarching technology and data modernization efforts.  ODT sits in the FDA Office of the Commissioner and encompasses the Agency’s information technology, data management, and cybersecurity functions.  The formation of ODT follows two years of modernization efforts, including the 2019 Technology Modernization Action Plan and the 2021 Data Modernization Action Plan.  FDA recently named Vid Desai as the Director of ODT, and the Agency’s FY 2022 budget request included funding to support these data modernization efforts, further demonstrating the commitment to these institutional changes.  At the center level, FDA’s Center for Devices and Radiological Health (CDRH) launched the Digital Health Center of Excellence (DHCoE) in 2020 to help coordinate digital health projects at CDRH and enhance coordination with other agency centers.  On the data analytics front, the Real-World Evidence (RWE) Subcommittee composed of CDER and CBER officials continues to advance the use of real-world data/evidence (RWD/E) in agency decision making, and in early January 2022, FDA highlighted proposed changes to CBER’s Office of Biostatistics and Epidemiology aimed at positioning CBER to “advance real-world evidence priorities for biologics,” noting that “harnessing the power of real-world evidence” is a priority for the Agency.

On the regulatory policy front, FDA continues to issue new policies.  For example, on December 22, 2021, CBER, CDER, and CDRH issued a draft guidance on the use of “Digital Health Technologies for Remote Data Acquisition in Clinical Investigations,” which addresses the use of computing platforms, software, and sensors to facilitate remote data acquisition during clinical investigations.  CDRH’s FY 2022 agenda prioritizes other software-related guidance documents, including final guidance on Clinical Decision Support (CDS) Software and draft guidances on risk categorizations for Software as a Medical Device (SaMD) and the content of premarket submissions for SaMD (the latter of which FDA published early in FY 2022, on November 4, 2021).  It also is possible that FDA will apply some of the lessons from FDA’s Pre-Cert Pilot Program to develop new approaches for software developers.  Drug sponsors continue to watch what (if anything) FDA will do with the framework for “prescription drug-use-related software” (PDURS) proposed in November 2018.  In sum, expect an active FDA in 2022.

2.        AI/ML-Based Software Regulation

FDA recognizes the potential for AI/ML-based software to transform healthcare and has outlined several priorities regarding AI/ML-based software as a medical device (AI/ML-based SaMD).  In a January 2021 AI/ML-Based SaMD Action Plan, FDA recognized that adaptive AI/ML-based SaMD raise unique regulatory issues, such as how to manage device modifications after FDA clearance, and how to determine which modifications trigger the need for FDA premarket review.  FDA has proposed a regulatory framework to potentially allow for modifications to algorithms based on real-world, postmarket learning and adaptations while maintaining safety and effectiveness.  The 2021 Action Plan addressed stakeholder feedback on an earlier discussion paper, and promised to update the proposed framework for AI/ML-based SaMD, including through issuance of draft guidance.  CDRH’s FY 2022 agenda included a proposed guidance document for premarket submissions that outlined a change control plan for AI/ML-based SaMD.  To provide greater transparency, CDRH also launched an Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Device List.  The list—though not exhaustive—contains publicly available information on AI/ML-enabled devices marketed in the U.S., many of which currently have “locked” algorithms (i.e., algorithms that do not change without human intervention).
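To make the locked-versus-adaptive distinction concrete, the sketch below is a purely illustrative example (not FDA’s framework and not any cleared device): it contrasts a “locked” algorithm, whose behavior does not change without human intervention, with an adaptive algorithm that accepts postmarket updates only when they stay within hypothetical, pre-specified bounds of the kind a change control plan might describe.

```python
# Illustrative only: a "locked" model versus an adaptive model whose postmarket
# updates are accepted only within hypothetical pre-specified limits.
from dataclasses import dataclass


@dataclass(frozen=True)              # frozen: parameters cannot change after release
class LockedModel:
    threshold: float                 # fixed decision threshold shipped with the device

    def predict(self, risk_score: float) -> bool:
        return risk_score >= self.threshold


@dataclass
class AdaptiveModel:
    threshold: float
    # Hypothetical bounds standing in for what a change control plan might pre-specify.
    min_threshold: float = 0.4
    max_threshold: float = 0.9
    min_validation_auc: float = 0.85

    def propose_update(self, new_threshold: float, validation_auc: float) -> bool:
        """Apply a postmarket update only if it stays within pre-specified limits."""
        within_bounds = self.min_threshold <= new_threshold <= self.max_threshold
        performs_well = validation_auc >= self.min_validation_auc
        if within_bounds and performs_well:
            self.threshold = new_threshold
            return True
        return False                 # out-of-bounds changes would need fresh review


if __name__ == "__main__":
    locked = LockedModel(threshold=0.7)
    adaptive = AdaptiveModel(threshold=0.7)
    print(locked.predict(0.8))                  # True: behavior is fixed
    print(adaptive.propose_update(0.65, 0.91))  # True: update within the assumed bounds
    print(adaptive.propose_update(0.95, 0.91))  # False: outside the assumed bounds
```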

On the global stage, FDA, Health Canada, and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA) identified 10 guiding principles that can inform the development of Good Machine Learning Practice (GMLP) in an October 2021 guidance titled, “Good Machine Learning Practice for Medical Device Development: Guiding Principles.”  The guiding principles aim to promote safe, effective, and high-quality AI/ML-based medical devices.  For more information on these principles, see our previous post here.  Also, the International Medical Device Regulators Forum (IMDRF) AI Working Group released a draft guidance on September 16, 2021, titled “Machine Learning Enabled Medical Devices – a Subset of Artificial Intelligence: Key Terms and Definitions.” This guidance aims to establish relevant terms and definitions across the total product lifecycle to promote consistency and support global harmonization efforts.  Bottom line, watch for continued emphasis in 2022 on developing the appropriate regulatory framework for AI/ML-based SaMD.

3.        PDUFA & MDUFA Reauthorization in Congress

Digital health priorities are embedded in the Agency’s user fee commitments for fiscal years (FYs) 2023 through 2027.  As background, the Prescription Drug User Fee Act (most recently reauthorized as PDUFA VI) and the Medical Device User Fee Act (most recently reauthorized as MDUFA IV) sunset every five years, unless reauthorized by Congress, and PDUFA VI and MDUFA IV expire on September 30, 2022.  The reauthorization of PDUFA and MDUFA is regarded as “must-pass” legislation in Congress, given the critical nature of user fees to FDA’s activities.

As part of the reauthorization process, FDA has negotiated user fee commitment letters with the relevant regulated industries, taking input from patient and consumer groups along the way.  These commitment letters outline the performance goals agreed to by the Agency for the next five fiscal years, if Congress reauthorizes the associated user fee levels.

In its PDUFA VII commitment letter, published August 23, 2021, FDA committed to expanding the role of digital health technologies in drug development, drug reviews, and decentralized clinical trials.  From FY 2023 through FY 2027, FDA commits to establishing a digital health technology framework, identifying demonstration projects to inform evaluations of digital health technologies, issuing guidance on the use of digital health technologies in clinical trials, and expanding its digital health staff and expertise.  FDA also plans to host public meetings to gather input on issues related to the use of digital health technologies in regulatory decision making.  As further discussed under Issue 5, FDA also committed to take steps to advance the use of RWD/E.

FDA has yet to publish the MDUFA V commitment letter, but meeting minutes from industry and stakeholder discussions suggest that digital health is a topic of interest.  Industry should watch for the MDUFA V commitment letter and then monitor whether Congress adds additional legislative changes on digital health topics for both drugs and devices as part of the 2022 user fee reauthorization.

4.        Software-Related Policies in Cures 2.0 and the VALID Act

Two key bills being considered in Congress include provisions that would impact FDA’s digital health policies: the VALID Act and Cures 2.0.

First, on June 24, 2021, Sens. Michael Bennet (D-CO) and Richard Burr (R-NC) and Reps. Diana DeGette (D-CO) and Larry Bucshon (R-IN) reintroduced a revised version of the Verifying Accurate Leading-edge IVCT Development (VALID) Act, following its initial introduction in March 2020.  The VALID Act’s definition of “in vitro clinical test” currently encompasses software used in diagnostic testing.  If enacted, this could result in software used in connection with diagnostic tests being regulated under the new VALID Act framework.  Stakeholders should monitor the legislation and, if it is enacted, how that new framework would intersect with FDA’s other digital health policies, such as those relating to CDS software.

Second, Congressional leaders Diana DeGette (D-CO) and Fred Upton (R-MI) are working on a bipartisan follow-up to the 2016 21st Century Cures Act, dubbed “Cures 2.0.”  As discussed in a previous post, Cures 2.0 was introduced in the House on November 17, 2021 and lays out several notable policies related to digital health, RWD/E, and telehealth, among other provisions.

Stakeholders should monitor how these legislative proposals advance in Congress this year, including as potential amendments to the “must-pass” FDA user fee reauthorization discussed in Issue 3.

5.        FDA’s Real-World Evidence (RWE) Program

FDA continues to advance the use of RWD/E as part of the agency’s regulatory decision making.  As background, CDER and CBER published a framework in 2018 for FDA’s RWE Program for human drugs and biological products, as required by section 3022 of the 21st Century Cures Act.  The Cures Act also required FDA to issue guidance documents by December 13, 2021, regarding the circumstances under which drug sponsors may rely on RWD/E and the appropriate standards and methodologies for the collection and analysis of RWD/E.  In line with this requirement, FDA recently published four significant draft guidance documents:

  1. Real-World Data: Assessing Electronic Health Records and Medical Claims Data To Support Regulatory Decision-Making for Drug and Biological Products (September 2021)
  2. Data Standards for Drug and Biological Product Submissions Containing Real-World Data (October 2021)
  3. Real-World Data: Assessing Registries to Support Regulatory Decision-Making for Drug and Biological Products (November 2021)
  4. Considerations for the Use of Real-World Data and Real-World Evidence to Support Regulatory Decision-Making for Drug and Biological Products (December 2021)

CDRH has been active in this space as well, issuing a guidance in 2017 on the use of RWD/E to support regulatory decision-making for medical devices and issuing a report last year outlining examples of RWD/E used in various regulatory decisions involving devices.

As stated in FDA’s PDUFA VII commitment letter (and as previewed under Issue 3), FDA intends to launch a pilot “Advancing Real-World Evidence (RWE) Program” with three key goals: (1) to identify approaches for generating RWE that meet regulatory requirements; (2) to develop agency processes that promote consistent decision-making and shared learning regarding RWE; and (3) to promote awareness of characteristics of RWE that can support regulatory decisions by allowing FDA to discuss study designs considered in the Advancing RWE Program in a public forum.  As part of this Pilot, sponsors can apply to participate in the Advancing RWE Program meetings, which will provide an optional pathway for submitting RWE proposals.  Sponsors who do not participate in the pilot program will still have an opportunity to engage with the Agency on RWE issues through existing channels.  In its PDUFA VII commitment letter, FDA also commits to reporting out information regarding RWE submissions to CDER and CBER by June 2024 and updating RWE guidance (or drafting new guidance) reflecting FDA’s experience with the Pilot Program by December 2026.

Bottom line, watch for additional FDA decisions and actions on the RWD/E front in 2022, including as the Agency prepares for its 2023 PDUFA VII commitments.

On 27 October 2021, the U.S. Food and Drug Administration (“FDA”), Health Canada, and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (“MHRA”) (together the “Regulators”) jointly published 10 guiding principles to inform the development of Good Machine Learning Practice (“GMLP”) for medical devices that use artificial intelligence and machine learning (“AI/ML”).

Purpose

AI and ML have the “potential to transform health care” through their ability to analyse vast amounts of data and learn from real-world use.  However, these technologies also pose unique challenges, given their complexity and the constantly evolving, data-driven nature of their development.  The Regulators formed the guiding principles to “help promote safe, effective, and high-quality medical devices that use . . . AI/ML” and to “cultivate future growth” in this fast-paced field.

The Regulators predict that the guiding principles could be used to: (i) adopt good practices from other sectors; (ii) tailor these practices to the medical technology/healthcare sector; and (iii) create new practices specific to the medical technology/healthcare sector.  The Regulators expect these joint principles to inform broader international engagements as well.

The 10 Guiding Principles

The guidance published by the Regulators sets out the 10 principles in full; however, in short, they recommend:

  1. Leveraging multi-disciplinary expertise throughout the total product life cycle
  2. Implementing good software engineering and security practices
  3. Ensuring clinical study participants and data sets are representative of the intended patient population
  4. Making training data sets independent of test sets
  5. Basing selected reference datasets upon best available methods
  6. Tailoring the model design to the available data and ensuring it reflects the intended use of the device
  7. Placing focus on the performance of the human-AI team
  8. Ensuring testing demonstrates device performance during clinically relevant conditions
  9. Providing users with clear, essential information
  10. Monitoring deployed models for performance and managing re-training risks

These principles cover the entire life cycle of devices with the aim of ensuring safety and efficacy.  The Regulators have focused on the use of appropriate datasets and on carrying out sufficient testing before AI/ML-based devices are marketed.  The guiding principles also set out an ongoing recommendation to manage risk, which will involve monitoring and potentially re-training AI/ML-based devices after deployment.
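As a purely illustrative sketch (not drawn from the guidance itself), the snippet below shows the kind of post-deployment check the final principle contemplates: comparing a deployed model’s real-world accuracy against its premarket baseline and flagging degradation for review rather than silently re-training.  The threshold values and function name are assumptions for illustration only.

```python
# Hypothetical post-deployment monitoring: compare recent field accuracy with a
# premarket baseline and escalate drift for review before any re-training.
from statistics import mean
from typing import Sequence


def monitor_performance(baseline_accuracy: float,
                        recent_outcomes: Sequence[bool],
                        tolerance: float = 0.05) -> str:
    """Compare recent real-world accuracy with the premarket baseline."""
    recent_accuracy = mean(1.0 if correct else 0.0 for correct in recent_outcomes)
    drift = baseline_accuracy - recent_accuracy
    if drift > tolerance:
        # Degradation beyond tolerance: escalate rather than silently re-train.
        return f"ALERT: accuracy fell to {recent_accuracy:.2f}; review before re-training"
    return f"OK: accuracy {recent_accuracy:.2f} is within tolerance of the baseline"


# Example with an assumed 0.90 premarket baseline and eight field outcomes.
print(monitor_performance(0.90, [True, True, False, True, False, True, False, True]))
```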

These principles are merely a starting point.  The Regulators stated, “[a]s the AI/ML medical device field evolves, so too must GMLP best practice and consensus standards.”

Possible Impact & International Considerations

AI and ML are clearly top priorities from a global health regulatory perspective.  The Regulators expect this collaboration to lead to further and broader international collaborative work.  As noted above, the Regulators expect these guidelines to evolve and emphasize the importance of “strong partnerships with [their] international public health partners.”

As one example, the guiding principles identify areas of possible collaboration for the International Medical Device Regulators Forum (“IMDRF”), international standards organizations, and other collaborative bodies.  These areas include “research, creating educational tools and resources, international harmonization, and consensus standards.”

This collaboration is important as it follows on from the individual work each agency has been doing in this space.  For example, MHRA has consulted on the future regulation of medical devices in the UK, including by developing a Work Programme for Software and AI-based Medical Devices (which we previously discussed in our blog post).  FDA has also been active in the AI/ML space, and several more FDA digital health developments are on the horizon for 2022.  This international regulatory collaboration suggests the Regulators are working towards a united front, closely aligning on best practice and international regimes.  It also shows that the UK is considering international regimes broadly, rather than simply aligning with the European Union.

In sum, it appears there is an appetite for further international regulatory collaboration, so watch this space for the potential development of more detailed and sector specific international standards and practices for AI/ML-based technologies.

 

On Wednesday, October 6th, Governor Gavin Newsom signed SB 41, the Genetic Information Privacy Act, which expands genetic privacy protections for consumers in California, including those interacting with direct-to-consumer (“DTC”) genetic testing companies.  In a recent Covington Digital Health blog post, our colleagues discussed SB 41 and the growing patchwork of state genetic privacy laws across the United States.  Read the post here.

Last Friday, October 1, the Protecting DNA Privacy Act (HB 833), a new genetic privacy law, went into effect in the state of Florida, establishing four new crimes related to the unlawful use of another person’s DNA.  While the criminal penalties in HB 833 are notable, Florida is not alone in its focus on increased genetic privacy protections.  A growing number of states, including Utah, Arizona, and California, have begun developing a net of genetic privacy protections to fill gaps in federal and other state legislation, often focused on the privacy practices of direct-to-consumer (“DTC”) genetic testing companies.  While some processing of genetic information is covered by federal law, the existing patchwork of federal genetic privacy protections does not clearly cover all forms of genetic testing, including DTC genetic tests.

Florida’s Protecting DNA Privacy Act

HB 833 was introduced in the Florida House of Representatives in February 2021 and signed by the governor in June.  HB 833 applies to DNA samples collected from a person in Florida and regulates any person’s use, retention, disclosure, or transfer of another person’s DNA samples or analysis.  HB 833 amended Florida’s previous genetic privacy law, s. 760.40, F.S., to require that the person from whom DNA is extracted give “express consent” for a specified use of their genetic information.  Under the previous law, analyzing a person’s DNA without informed consent was a first-degree misdemeanor; under HB 833, however, unlawful use may be a felony, depending on the provision of the law violated.  Additionally, HB 833 provides that a person’s genetic information is the “exclusive property” of that person to control.  While HB 833 imposes notable criminal penalties for those who violate it, there are a number of exceptions (e.g., criminal prosecution or other legal processes, medical diagnosis or treatment, or conducting or preparing research subject to federal law, including the Common Rule and the Health Insurance Portability and Accountability Act (“HIPAA”)).

HB 833 is not the only recent change to genetic privacy protections in Florida.  In July 2020, Florida enacted HB 1189, which extended existing protections barring health insurers’ use of genetic information to long-term care and life insurers, including those that issue policies with disability insurance.  Specifically, HB 1189 prohibits these insurers from canceling, limiting, or denying coverage, or varying premium rates, based on genetic information.  Further, HB 1189 bars these insurers from requiring or soliciting genetic information or test results, or using a consumer’s decision as to whether to take any actions related to genetic testing, “for any insurance purpose.”

Additional DTC Genetic Privacy Laws and Bills

Earlier this year, Utah enacted SB 227, the Genetic Information Privacy Act, which imposes restrictions on DTC genetic testing companies, requiring specific privacy notices, security processes to protect consumer data, and the ability of a consumer to access and delete their own personal genetic data.  Similar to Florida’s HB 833, Utah’s SB 227 contains a requirement that DTC genetic testing companies obtain express consent for the collection, use, or disclosure of consumer genetic data.  Additionally, SB 227 specifically creates data de-identification requirements, including that the company in possession of the data impose specific measures to ensure data cannot be re-identified and “enters into legally enforceable contractual obligation that prohibits a recipient of the data from attempting to reidentify the data.”

Arizona also recently enacted HB 2069, the Genetic Information Privacy Act, which became effective last week on September 29.  HB 2069 also focuses on DTC genetic testing companies and is similar to Utah’s SB 227 in many respects (e.g., initial consent must be obtained to collect and use genetic data, followed by certain separate express consents for purposes beyond the initial use), but not all (e.g., the standard for de-identifying genetic data).

The California state legislature has passed SB 41, its own Genetic Information Privacy Act, which has many of the same consent, privacy, and security mechanisms present in the Utah and Arizona laws.  The bill is currently sitting on the governor’s desk for signature.  SB 41 creates its own de-identification standard similar to that created in Utah’s SB 227.  Additionally, SB 41 requires a DTC genetic testing company to comply with a consumer’s revocation of consent and to destroy the consumer’s biological sample within 30 days of that revocation.  SB 41 is almost identical to a bill vetoed by the governor last year due to concerns over interference with COVID-19 test result reporting to public health authorities.  However, SB 41 attempts to address the governor’s concerns by providing a carve-out for tests used to diagnose a specific disease, as long as genetic information obtained through the diagnostic test is treated as medical or protected health information.

Federal Genetic Privacy Landscape and Efforts

Current federal genetic privacy protections stem from several laws, including HIPAA, the Genetic Information Nondiscrimination Act of 2008, and the Federal Trade Commission’s ability to bring actions against “unfair” or “deceptive” business practices.  However, these laws do not cover all forms of genetic testing that a consumer may engage with, including DTC genetic tests.  There have been recent attempts to pass federal legislation to protect Americans’ personal health data.  In January 2021, Senators Amy Klobuchar and Lisa Murkowski introduced S.24, the Protecting Personal Health Data Act, which aims to broadly protect personal health data not covered by HIPAA.  S.24 defines “personal health data” to include “genetic information . . . that relates to past, present, or future physical or mental health or condition of an individual that identifies the individual or with respect to which there is a reasonable basis to believe that the information can be used to identify the individual” and states that DTC genetic testing services are covered as “services” under the bill.  Since its introduction, S.24 has been referred to the U.S. Senate Committee on Health, Education, Labor, and Pensions, but it has not otherwise advanced.

On September 15, 2021, CMS published a proposed rule to repeal the Medicare Coverage of Innovative Technology (MCIT) and Definition of “Reasonable and Necessary” Final Rule (“MCIT/RN Rule”), which was published on January 14, 2021 and was set to take effect on December 15, 2021.  The MCIT/RN Rule would have created a pathway to provide nationwide Medicare coverage for medical devices beginning simultaneously with a device’s receipt of marketing authorization under FDA’s Breakthrough Devices Program; Medicare coverage would have lasted for four years, after which a breakthrough device would either be covered through a National Coverage Determination or covered at the local level at the discretion of Medicare Administrative Contractors.  The MCIT/RN Rule was intended to address concerns that delay or uncertainty around Medicare coverage hampered beneficiary access to innovative technologies.  Digital health technologies that are eligible for breakthrough designation and fall within a Medicare benefit category would have been eligible for the MCIT pathway.

In deciding to repeal the MCIT/RN Rule, CMS emphasized that FDA and CMS are guided by different statutory standards: FDA must determine whether a device is safe and effective, and CMS must determine whether a device is reasonable and necessary for the diagnosis or treatment of illness or injury.  CMS explained that accelerated coverage for breakthrough devices could result in coverage without adequate evidence that a device is reasonable and necessary.  In particular, CMS noted its concern that FDA regulations do not require clinical studies to include Medicare beneficiaries and that, as a result, MCIT might result in coverage of devices that lack data demonstrating reasonableness and necessity for Medicare patients.


The Medicines & Healthcare products Regulatory Agency (“MHRA”) has published a “Consultation on the future regulation of medical devices in the United Kingdom” (the “Consultation”), which will run until 25 November 2021.  The Consultation sets out proposed changes to the UK medical device regulatory framework with the aim to “develop a world-leading future regime for medical devices that prioritises patient safety while fostering innovation.”

Separately, the MHRA has published a work programme on software and AI as a medical device to deliver a regulatory framework that makes sure that the UK is the home of responsible innovation for medical device software.  Any legislative change proposed by the work programme will build upon the wider reforms to medical device regulation being consulted upon as a part of the Consultation.

The MHRA intends that any amendments to the UK medical device framework will come into force in July 2023.  This aligns with the date when UKCA marking will become mandatory in the UK and when EU CE marks will no longer be recognized.  The MHRA has made clear that it will provide adequate transition periods before adopting any new requirements.

All interested parties are encouraged to contribute to shaping the future regulation of medical devices in the UK by responding to the MHRA’s consultation before the deadline (25 November 2021).

Consultation Summary

The MHRA’s overarching objectives are to develop a regime for medical devices that enables:

  • “Improved patient and public safety;
  • Greater transparency of regulatory decision making and medical device information;
  • Close alignment with international best practice, and;
  • More flexible, responsive and proportionate regulation of medical devices.” (emphasis added)

The Consultation sets out a proposal for a future UK-wide regime to regulate medical devices, which would run in parallel to existing or future EU rules.  However, the Consultation acknowledges that the MHRA seeks “…greater alignment with … international regimes rather than bringing in higher regulatory burdens.”

The Consultation sets out four “significant areas” that the new regime will focus on, namely:

  • creating new access pathways to support innovations
  • a unique, innovative, and ambitious framework for regulating software and artificial intelligence as medical devices
  • reforming IVD regulation
  • becoming a sustainability pioneer – through safe reuse and remanufacture

The MHRA has further split these areas into 15 Chapters, which cover: (1) the scope of the regulation; (2) classification; (3) economic operators; (4) registration and UDI; (5) approved bodies; (6) conformity assessments; (7) clinical investigations/performance studies; (8) post-market surveillance, vigilance and market surveillance; (9) in vitro diagnostic medical devices; (10) software medical devices; (11) implantable devices; (12) other product specific changes; (13) environmental sustainability and public health impacts; (14) routes to market; and (15) transitional arrangements.

For each of these chapters, the Consultation indicates how the MHRA proposes to update the UK regulatory framework in line with the overarching objectives detailed above.  Many of the principles appear to align with those in the EU Medical Devices Regulation 2017/745 (“MDR”) and EU IVD Medical Devices Regulation (EU) 2017/746 (“IVDR”), which is not surprising given the UK was a key player in the development of the MDR and IVDR.  It is clear, however, that the UK intends for broader alignment with international standards (e.g. the IMDRF) rather than simply aligning with the EU regulatory framework.

Impact on Software and AI

The MHRA acknowledges that software and AI are developing fast and play an “increasingly prominent role within health systems” and that UK device regulation needs to be updated to both protect patient safety and also keep up with technological advances.

Chapter 10 of the Consultation sets out proposed changes for “Software as a Medical Device (SaMD), including AI as a medical device (AIaMD).”  The Consultation provides a detailed overview of changes, including defining software, introducing requirements for persons selling SaMD via electronic means, adopting the IMDRF Risk Categorization for SaMD and defining specific requirements for AIaMD (amongst others).

Separately, the MHRA published details of an extensive work programme to inform regulatory changes for software and AI devices.  The work programme aims to ensure that:

1.     “The requirements for software and AI as a medical device provide a high degree of assurance that these devices are acceptably safe and function as intended, thereby protecting patients and public

2.     That the requirements are clear, supported by both clarificatory guidance and streamlined processes that work for software, as well as bolstered with the tools to demonstrate compliance, for instance, via the designation of standards

3.     That friction is taken out of the market by working with key partners such as the National Institute for Health and Care Excellence and NHSX to align, de-duplicate, and combine requirements, ultimately providing a joined-up offer for digital health within the UK.” (emphasis added)

The programme includes 11 work packages over two workstreams. The work packages relate to: (1) qualification; (2) classification; (3) pre-market; (4) post-market; (5) cyber secure medical devices; (6) innovative access; (7) SaMD airlock; (8) mobile health and apps; (9) AI rigour; (10) AI interpretability; and (11) AI adaptivity.

The MHRA plans to deliver these work packages between autumn 2021 and summer 2023.  It is anticipated that much of the reform from the work packages will be in the form of clarificatory guidance, standards, or processes rather than secondary legislation. Any legislative change proposed by any work package for software/AI will build upon wider reforms to medical device regulation being consulted upon as a part of the Consultation.

On September 15, the Federal Trade Commission (“FTC”) adopted, on a 3-2 party-line vote, a policy statement that takes a broad view of which health apps and connected devices are subject to the FTC’s Health Breach Notification Rule (the “Rule”) and what triggers the Rule’s notification requirement.

The Rule was promulgated in 2009 under the Health Information Technology for Economic and Clinical Health (“HITECH”) Act.  Under the Rule, vendors of personal health records that are not otherwise regulated under the Health Insurance Portability and Accountability Act (“HIPAA”) are required to notify individuals, the FTC, and, in some cases, the media following a breach involving unsecured identifiable health information.  16 C.F.R. §§ 318.3, 318.5.  Third-party service providers also are required to notify covered vendors of any breach.  16 C.F.R. § 318.3.


On August 23, 2021, the UK Government published its report entitled “Harnessing technology for the long-term sustainability of the UK’s healthcare system” (the “Report”). The Report calls for system-wide adoption of technology in the UK health system to enable transformative change that will benefit the health and wellbeing of the UK and promote economic growth.  However, the Report cautions that technology alone cannot overcome the inequalities that lead to disparities in health outcomes and that digital tools for health should be accessible to all, or risk exacerbating health inequalities as a result of a “digital divide”. The Report notes how the COVID-19 pandemic has both exposed the limitations of the current system and highlighted the capability of the UK National Health Service (“NHS”) to respond with flexibility and agility. The Report also makes several recommendations to the UK Government, including setting up “Demonstrators” to test the system-wide application of healthcare technologies.

The Report arrives ahead of the expected publication of the UK Government’s review into the use of health data for research and analysis (see our earlier blog here), and outlines the opportunities presented by technology in the context of public healthcare systems.


The International Coalition of Medicines Regulatory Authorities (“ICMRA”) has published a report on the use of artificial intelligence (“AI”) to develop medicines (the “AI Report”) that provides a series of recommendations on how regulators and stakeholders can address challenges posed by AI.  The ICMRA notes that there are numerous opportunities to apply AI to medicines development, but that AI poses a number of challenges to existing regulatory frameworks.  The AI Report discusses these opportunities and challenges in detail based on several case studies, and provides a set of recommendations for implementation by the ICMRA and its member authorities, which include the European Medicines Agency (the “EMA”), the U.S. Food and Drug Administration, and the World Health Organisation.  Based on the AI Report, we expect to see an increased focus on adapting regulatory frameworks to deal with AI products going forward, at both the international and national level.


Legislation that would amend California’s Confidentiality of Medical Information Act (“CMIA”) is working its way through California’s Senate and passed the Senate Health Committee earlier this week.  The proposed bill passed the state’s Assembly back in April.  Introduced by Democratic California Assemblymember Edwin Chau, who sits on the Privacy and Consumer Protection Committee, the proposed legislation (AB 1436) expands the definition of “provider of health care.”  Under the CMIA, providers of health care are subject to various obligations, including provisions that restrict the disclosure of medical information without a prior valid authorization, subject to certain exceptions.