On 20 November 2018, the UK government published its response (the “Response”) to the June 2018 consultation (the “Consultation”) regarding the proposed new Centre for Data Ethics and Innovation (“DEI”). First announced in the UK Chancellor’s Autumn 2017 Budget, the DEI will identify measures needed to strengthen the way data and AI are used and regulated, advising on how to address potential gaps in regulation and outlining best practices in the area. The DEI is described as being the first of its kind globally, and represents an opportunity for the UK to take the lead in the debate on how data is regulated.
As previewed by Commissioner Gottlieb several months ago (see our earlier post here), FDA published a notice in the Federal Register on November 20, 2018, to propose a new framework for “prescription drug-use-related software.” The Agency defines this digital health category widely as software disseminated by a prescription drug sponsor for use with the sponsor’s prescription drug(s). Last spring, the Commissioner stated that FDA would be seeking input “on how to support the development of digital health tools that are included as part of approved drugs.” The goal in establishing the framework, Gottlieb stated, would be “to develop an efficient pathway for the review and approval of digital health tools as part of drug review, so that these tools reach their full potential to help us treat illness and disease, and encourage synergies between software and therapeutics that meet FDA’s gold standard for safety and effectiveness.”
This policy development is significant, not only because it is one of CDER’s first policy statements on digital health associated with pharmaceuticals (see a few of our earlier posts about pharma-related digital health here and here), but also because it implicates a broad range of information that could be made available by prescription drug sponsors through software used with their products. We encourage prescription drug sponsors with any interest in providing digital health solutions, including through collaborations, to review the Federal Register notice and consider submitting comments to FDA.
Here are a few key takeaways from FDA’s notice:
- Under the proposed framework, software with the same drug-related functionalities will be subject to different regulatory approaches by FDA, depending on the developer of the software. FDA will apply the proposed framework to prescription drug-use-related software developed by or on behalf of pharmaceutical manufacturers, and a different approach to drug-related software developed “independently” by third-party software developers and other entities that are not prescription drug sponsors.
- It is unclear from the notice how the proposed framework, including the evidentiary standards described in the Federal Register notice, will align with other FDA initiatives such as the use of real-world evidence for drug development and the pre-certification program (see our earlier post here).
- An important question for prescription drug sponsors in particular is whether the proposed framework will encourage continued digital health innovation, including through collaborations, or whether FDA’s proposal will create challenges that may discourage advances in this area.
On 17 October, the UK Government’s Department of Health and Social Care (DHSC) published a policy paper entitled “The future of healthcare: our vision for digital, data and technology in health and care” (the Policy Paper). The Policy Paper outlines the DHSC’s vision to use technology across the health and care system, from “getting the basics right”, to the UK’s “chance to lead the world on healthtech”, and “ultimate objective [of] the provision of better care and improved health outcomes for people in England”.
The DHSC acknowledges that there are “many real challenges” including the presence of legacy technology and commercial arrangements, complex organizational and delivery structures, risk-averse culture, limited resources to invest, and a critical need to build and maintain public trust.
To achieve its objectives, the DHSC has set out in the Policy Paper four ‘guiding principles’ to operate by:
- User need – including designing services around the needs of different users (whether the public, clinicians or other staff), to help more people achieve the intended outcome and to reduce costs by cutting the resources required to resolve issues.
- Privacy and security – such as ensuring the digital architecture of the healthcare system is underpinned by “clear”, “commonly understood” data and cyber security standards, guidance and frameworks, to be mandated across the NHS, “secure by default” and based on the General Data Protection Regulation (GDPR).
- Interoperability and openness – to help address current “poor” interoperability via, for instance, open data and technology standards in adherence with clinical data standards: the DHSC’s intention is that anyone writing code for the NHS’s use will know these standards before they start, and that technology can be used to provide more granular detail to help fight diseases and treat illnesses.
- Inclusion – to account for users with different physical, mental health, social, cultural and learning needs, low digital literacy or limited accessibility, and the intention that those able to benefit from digital services may help free up resources for those with greater health needs who are unable to use digital services.
Building on the guiding principles, the Policy Paper sets out a series of accompanying “architectural principles”, and four key priorities: infrastructure, digital services, innovation, and skills and culture.
Infrastructure

This priority builds particularly upon the first three principles outlined above. For instance, with respect to patients’ data, the Policy Paper concedes that the ability to share records between different care levels (e.g. hospitals, general practitioners, and pharmacies) is inconsistent. The Policy Paper also notes that contracts in place for current systems often do not adequately specify the standards of interoperability, usability and continual improvement required. 2017’s disruptive “WannaCry” cyber-attack (now acknowledged by the DHSC, citing a National Audit Office report, to have been “relatively unsophisticated”) aptly illustrates the importance of data safeguarding and cyber security standards.
To address these concerns, and in order to “put in place the right infrastructure” and “buy the best technology”, the DHSC outlines some of the steps it is taking to build upon the existing safeguards in legislation and security standards, such as via the “Initial code of conduct for data-driven health care and technology” (the Code, previously discussed here), and the draft “NHS digital, data and technology standards framework” (the Framework) published alongside the Policy Paper. The DHSC also refers to the need for “quick, efficient procurement processes, small and short contracts, and clear documentation…[of] datasets and systems” and to avoid “building our own versions of … commodity services” (e.g. email clients and laptops).
Digital services

The Policy Paper outlines some of the public-facing digital services already provided (e.g. “NHS.uk” and the “NHS apps library”) and in development (e.g. “NHS Login”). The Policy Paper notes that digital services are also needed for staff across the health and care sector to prevent “wast[ing] vital time logging on to systems, or transcribing clinical data by hand or over the phone”. Services built, bought or commissioned “should start with user needs”; in instances where user needs are unique and industry may not obtain the economies of scale it needs to invest, the DHSC wants to be empowered to build its own digital services in accordance with the government’s Digital Service Standard.
Innovation

The DHSC intends to put in place a framework allowing “researchers, innovators and technology companies to thrive, quickly access support and guidance, and develop products that meet user needs”, and “support the uptake and adoption of the best of those services”. To create a healthtech “ecosystem”, the Policy Paper outlines the DHSC’s intention to work alongside experts to put in place standards (addressing evidence, privacy, cyber security and access to data), communicate user needs, support access to finance, encourage NHS/industry collaboration, and improve the procurement process (e.g. reducing the burdens for small companies trying to sell to the NHS, and building on the government’s “G-Cloud framework” on its digital marketplace). The focus will be to “simplify the institutional landscape for support for healthtech”, remove barriers to market entry and encourage innovation.
The DHSC also intends to introduce a ‘healthtech regulatory sandbox’ to “test, iterate and de-risk the most promising innovations”, working alongside the Information Commissioner’s Office (ICO), National Data Guardian, National Institute for Health and Care Excellence (NICE), and other regulators.
The Policy Paper also highlights the potential of Artificial Intelligence (AI) to improve diagnosis and care, and the need to enforce the high standards of good practice for the development of these emerging technologies (such as via continued development of the Code).
Skills and culture
The Policy Paper specifies a need across the health and social care system to both recruit and retain specialist professionals (such as data scientists and analytics personnel) who are skilled and well-resourced to make best use of data, while continuing to develop the skills of clinicians and staff already working in health and care services. The DHSC intends for leaders at every level to ensure their staff are trained to use data and technology in their work, and that all health and care organizations have board-level understanding of how data and technology drive their services and strategies.
There will also be a new ‘Healthtech Advisory Board’, comprising technology experts, clinicians and academics. The board will be used as an “ideas hub” and report directly to the Secretary of State for Health and Social Care.
In light of the rapidly expanding field of medical software technology, and its recognition that traditional approval mechanisms for hardware-based medical devices may not be well suited to regulating such technology, FDA is piloting a new, streamlined regulatory approach for digital health technologies. The initiative, currently a “working model” and known as the Software Precertification Program, is meant to encourage the development of health and medical software, including potentially software using artificial intelligence.
As currently envisioned, the Precertification Program creates a voluntary, organization-based approach to FDA review of digital health software. FDA will pre-certify organizations as having a “culture of quality” based on FDA’s review of the software developer’s R&D and monitoring systems. Under the working model, pre-certified organizations could submit less information in premarket submissions to FDA than currently required or may qualify for an exemption from premarket review by FDA.
Although it is unknown what specific metrics will be assessed in FDA’s review of organizations in the Precertification Program, the agency has asserted that it will seek to measure Product Quality, Patient Safety, Clinical Responsibility, Cybersecurity Responsibility, and Proactive Culture. Each of these elements will be evaluated in an evidence-based manner and FDA will make a determination regarding certification. Certification status will also be subject to monitoring and revision based on real-world performance data.
FDA’s intent to certify or pre-clear organizations rather than individual products on these safety and effectiveness elements opens a new potential arena for product liability litigation surrounding medical devices. In particular, medical devices are currently governed by an express preemption scheme under which federal law preempts certain state laws that are “different from, or in addition to, any requirement” of federal law. Under that standard, certain lawsuits concerning the safety of a medical device may be preempted, including (1) state-law claims premised on an allegation of fraud on the FDA, and (2) state-law claims involving devices that require pre-market approval, except to the extent those claims simply argue for design or warning requirements that “parallel” federal mandates.
To the extent that lawsuits alleging injury from medical device software (e.g., misdiagnosis) are brought against software developers, resolution of those tort claims will almost invariably involve evaluation by finders of fact of the very elements that FDA intends to examine and pre-certify: whether the software developer has developed, tested, and maintained the software in a fashion that will provide safe and effective patient care. Such suits may, therefore, seek to impose under state law requirements for a particular product that are “different from, or in addition to,” the requirements that FDA has imposed on the development organization as a whole in the pre-certification process. And although courts have not yet considered the applicability of organizational requirements versus product-level requirements in this context, imposing tort liability on software developers who have met FDA’s requirements and are compliant with ongoing oversight programs may disrupt the federal regulatory scheme in the same way that tort lawsuits regarding premarket approved medical devices would. The Supreme Court has previously recognized that such disruption is impermissible.
The outcome of this legal issue will likely depend in part on the methods by which FDA implements the Precertification Program — which are yet to be determined — and on the specificity of its evaluation of individual organizations. Nevertheless, developers should be aware that compliance with the Precertification Program, if and when it is implemented, may have benefits not only in the regulatory setting but also in future litigation down the road.
On 8 October, the European Medicines Agency (EMA) published a report (available here) setting out the progress it has made towards applying a common data model (CDM) in Europe. The EMA defines a CDM as “a mechanism by which raw data are standardized to a common structure, format and terminology independently from any particular study in order to allow a combined analysis across several databases/datasets”. The report follows an EMA-hosted workshop in December 2017 to examine the opportunities and challenges of developing a CDM.
The report acknowledges that the use of ‘Real World Data’ (RWD) (data relating to patient health status or delivery of health care that is routinely collected from sources other than clinical trials) has become an increasingly common source of evidence to support drug development and regulatory decision making for human medicines in Europe. However, Europe currently has no pan-European data network, despite the wealth of data generated through various national healthcare systems that provide access for all. Many multi-database studies currently performed are typically slow and still allow for substantial variability in the conduct of studies. Further, there are a growing number of innovative products that no longer align with customary drug development pathways. This may create uncertainty as to the data packages required for authorization, and subsequent tension between facilitating earlier access for patients with limited treatment options and the requirement for proactive, robust pharmacovigilance of medicines for wider clinical use across the product life cycle (the existing EMA Patient Registry Initiative addresses this need in part).
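To make the CDM concept concrete for technically minded readers, the idea can be sketched in a few lines of code: records from different national databases arrive with different structures and local terminologies, and are mapped to one shared schema and vocabulary before any combined analysis. This is only an illustrative sketch of the general mechanism the EMA describes; all field names, codes and vocabularies below are invented for illustration and do not come from the report.

```python
# Illustrative sketch of a common data model (CDM): heterogeneous source
# records are standardized to a common structure, format and terminology
# so they can be analysed together. All names here are hypothetical.

COMMON_FIELDS = ("patient_id", "event_date", "diagnosis_code")

def to_common_model(record, field_map, code_map):
    """Rename source-specific fields to the common schema and translate
    local diagnosis codes into the shared terminology."""
    out = {common: record[source] for common, source in field_map.items()}
    # Translate the local diagnosis vocabulary into the shared one,
    # leaving unknown codes unchanged.
    out["diagnosis_code"] = code_map.get(out["diagnosis_code"], out["diagnosis_code"])
    return {field: out[field] for field in COMMON_FIELDS}

# Two databases describing the same kind of clinical event differently:
uk_record = {"nhs_no": "A123", "seen_on": "2018-10-01", "icd": "E11"}
fr_record = {"id_patient": "B456", "date": "2018-10-02", "cim": "E11.9"}

uk_mapped = to_common_model(
    uk_record,
    {"patient_id": "nhs_no", "event_date": "seen_on", "diagnosis_code": "icd"},
    {"E11": "DIAB-T2"},
)
fr_mapped = to_common_model(
    fr_record,
    {"patient_id": "id_patient", "event_date": "date", "diagnosis_code": "cim"},
    {"E11.9": "DIAB-T2"},
)

# Once standardized, records from several datasets share one structure
# and terminology and can feed a single combined analysis.
combined = [uk_mapped, fr_mapped]
```

The point of the sketch is that the study-independent mapping step, not the analysis itself, is what a CDM standardizes: once every source is expressed in the common structure, the same analysis code can run unchanged across all participating databases.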
China continues to advance policy supporting e-healthcare services and resources. On September 14, 2018, the National Health Commission (“NHC”) and the National Administration of Traditional Chinese Medicine (“NATCM”) publicly released three new rules on internet-based medical services and telemedicine. These rules cover the areas of e-diagnosis (“e-Diagnostic Rules”), internet-based hospitals (“e-Hospital Rules”) and telemedicine services (“Telemedicine Service Standard”) (collectively, the “e-Healthcare Rules”).
Although the government issued a draft of these rules in 2017, the final e-Healthcare Rules appear to have been prompted by the Opinion on Improving the Development of the “e-healthcare” Industry (“Opinion”) issued by China’s chief executive branch, the State Council, on April 25, 2018. That Opinion requires enhancement and improvement of e-health services (including the application of artificial intelligence in the diagnostic process).
This blog entry focuses on key features of the e-Healthcare Rules.
Designing data-driven products and services in compliance with privacy requirements can be a challenging process. Technological innovation enables novel uses of personal data, and companies designing new data-driven products must navigate new, untested, and sometimes unclear requirements of privacy laws, including the General Data Protection Regulation (GDPR). These challenges are often particularly acute for companies providing products and services leveraging artificial intelligence technologies, or operating with sensitive personal data, such as digital health products and services.
Recognising some of the above challenges, the Information Commissioner’s Office (ICO) has commenced a consultation on establishing a “regulatory sandbox”. The first stage is a survey to gather market views on how such a regulatory sandbox may work (Survey). Interested organisations have until 12 October to reply.
The key feature of the regulatory sandbox is to allow companies to test ideas, services and business models without risk of enforcement and in a manner that facilitates greater engagement between industry and the ICO as new products and services are being developed.
Potential benefits of the regulatory sandbox include reducing regulatory uncertainty, enabling more products to be brought to market, and reducing the time of doing so, while ensuring appropriate protections are in place (see the FCA’s report on its regulatory sandbox here for the impact it has had on the financial services sector, including lessons learned).
The ICO indicated earlier this year that it intends to launch the regulatory sandbox in 2019 and will focus on AI applications (see here).
Further details on the scope of the Survey are summarised below.
On 5 September, in response to the opportunities presented by data-driven innovations, apps, clinician decision support tools, electronic health care records and advances in technology such as artificial intelligence, the UK Government published a draft “Initial code of conduct for data-driven health and care technology” (Code) for consultation. The Code is designed to be supplementary to the Data Ethics Framework, published by the Department for Digital, Culture, Media and Sport on 30 August, which guides appropriate data use in the public sector. The Code demonstrates the UK Government’s willingness to support data sharing so as to take advantage of new technologies, improve outcomes for patients and accelerate medical breakthroughs, while balancing key privacy principles enshrined in the GDPR and emerging issues such as the validation and monitoring of algorithm-based technologies. For parties considering data-driven digital health projects, the Code provides a framework to help conceptualise a commercial strategy before engaging with legal teams.
The Code contains:
- a set of ten principles for safe and effective digital innovations; and
- five commitments from Government to ensure the health and care system is ready and able to adopt new technologies at scale,
each of which is listed further below.
While the full text of the Code will be of interest to all those operating in the digital health space, the following points are of particular note:
- the UK Government recognises the “immense promise” that data sharing has for improving the NHS and social care system as well as for developing new treatments and medical breakthroughs;
- the UK Government is committed to the safe use of data to improve outcomes of patients;
- the Code intends to provide the basis for the health and care system and suppliers of digital technology to enter into commercial terms in which the benefits of the partnerships between technology companies and health and care providers are shared fairly (see further below); and
- given that artificial intelligence requires large datasets to function, two key challenges arise: (i) these datasets must be defined and structured in accordance with interoperable standards, and (ii) from an ethical and legal standpoint, people must be able to trust that data is used appropriately, safely and securely, as the benefits of data sharing rely upon public confidence in the appropriate and effective use of data.
The Code sets out a number of factors to consider before engaging with legal teams, to help define a commercial strategy for a data-driven digital health project. These factors include: the scope of the project, term, value, compliance obligations and responsibilities, IP, liability and risk allocation, transparency, management of potential bias in algorithms, the ability of the NHS to add value, and defining the respective roles of the parties (which will require thinking beyond traditional research collaboration models).
Considering how value is created and realised is a key aspect of any data-driven digital health project. The Code identifies a number of potential options – simple royalties, reduced payments for commercial products, equity shares in a business, improved datasets – but there is no single or simple answer. Members of Covington’s digital health group have advised on numerous data-driven collaborations in the healthcare sector. Covington recently advised UK healthcare technology company Sensyne Health plc on pioneering strategic research and data processing agreements with three NHS Trust partners. Financial returns generated by Sensyne Health are shared with its NHS Trust partners via equity ownership in Sensyne Health and a share of royalties (further details are available here).
The UK Government also intends to conduct a formal review of the regulatory framework and assess the commercial models used in technology partnerships, in order to address issues such as bias, transparency, liability and accountability.
The UK Government is currently consulting on the Code (a questionnaire on the Code is available here) and intends to publish a final version of the Code in December.
The Centers for Medicare & Medicaid Services (CMS) recently announced that Medicare coverage policies would be revised “to support the use of [continuous glucose monitors] in conjunction with a smartphone, including the important data sharing function they provide for patients and their families.” In turn, the agency’s contractors, known as Medicare Administrative Contractors (MACs), modified their policies in part to recognize “the use of smart devices (watch, smartphone, tablet, laptop computer, etc.)” (see CMS and MAC announcements here and here). This recent shift is an important precedent for technologies that incorporate the use of electronic devices to display and share medical data, and may foreshadow flexibility in future federal policy development to support the important role smart devices are increasingly playing in communicating medical data.
By way of background, in January 2017, CMS issued a Ruling that therapeutic continuous glucose monitors (CGMs)—i.e., those capable of monitoring blood glucose levels for making diabetes treatment decisions—may be covered by Medicare Part B as Durable Medical Equipment (DME). The Ruling also provided that, among other things, there must be a durable component capable of displaying the trending of the continuous glucose measurements. A CGM system includes a dedicated receiver that tracks and displays glucose levels throughout the day. Several CGM systems currently on the market can also connect to an individual’s smart device, which in turn allows patients to visualize their glucose measurements via app-based communication.
When the MACs issued a coverage policy to implement CMS’s Ruling, they limited the payment of supplies that are used in conjunction with CGMs relying on smart devices because the smart devices were not viewed as medical in nature given they were useful in the absence of an illness. After stakeholders voiced concerns about the restrictions, CMS revisited the issue to address concerns about the inability to share the displayed data with family members, physicians, and caregivers. In a revised Policy Article, the MACs address coverage for CGM systems that use a smart device to provide information, describing two scenarios where the CGM system would be covered:
- CGM supplies would be covered if the glucose data is displayed on the CGM receiver that meets the definition of DME, and is also transmitted to a smart device.
- Coverage of CGM supplies would be available in a situation where the beneficiary uses a CGM receiver on some days to review their glucose data, but also uses their smart device on other days. (If the beneficiary never uses the receiver that comes with the glucose monitor and qualifies as DME under the regulatory definition, then the CGM supply allowance is not covered by Medicare.)
Although the smart device is not considered DME, and supplies and accessories are covered only when the smart device is used in conjunction with a dedicated DME receiver, the policy nevertheless acknowledges the practicalities of data sharing using smart devices. We continue to monitor this and other reimbursement developments relating to the use of smart devices.
Earlier this year, President Trump signed into law the Bipartisan Budget Act of 2018 (BBA), which incorporates provisions from the Creating High-Quality Results and Outcomes Necessary to Improve Chronic (CHRONIC) Care Act of 2017 and improves access to telehealth services in Medicare Advantage. Pub. L. No. 115-123. Among other provisions impacting Medicare Advantage Organizations (MAOs), the BBA authorizes MAOs to offer additional telehealth benefits as basic benefits beyond original Medicare (Part A and Part B) limitations. Id. at Div. E., Title III, Subtitle C, § 50323.