Key Takeaways from FDA’s Framework for Real-World Evidence for Pharmaceuticals

On December 7, FDA published the much-anticipated “Framework for FDA’s Real-World Evidence Program” for drugs and biological products (the “Framework”).  In a statement announcing the Framework, Commissioner Gottlieb recognized the opportunities and challenges of using real-world data (“RWD”) and real-world evidence (“RWE”) to enhance regulatory decision-making and noted that leveraging this information is “a top strategic priority for the FDA.”  FDA opened a docket for public comments on the Framework through February 5, 2019.

The Framework focuses in particular on the use of RWE to support regulatory decisions about effectiveness.  The agency outlines three considerations that will guide its overall RWE Program and inform the agency’s assessment of individual drug applications.  The Framework also offers background on the agency’s previous use and current initiatives with respect to RWE and related topics, such as innovative clinical trial designs.  This blog post provides an overview of FDA’s proposal and highlights a few initial takeaways noted by Covington’s Digital Health team.


NICE adopts evidence standards for the development and assessment of digital health technologies (DHTs)

The UK’s National Institute for Health and Care Excellence (NICE) has recently published an evidence standards framework for DHTs (the Standards), available here.  It did so through a working group led by NHS England, but supported by representatives from Public Health England, MedCity and DigitalHealth.London.

The Standards cover DHTs, such as apps, programs and software – whether standalone or combined with other products like medical devices or diagnostic tests – intended for use within the UK’s health and care system.  They seek to address some of the challenges faced both by companies developing DHTs and by those within the UK healthcare system who commission and deploy these new technologies.  Both sides need guidance on the criteria and evidence used to demonstrate and assess the performance of DHTs and to measure their cost impact, so that all stakeholders assess these new technologies consistently.

The Standards classify DHTs into three tiers by function.  Tier 1, the lowest, comprises DHTs that provide services to the health and social care system but have no measurable patient outcomes.  Tier 2 comprises DHTs that provide information, resources or activities about a condition or about general health and lifestyle.  Tier 2 also includes DHTs that perform simple monitoring of general health using fitness wearables and simple symptom-measuring devices, as well as DHTs that allow two-way communication.

The third tier is split in two.  Tier 3a includes DHTs intended to facilitate preventative behaviour change addressing public health issues like smoking, alcohol, sexual health, eating, sleeping and exercise, as well as DHTs that allow people to self-manage a condition.  Tier 3b includes DHTs that guide treatment, e.g., by performing calculations that affect treatment, diagnosis or care, and DHTs that diagnose conditions, including those involved in active monitoring of a specified condition.

For each tier, the Standards provide guidance on the evidence required to demonstrate effectiveness or performance.  Unsurprisingly, the lower the tier, the lower the evidentiary burden required to demonstrate performance, reliability and accuracy.  In all cases the Standards set out a “minimum evidence standard” and a “best practice standard.”  At tier 1, “a plausible mode of action that is viewed as useful and relevant” by those in the relevant field may suffice as the minimum evidence required.  At tier 3b, the best practice standard is a “high-quality randomized controlled study or studies done in a setting relevant to the UK health and social care system, comparing the DHT with a relevant comparator and demonstrating consistent benefit including in clinical outcomes to the target population…”

From an economic impact perspective, NICE offers some guidance based on its current experiences of digital health offerings and its experience in evaluating other medical technologies, such as devices and diagnostics.  Again, NICE uses a tier-based approach, but one based on whether the DHT presents a low or high financial risk to a payer or commissioner.  For low financial risk DHTs, a simple budget impact analysis may suffice. For high-risk, publicly funded DHTs, an estimated incremental cost-effectiveness ratio (ICER) or some other formal health economic assessment may be necessary.
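The ICER mentioned above has a simple arithmetic core: the additional cost of a technology divided by its additional health effect, typically measured in quality-adjusted life years (QALYs).  The sketch below is purely illustrative – the `icer` helper and the figures are invented, not drawn from NICE guidance:

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: additional cost per
    additional unit of health effect (e.g. per QALY gained)."""
    delta_cost = cost_new - cost_old
    delta_effect = effect_new - effect_old
    if delta_effect == 0:
        # With no incremental effect the ratio is undefined;
        # a formal assessment would handle dominance cases separately.
        raise ValueError("no incremental effect; ICER is undefined")
    return delta_cost / delta_effect

# Hypothetical DHT costing 500 more per patient than standard care
# while adding 0.05 QALYs: roughly 10,000 per QALY gained.
ratio = icer(cost_new=1500, cost_old=1000, effect_new=0.85, effect_old=0.80)
print(round(ratio))
```

A payer or commissioner would then compare the ratio against a willingness-to-pay threshold; the Standards leave the choice and depth of method to the formal health economic assessment.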

NICE and the DHT working group intend to release further educational, case study and other supporting resources in early 2019.

IoT Update: The UK Government’s Response to Centre for Data Ethics and Innovation Consultation

On 20 November 2018, the UK government published its response (the “Response”) to the June 2018 consultation (the “Consultation”) regarding the proposed new Centre for Data Ethics and Innovation (“DEI”). First announced in the UK Chancellor’s Autumn 2017 Budget, the DEI will identify measures needed to strengthen the way data and AI are used and regulated, advising on how to address potential gaps in regulation and outlining best practices in the area. The DEI is described as the first of its kind globally, and represents an opportunity for the UK to take the lead in the debate on how data is regulated.

Significant FDA Digital Health Policy Development for Prescription Drug Sponsors

As previewed by Commissioner Gottlieb several months ago (see our earlier post here), FDA published a notice in the Federal Register on November 20, 2018, to propose a new framework for “prescription drug-use-related software.” The Agency defines this digital health category widely as software disseminated by a prescription drug sponsor for use with the sponsor’s prescription drug(s). Last spring, the Commissioner stated that FDA would be seeking input “on how to support the development of digital health tools that are included as part of approved drugs.”  The goal in establishing the framework, Gottlieb stated, would be “to develop an efficient pathway for the review and approval of digital health tools as part of drug review, so that these tools reach their full potential to help us treat illness and disease, and encourage synergies between software and therapeutics that meet FDA’s gold standard for safety and effectiveness.”

This policy development is significant, not only because it is one of CDER’s first policy statements on digital health associated with pharmaceuticals (see a few of our earlier posts about pharma-related digital health here and here), but also because it implicates a broad range of information that could be made available by prescription drug sponsors through software used with their products. We encourage prescription drug sponsors with any interest in providing digital health solutions, including through collaborations, to review the Federal Register notice and consider submitting comments to FDA.

Here are a few key takeaways from FDA’s notice:

  • Under the proposed framework, software with the same drug-related functionalities will be subject to different regulatory approaches by FDA, depending on the developer of the software. FDA will apply the proposed framework to prescription drug-use-related software developed by or on behalf of pharmaceutical manufacturers, and a different approach to drug-related software developed “independently” by third-party software developers and other entities that are not prescription drug sponsors.
  • It is unclear from the notice how the proposed framework, including the evidentiary standards described in the Federal Register notice, will align with other FDA initiatives such as the use of real-world evidence for drug development and the pre-certification program (see our earlier post here).
  • An important question for prescription drug sponsors in particular is whether the proposed framework will encourage continued digital health innovation, including through collaborations, or whether FDA’s proposal will create challenges that may discourage advances in this area.


UK Government publishes new policy paper outlining vision for digitizing health care and becoming a global leader in healthtech

On 17 October, the UK Government’s Department of Health and Social Care (DHSC) published a policy paper entitled “The future of healthcare: our vision for digital, data and technology in health and care” (the Policy Paper). The Policy Paper outlines the DHSC’s vision to use technology across the health and care system, from “getting the basics right”, to the UK’s “chance to lead the world on healthtech”, and “ultimate objective [of] the provision of better care and improved health outcomes for people in England”.

The DHSC acknowledges that there are “many real challenges” including the presence of legacy technology and commercial arrangements, complex organizational and delivery structures, risk-averse culture, limited resources to invest, and a critical need to build and maintain public trust.

To achieve its objectives, the DHSC has set out in the Policy Paper four ‘guiding principles’ to operate by:

  1. User need – including designing services around the needs of different users (whether the public, clinicians or other staff) to help more people reach the right outcome, and to reduce costs by cutting the resources required to resolve issues.
  2. Privacy and security – such as ensuring the digital architecture of the healthcare system is underpinned by “clear”, “commonly understood” data and cyber security standards, guidance and frameworks, to be mandated across the NHS, “secure by default” and based on the General Data Protection Regulation (GDPR).
  3. Interoperability and openness – to help address current “poor” interoperability via, for instance, open data and technology standards that adhere to clinical data standards: the DHSC’s intention is that anyone writing code for the NHS’s use will know these standards before they start, and that technology can be used to provide more granular detail to help fight diseases and treat illnesses.
  4. Inclusion – to account for users with different physical, mental health, social, cultural and learning needs, low digital literacy or limited accessibility, and the intention that enabling those who can use digital services will free up resources for those with greater health needs who cannot use them.

Building on the guiding principles, the Policy Paper sets out a series of accompanying “architectural principles”, and four key priorities: infrastructure, digital services, innovation, and skills and culture.

Infrastructure

This Policy Paper priority builds particularly upon the first three principles outlined above. For instance, with respect to patients’ data, the Policy Paper concedes that the ability to share records between different care levels (e.g. hospitals, general practitioners, and pharmacies) is inconsistent. The Policy Paper also notes that contracts in place for current systems often do not adequately specify the standards of interoperability, usability and continual improvement required. 2017’s disruptive “WannaCry” cyber-attack (now acknowledged by the DHSC, citing a National Audit Office report, to have actually been “relatively unsophisticated”) aptly illustrates the importance of data safeguarding and cyber security standards.

To address these concerns, and in order to “put in place the right infrastructure” and “buy the best technology”, the DHSC outlines some of the steps it is taking to build upon the existing safeguards in legislation and security standards, such as via the “Initial code of conduct for data-driven health care and technology” (the Code, previously discussed here), and the draft “NHS digital, data and technology standards framework” (the Framework) published alongside the Policy Paper. The DHSC also refers to the need for “quick, efficient procurement processes, small and short contracts, and clear documentation…[of] datasets and systems” and to avoid “building our own versions of … commodity services” (e.g. email clients and laptops).

Digital Services

The Policy Paper outlines some of the public-facing digital services already provided (e.g. “NHS.uk” and the “NHS apps library”) and in development (e.g. “NHS Login”). The Policy Paper notes that digital services are also needed for staff across the health and care sector to prevent “wast[ing] vital time logging on to systems, or transcribing clinical data by hand or over the phone”. Services built, bought or commissioned “should start with user needs” and in instances where user needs are unique and industry may not necessarily obtain the economies of scale they need to invest, the DHSC wants to be empowered to build its own digital services in accordance with the government’s Digital Service Standard.

Innovation

The DHSC intends to put in place a framework allowing “researchers, innovators and technology companies to thrive, quickly access support and guidance, and develop products that meet user needs”, and “support the uptake and adoption of the best of those services”. To create a healthtech “ecosystem”, the Policy Paper outlines the DHSC’s intention to work alongside experts to put in place standards (addressing evidence, privacy, cyber security and access to data), communicate user needs, support access to finance, encourage NHS/Industry collaboration, and improve the procurement process (e.g. reducing the burdens for small companies trying to sell to the NHS, and building on the government’s “G-Cloud framework” on its digital marketplace). The focus will be to “simplify the institutional landscape for support for healthtech”, remove barriers to market entry and encourage innovation.

The DHSC also intends to introduce a ‘healthtech regulatory sandbox’ to “test, iterate and de-risk the most promising innovations”, working alongside the Information Commissioner’s Office (ICO), National Data Guardian, National Institute for Health and Care Excellence (NICE), and other regulators.

The Policy Paper also highlights the potential of Artificial Intelligence (AI) to improve diagnosis and care, and the need to enforce the high standards of good practice for the development of these emerging technologies (such as via continued development of the Code).

Skills and culture

The Policy Paper specifies a need across the health and social care system to both recruit and retain specialist professionals (such as data scientists and analytics personnel) who are skilled and well-resourced to make best use of data, while continuing to develop the skills of clinicians and staff already working in health and care services. The DHSC intends for leaders at every level to ensure their staff are trained to use data and technology in their work, and that all health and care organizations have board-level understanding of how data and technology drive their services and strategies.

There will also be a new ‘Healthtech Advisory Board’, comprising technology experts, clinicians and academics. The board will act as an “ideas hub” and report directly to the Secretary of State for Health and Social Care.


The UK Government is currently seeking feedback on the Policy Paper and draft Framework (a questionnaire on the Policy Paper is available here).

AI Update: Medical Software and Preemption

In light of the rapidly expanding field of medical software technology, and its recognition that traditional approval mechanisms for hardware-based medical devices may not be well suited to regulating such technology, FDA is piloting a new, streamlined regulatory approach for digital health technologies. The initiative, currently a “working model” and known as the Software Precertification Program, is meant to encourage the development of health and medical software, including potentially software using artificial intelligence.

As currently envisioned, the Precertification Program creates a voluntary, organization-based approach to FDA review of digital health software. FDA will pre-certify organizations as having a “culture of quality” based on FDA’s review of the software developer’s R&D and monitoring systems. Under the working model, pre-certified organizations could submit less information in premarket submissions to FDA than currently required or may qualify for an exemption from premarket review by FDA.

Although it is unknown what specific metrics will be assessed in FDA’s review of organizations in the Precertification Program, the agency has asserted that it will seek to measure Product Quality, Patient Safety, Clinical Responsibility, Cybersecurity Responsibility, and Proactive Culture. Each of these elements will be evaluated in an evidence-based manner and FDA will make a determination regarding certification. Certification status will also be subject to monitoring and revision based on real-world performance data.

FDA’s intent to certify or pre-clear organizations rather than individual products on these safety and effectiveness elements opens a new potential arena for product liability litigation surrounding medical devices. In particular, medical devices are currently governed by an express preemption scheme under which federal law preempts certain state laws that are “different from, or in addition to, any requirement” of federal law. Under that standard, certain lawsuits concerning the safety of a medical device may be preempted, including (1) state-law claims premised on an allegation of fraud on the FDA, and (2) state-law claims involving devices that require pre-market approval, except to the extent those claims simply argue for design or warning requirements that “parallel” federal mandates.

To the extent that lawsuits alleging injury from medical device software (e.g., misdiagnosis) are brought against software developers, resolution of those tort claims will almost invariably involve evaluation by finders of fact of the very elements that FDA intends to examine and pre-certify:  whether the software developer has developed, tested, and maintained the software in a fashion that will provide safe and effective patient care. Such suits may, therefore, seek to impose under state law requirements for a particular product that are “different from, or in addition to,” the requirements that FDA has imposed on the development organization as a whole in the pre-certification process. And although courts have not yet considered the applicability of organizational requirements versus product-level requirements in this context, imposing tort liability on software developers who have met FDA’s requirements and are compliant with ongoing oversight programs may disrupt the federal regulatory scheme in the same way that tort lawsuits regarding premarket approved medical devices would. The Supreme Court has previously recognized that such disruption is impermissible.

The outcome of this legal issue will likely depend in part on the methods by which FDA implements the Precertification Program — which are yet to be determined — and on the specificity of its evaluation of individual organizations. Nevertheless, developers should be aware that compliance with the Precertification Program, if and when it is implemented, may have benefits not only in the regulatory setting but also in future litigation down the road.

EMA publishes “A Common Data Model for Europe? – Why? Which? How?” Workshop Report

On 8 October, the European Medicines Agency (EMA) published a report (available here) setting out the progress it has made towards applying a common data model (CDM) in Europe. The EMA defines a CDM as “a mechanism by which raw data are standardized to a common structure, format and terminology independently from any particular study in order to allow a combined analysis across several databases/datasets”. The report follows an EMA-hosted workshop in December 2017 to examine the opportunities and challenges of developing a CDM.
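The EMA’s definition above is essentially a schema-mapping exercise: raw records from each source database are renamed and restructured into one shared format before combined analysis.  The sketch below is purely illustrative – the field names and mappings are invented, not taken from any actual CDM (such as OMOP) – but it shows the core idea:

```python
# Common schema every source is mapped into (field names invented).
COMMON_FIELDS = ("patient_id", "event_date", "diagnosis_code")

# Per-source mapping from local field names to the common schema.
SOURCE_MAPPINGS = {
    "registry_a": {"pid": "patient_id", "date": "event_date", "icd": "diagnosis_code"},
    "ehr_b": {"PatientRef": "patient_id", "VisitDate": "event_date", "Dx": "diagnosis_code"},
}

def to_common_model(source, record):
    """Rename a raw record's fields to the common structure so that
    datasets from different databases can be analysed together."""
    mapping = SOURCE_MAPPINGS[source]
    common = {mapping[k]: v for k, v in record.items() if k in mapping}
    missing = [f for f in COMMON_FIELDS if f not in common]
    if missing:
        # An incomplete record cannot take part in a combined analysis.
        raise ValueError(f"record lacks common fields: {missing}")
    return common

row = to_common_model("registry_a", {"pid": "123", "date": "2018-10-08", "icd": "E11"})
print(row)
```

In practice a CDM also standardises terminologies (e.g. mapping local diagnosis codes to a shared vocabulary), not just field names, which is where much of the harmonisation effort lies.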

The report acknowledges that ‘Real World Data’ (RWD) – data relating to patient health status or the delivery of health care that is routinely collected from sources other than clinical trials – has become an increasingly common source of evidence to support drug development and regulatory decision-making on medicines for human use in Europe. However, Europe currently has no pan-European data network, despite the wealth of data generated through various national healthcare systems that provide access for all. The multi-database studies currently performed are typically slow and still allow for substantial variability in how studies are conducted. Further, a growing number of innovative products no longer align with customary drug development pathways. This may create uncertainty about the data packages required for authorization, and tension between facilitating earlier access for patients with limited treatment options and the requirement for proactive, robust pharmacovigilance of medicines in wider clinical use across the product life cycle (the existing EMA Patient Registry Initiative addresses this need in part).

China Expands Regulations on e-Healthcare Issues

China continues to advance policy supporting e-healthcare services and resources.  On September 14, 2018, the National Health Commission (“NHC”) and the National Administration of Traditional Chinese Medicine (“NATCM”) publicly released three new rules on internet-based medical services and telemedicine.  These rules cover e-diagnosis (the “e-Diagnostic Rules”), internet-based hospitals (the “e-Hospital Rules”) and telemedicine services (the “Telemedicine Service Standard”) (collectively, the “e-Healthcare Rules”).[1]

Although the government issued a draft of these rules in 2017, the final e-Healthcare Rules appear to have been prompted by the Opinion on Improving the Development of “e-healthcare” Industry (“Opinion”) issued by China’s chief executive branch, the State Council, on April 25, 2018.  That Opinion requires enhancement and improvement of e-health services (including the application of artificial intelligence in the diagnostic process).

This blog entry focuses on key features of the e-Healthcare Rules.


ICO consults on privacy “regulatory sandbox”

Designing data-driven products and services in compliance with privacy requirements can be a challenging process.  Technological innovation enables novel uses of personal data, and companies designing new data-driven products must navigate new, untested, and sometimes unclear requirements of privacy laws, including the General Data Protection Regulation (GDPR).  These challenges are often particularly acute for companies providing products and services leveraging artificial intelligence technologies, or operating with sensitive personal data, such as digital health products and services.

Recognising some of the above challenges, the Information Commissioner’s Office (ICO) has commenced a consultation on establishing a “regulatory sandbox”.  The first stage is a survey to gather market views on how such a regulatory sandbox may work (Survey).  Interested organisations have until 12 October to reply.

The key feature of the regulatory sandbox is to allow companies to test ideas, services and business models without risk of enforcement and in a manner that facilitates greater engagement between industry and the ICO as new products and services are being developed.

The regulatory sandbox model has been deployed in other areas, particularly in the financial services sector (see here), including by the Financial Conduct Authority in the UK (see here).

Potential benefits of the regulatory sandbox include reducing regulatory uncertainty, enabling more products to be brought to market, and reducing the time of doing so, while ensuring appropriate protections are in place (see the FCA’s report on its regulatory sandbox here for the impact it has had on the financial services sector, including lessons learned).

The ICO indicated earlier this year that it intends to launch the regulatory sandbox in 2019 and will focus on AI applications (see here).

Further details on the scope of the Survey are summarised below.

Continue Reading

UK Government publishes “Initial code of conduct for data-driven health and care technology” for consultation

On 5 September, in response to the opportunities presented by data-driven innovations, apps, clinician decision support tools, electronic health care records and advances in technology such as artificial intelligence, the UK Government published a draft “Initial code of conduct for data-driven health and care technology” (Code) for consultation.  The Code is designed to supplement the Data Ethics Framework, published by the Department for Digital, Culture, Media and Sport on 30 August, which guides appropriate data use in the public sector.  The Code demonstrates the UK Government’s willingness to support data sharing that takes advantage of new technologies to improve outcomes for patients and accelerate medical breakthroughs, while balancing key privacy principles enshrined in the GDPR and emerging issues such as the validation and monitoring of algorithm-based technologies.  For parties considering data-driven digital health projects, the Code provides a framework to help conceptualise a commercial strategy before engaging with legal teams.

The Code contains:

  • a set of ten principles for safe and effective digital innovations; and
  • five commitments from Government to ensure the health and care system is ready and able to adopt new technologies at scale,

each of which is listed further below.

While the full text of the Code will be of interest to all those operating in the digital health space, the following points are of particular note:

  • the UK Government recognises the “immense promise” that data sharing has for improving the NHS and social care system as well as for developing new treatments and medical breakthroughs;
  • the UK Government is committed to the safe use of data to improve outcomes of patients;
  • the Code intends to provide the basis for the health and care system and suppliers of digital technology to enter into commercial terms in which the benefits of the partnerships between technology companies and health and care providers are shared fairly (see further below); and
  • given that artificial intelligence requires large datasets to function, two key challenges arise: (i) these datasets must be defined and structured in accordance with interoperable standards, and (ii) from an ethical and legal standpoint, people must be able to trust that data is used appropriately, safely and securely, as the benefits of data sharing rely upon public confidence in the appropriate and effective use of data.

The Code sets out a number of factors to consider before engaging with legal teams, to help define a commercial strategy for a data-driven digital health project.  These factors include: the scope of the project, term, value, compliance obligations and responsibilities, IP, liability and risk allocation, transparency, management of potential bias in algorithms, the ability of the NHS to add value, and the respective roles of the parties (which will require thinking beyond traditional research collaboration models).

Considering how value is created and realised is a key aspect of any data-driven digital health project.  The Code identifies a number of potential options: simple royalties, reduced payments for commercial products, equity shares in businesses, or improved datasets – but there is no simple or single answer.  Members of Covington’s digital health group have advised on numerous data-driven collaborations in the healthcare sector.  Covington recently advised UK healthcare technology company Sensyne Health plc on pioneering strategic research and data processing agreements with three NHS Trust partners. Financial returns generated by Sensyne Health are shared with its NHS Trust partners via equity ownership in Sensyne Health and a share of royalties (further details are available here).

The UK Government also intends to conduct a formal review of the regulatory framework and to assess the commercial models used in technology partnerships, in order to address issues such as bias, transparency, liability and accountability.

The UK Government is currently consulting on the Code (a questionnaire on the Code is available here) and intends to publish a final version of the Code in December.

