NMPA Releases Draft Good Manufacturing Practice Appendix on Standalone Software

On January 3, 2019, the National Medical Products Administration (“NMPA”) published a draft standalone software appendix to the medical device good manufacturing practice (“Draft Standalone Software GMP” or “Draft Appendix”) for public comment (available here).  Comments are due on January 30, 2019.

China revised its medical device GMP in 2014; the revised GMP applies to all classes of devices, regardless of whether they are imported or made in China.  Subsequently, NMPA added various appendices (fulu) to articulate special requirements for certain types of devices, including sterile, implantable, and in vitro diagnostic devices.  The Draft Appendix sets out proposed special requirements for software that falls under the definition of a medical device.

In China, the definition of a medical device covers software that either itself constitutes a device (i.e., standalone software) or is an accessory/component of a device (i.e., component software).  The Draft Standalone Software GMP expressly applies to standalone software and states that it applies “by reference” (mutatis mutandis) to component software.  If finalized, the Draft Standalone Software GMP would take effect on an as-yet undetermined date in 2020.

The Draft Appendix is a relatively simple document with four main sections:

  • scope and general principles of the Draft Appendix;
  • special requirements for various aspects of the manufacturing and post-market processes (see below);
  • definitions of key terms; and
  • miscellaneous provisions.

Key features of the Draft Standalone Software GMP include the following:

Continue Reading

Are Wearables Medical Devices Requiring a CE-Mark in the EU?

Wearable watches that help consumers obtain a better understanding of their eating patterns; wearable clothes that send signals to treating physicians; smart watches: they are but a few examples of the increasingly available and increasingly sophisticated “wearables” on the EU market. These technologies are an integral part of many people’s lives, and in some cases allow healthcare professionals to follow up on the condition or habits of their patients, often in real time. How do manufacturers determine which wearables qualify as medical devices? How do they assess whether their devices need a CE-mark? Must they differentiate between the actual “wearable” and the hardware or software that accompanies it? In this short contribution, we briefly analyze some of these questions. The article first examines what “wearables” are, and when they qualify as a medical device under current and future EU rules. It then addresses the relevance of the applicability of EU medical devices rules to these products. The application of these rules is often complex and highly fact-specific.

Continue Reading

EU Working Group Publishes Draft Guidance on AI Ethics

On 18 December 2018, the EU High-Level Expert Group on Artificial Intelligence (the “AI HLEG”) published new draft guidance on “AI Ethics” (the “guidance”).  The AI HLEG is a European Commission-backed working group made up of representatives from industry, academia and NGOs, and was formed as part of the Commission’s ongoing work to develop EU policy responses to the development of AI technologies and the challenges and new opportunities they pose.  Stakeholders are invited to comment on the draft through the European AI Alliance before it is finalized in March 2019.

The guidance recognizes the potential benefits of AI technologies for Europe, but also stresses that AI must be developed and implemented with a “human-centric approach” that results in “Trustworthy AI”. The guidance then explains in detail the concept of “Trustworthy AI” and the issues stakeholders should navigate in order to achieve it.  A more detailed summary of the guidance is set out below.

This guidance is not binding, but it is likely to influence EU policymakers as they consider whether and how to legislate in the AI space going forward. The AI HLEG also envisages that the final version of the guidance, due in March 2019, will include a mechanism allowing stakeholders to voluntarily endorse its principles.  The guidance also states that the AI HLEG will consider making legislative recommendations in its separate deliverable on “Policy & Investment Recommendations,” due in May 2019.

Continue Reading

EESC supports the digital transformation of the EU healthcare sector, emphasising data access and ownership as ‘crucial’ to the process

On 6 December 2018, the European Economic and Social Committee (EESC) published an opinion (“Opinion”) addressing the European Commission’s recent Communication on the digital transformation of health and care in the Digital Single Market (issued 25 April 2018).

The EESC is an advisory body of the European Union (“EU”) comprising representatives of workers’ and employers’ organisations and other interest groups.  It issues opinions to the European Commission, the Council of the EU, and the European Parliament. Although not legally binding, these opinions may serve to inform the legislative process.

This EESC Opinion voices strong support for the Commission’s vision to transform the healthcare sector across the EU through digitalisation and technological innovation. It lists a variety of benefits that the EESC believes will accrue from this modernisation effort – including more time for patient care, greater health literacy, new digital tools, and improved interoperability of systems. But hand-in-hand with this endorsement, the EESC also raises several issues that it believes lawmakers should address to ensure that people remain “at the centre of care” throughout the process.

First, it notes that the right to access one’s health data and the ability to control the onward sharing of that data should be at the core of the digital health transformation. Here, the Opinion cites the EU’s General Data Protection Regulation (“GDPR”) and states that its rules must be adhered to when designing and implementing new technologies in this sector.

Furthermore, the Opinion presents a discussion on the ownership of health data, offering a list of questions to consider when examining this concept:

  • who owns the data?
  • who has the right to use the data?
  • under what conditions can other service providers use the data?
  • can the user freely use the data?

Notably, the EESC maintains that the original health data of each user “must be regarded as an original product generated by [that user]” which merits the protections of intellectual property law. Here, the Opinion calls for a “right to free copying” of data generated on digital health platforms, which can then be reused and re-aggregated in other services and algorithms as the individual may see fit. The Opinion argues that such a right would not only enable people to take back their digital identity, but also create a more robust freedom of choice in the marketplace of digital health platforms that would spur competitive innovation. It will certainly be interesting to see how (and whether) this concept of one’s health data as an “original product” will evolve over time.

The Opinion closes by considering the challenges and opportunities that await on the horizon of digital health transformation. These include not only the great promise of enabling technologies (such as 5G) and the possibility to rebalance the “socioeconomic asymmetries” of a data-sharing economy, but also the ethical issues of data mining and automated decision-making, as well as the ever-present risks of cybersecurity threats.

Key Takeaways from FDA’s Framework for Real-World Evidence for Pharmaceuticals

On December 7, 2018, FDA published the much-anticipated “Framework for FDA’s Real-World Evidence Program” for drugs and biological products (the “Framework”).  In a statement announcing the Framework, Commissioner Gottlieb recognized the opportunities and challenges of using real-world data (“RWD”) and real-world evidence (“RWE”) to enhance regulatory decision-making and noted that leveraging this information is “a top strategic priority for the FDA.”  FDA opened a docket for public comments on the Framework through February 5, 2019.

The Framework focuses in particular on the use of RWE to support regulatory decisions about effectiveness.  The agency outlines three considerations that will guide its overall RWE Program and inform the agency’s assessment of individual drug applications.  The Framework also offers background on the agency’s previous use and current initiatives with respect to RWE and related topics, such as innovative clinical trial designs.  This blog post provides an overview of FDA’s proposal and highlights a few initial takeaways noted by Covington’s Digital Health team.

Continue Reading

NICE adopts evidence standards for the development and assessment of digital health technologies (DHTs)

The UK’s National Institute for Health and Care Excellence (NICE) has recently published an evidence standards framework for DHTs (the Standards), available here.  It did so through a working group led by NHS England and supported by representatives from Public Health England, MedCity and DigitalHealth.London.

The Standards cover DHTs, such as apps, programs and software – whether standalone or combined with other products like medical devices or diagnostic tests – intended for use within the UK’s health and care system.  They seek to address some of the challenges faced both by companies developing DHTs and by those within the UK healthcare system who commission and deploy these new technologies.  Both sides need guidance on the criteria and evidence used to demonstrate and assess the performance of DHTs and to measure their cost impact, so that all stakeholders assess these new technologies consistently.

The Standards classify DHTs into three tiers by function. The lowest tier, tier 1, comprises DHTs that have no measurable patient outcomes but provide services to the health and social care system.  Tier 2 comprises DHTs that provide information, resources or activities about a condition or about general health and lifestyle.  Tier 2 also includes DHTs that perform simple monitoring of general health using fitness wearables and simple symptom-measuring devices, as well as DHTs that allow two-way communication.

The third tier is split into tier 3a and tier 3b. Tier 3a includes DHTs intended to facilitate preventative behaviour change to address public health issues like smoking, alcohol, sexual health, eating, sleeping and exercise; it also covers DHTs that allow people to self-manage a condition.  Tier 3b includes DHTs that guide treatment, e.g., those that perform calculations that impact treatment, diagnosis or care, and DHTs that diagnose conditions, including those involved in active monitoring of a specified condition.

For each tier, the Standards provide guidance on the evidence required to demonstrate effectiveness or performance.  Obviously, the lower the tier, the lower the evidentiary burden required to demonstrate performance, reliability and accuracy.  In all cases the Standards set out a “minimum evidence standard” and a “best practice standard.”  At tier 1, “a plausible mode of action that is viewed as useful and relevant” by those in the relevant field may suffice as the minimum evidence required. At tier 3b, the best practice standard is a “high-quality randomized controlled study or studies done in a setting relevant to the UK health and social care system, comparing the DHT with a relevant comparator and demonstrating consistent benefit including in clinical outcomes to the target population…”

From an economic impact perspective, NICE offers some guidance based on its current experiences of digital health offerings and its experience in evaluating other medical technologies, such as devices and diagnostics.  Again, NICE uses a tier-based approach, but one based on whether the DHT presents a low or high financial risk to a payer or commissioner.  For low financial risk DHTs, a simple budget impact analysis may suffice. For high-risk, publicly funded DHTs, an estimated incremental cost-effectiveness ratio (ICER) or some other formal health economic assessment may be necessary.
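For context (this is the standard health-economics definition rather than wording from NICE’s Standards), an ICER is conventionally calculated as the additional cost of the DHT relative to a comparator, such as standard care, divided by the additional health benefit it delivers, with benefit often measured in quality-adjusted life years:

ICER = (cost of DHT − cost of comparator) ÷ (health effect of DHT − health effect of comparator)

The lower the ICER, the more cost-effective the DHT appears relative to that comparator.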

NICE and the DHT working group intend to release further educational, case study and other supporting resources in early 2019.

IoT Update: The UK Government’s Response to Centre for Data Ethics and Innovation Consultation

On 20 November 2018, the UK government published its response (the “Response”) to the June 2018 consultation (the “Consultation”) regarding the proposed new Centre for Data Ethics and Innovation (“DEI”). First announced in the UK Chancellor’s Autumn 2017 Budget, the DEI will identify measures needed to strengthen the way data and AI are used and regulated, advising on how to address potential gaps in regulation and outlining best practices in the area. The DEI is described as being the first of its kind globally, and represents an opportunity for the UK to take the lead in the debate on how data is regulated.

Continue Reading

Significant FDA Digital Health Policy Development for Prescription Drug Sponsors

As previewed by Commissioner Gottlieb several months ago (see our earlier post here), FDA published a notice in the Federal Register on November 20, 2018, to propose a new framework for “prescription drug-use-related software.” The Agency defines this digital health category widely as software disseminated by a prescription drug sponsor for use with the sponsor’s prescription drug(s). Last spring, the Commissioner stated that FDA would be seeking input “on how to support the development of digital health tools that are included as part of approved drugs.”  The goal in establishing the framework, Gottlieb stated, would be “to develop an efficient pathway for the review and approval of digital health tools as part of drug review, so that these tools reach their full potential to help us treat illness and disease, and encourage synergies between software and therapeutics that meet FDA’s gold standard for safety and effectiveness.”

This policy development is significant, not only because it is one of CDER’s first policy statements on digital health associated with pharmaceuticals (see a few of our earlier posts about pharma-related digital health here and here), but also because it implicates a broad range of information that could be made available by prescription drug sponsors through software used with their products. We encourage prescription drug sponsors with any interest in providing digital health solutions, including through collaborations, to review the Federal Register notice and consider submitting comments to FDA.

Here are a few key takeaways from FDA’s notice:

  • Under the proposed framework, software with the same drug-related functionalities will be subject to different regulatory approaches by FDA, depending on the developer of the software. FDA will apply the proposed framework to prescription drug-use-related software developed by or on behalf of pharmaceutical manufacturers, and a different approach to drug-related software developed “independently” by third-party software developers and other entities that are not prescription drug sponsors.
  • It is unclear from the notice how the proposed framework, including the evidentiary standards described in the Federal Register notice, will align with other FDA initiatives such as the use of real-world evidence for drug development and the pre-certification program (see our earlier post here).
  • An important question for prescription drug sponsors in particular is whether the proposed framework will encourage continued digital health innovation, including through collaborations, or whether FDA’s proposal will create challenges that may discourage advances in this area.

Continue Reading

UK Government publishes new policy paper outlining vision for digitizing health care and becoming a global leader in healthtech

On 17 October 2018, the UK Government’s Department of Health and Social Care (DHSC) published a policy paper entitled “The future of healthcare: our vision for digital, data and technology in health and care” (the Policy Paper). The Policy Paper outlines the DHSC’s vision for the use of technology across the health and care system, from “getting the basics right”, to the UK’s “chance to lead the world on healthtech”, and its “ultimate objective [of] the provision of better care and improved health outcomes for people in England”.

The DHSC acknowledges that there are “many real challenges” including the presence of legacy technology and commercial arrangements, complex organizational and delivery structures, risk-averse culture, limited resources to invest, and a critical need to build and maintain public trust.

To achieve its objectives, the DHSC has set out in the Policy Paper four ‘guiding principles’ to operate by:

  1. User need – including designing services around the needs of different users (whether the public, clinicians or other staff), to help more people reach the intended outcome and to reduce costs by cutting the resources required to resolve issues.
  2. Privacy and security – such as ensuring the digital architecture of the healthcare system is underpinned by “clear”, “commonly understood” data and cyber security standards, guidance and frameworks, to be mandated across the NHS, “secure by default” and based on the General Data Protection Regulation (GDPR).
  3. Interoperability and openness – to help address current “poor” interoperability via, for instance, open data and technology standards that adhere to clinical data standards. The DHSC’s intention is that anyone writing code for the NHS’s use will know these standards before they start, and that technology can be used to provide more granular detail to help fight diseases and treat illnesses.
  4. Inclusion – to account for users with different physical, mental health, social, cultural and learning needs, low digital literacy or limited access to technology, with the intention that uptake of digital services by those able to use them frees up resources for those with greater health needs who cannot.

Building on the guiding principles, the Policy Paper sets out a series of accompanying “architectural principles”, and four key priorities: infrastructure, digital services, innovation, and skills and culture.

Infrastructure

This Policy Paper priority builds particularly upon the first three principles outlined above. For instance, with respect to patients’ data, the Policy Paper concedes that the ability to share records between different care settings (e.g. hospitals, general practitioners, and pharmacies) is inconsistent. The Policy Paper also notes that contracts in place for current systems often do not adequately specify the standards of interoperability, usability and continual improvement required. The disruptive 2017 “WannaCry” cyber-attack (now acknowledged by the DHSC, citing a National Audit Office report, to have been “relatively unsophisticated”) aptly illustrates the importance of data safeguarding and cyber security standards.

To address these concerns, and in order to “put in place the right infrastructure” and “buy the best technology”, the DHSC outlines some of the steps it is taking to build upon the existing safeguards in legislation and security standards, such as via the “Initial code of conduct for data-driven health care and technology” (the Code, previously discussed here), and the draft “NHS digital, data and technology standards framework” (the Framework) published alongside the Policy Paper. The DHSC also refers to the need for “quick, efficient procurement processes, small and short contracts, and clear documentation…[of] datasets and systems” and the need to avoid “building our own versions of … commodity services” (e.g. email clients and laptops).

Digital Services

The Policy Paper outlines some of the public-facing digital services already provided (e.g. “NHS.uk” and the “NHS apps library”) and in development (e.g. “NHS Login”). The Policy Paper notes that digital services are also needed for staff across the health and care sector, to prevent them “wast[ing] vital time logging on to systems, or transcribing clinical data by hand or over the phone”. Services built, bought or commissioned “should start with user needs”, and where user needs are unique and industry may not obtain the economies of scale needed to justify investment, the DHSC wants to be empowered to build its own digital services in accordance with the government’s Digital Service Standard.

Innovation

The DHSC intends to put in place a framework allowing “researchers, innovators and technology companies to thrive, quickly access support and guidance, and develop products that meet user needs”, and to “support the uptake and adoption of the best of those services”. To create a healthtech “ecosystem”, the Policy Paper outlines the DHSC’s intention to work alongside experts to put in place standards (addressing evidence, privacy, cyber security and access to data), communicate user needs, support access to finance, encourage NHS/industry collaboration, and improve the procurement process (e.g. reducing the burden on small companies trying to sell to the NHS, and building on the government’s “G-Cloud framework” on its digital marketplace). The focus will be to “simplify the institutional landscape for support for healthtech”, remove barriers to market entry and encourage innovation.

The DHSC also intends to introduce a ‘healthtech regulatory sandbox’ to “test, iterate and de-risk the most promising innovations”, working alongside the Information Commissioner’s Office (ICO), National Data Guardian, National Institute for Health and Care Excellence (NICE), and other regulators.

The Policy Paper also highlights the potential of Artificial Intelligence (AI) to improve diagnosis and care, and the need to enforce high standards of good practice in the development of these emerging technologies (such as via continued development of the Code).

Skills and culture

The Policy Paper specifies a need across the health and social care system to both recruit and retain specialist professionals (such as data scientists and analytics personnel) who are skilled and well-resourced to make best use of data, while continuing to develop the skills of clinicians and staff already working in health and care services. The DHSC intends for leaders at every level to ensure their staff are trained to use data and technology in their work, and that all health and care organizations have board-level understanding of how data and technology drive their services and strategies.

There will also be a new ‘Healthtech Advisory Board’, comprising technology experts, clinicians and academics. The board will serve as an “ideas hub” and report directly to the Secretary of State for Health and Social Care.

The UK Government is currently seeking feedback on the Policy Paper and draft Framework (a questionnaire on the Policy Paper is available here).

AI Update: Medical Software and Preemption

In light of the rapidly expanding field of medical software technology, and recognizing that traditional approval mechanisms for hardware-based medical devices may not be well suited to regulating such technology, FDA is piloting a new, streamlined regulatory approach for digital health technologies. The initiative, currently a “working model” and known as the Software Precertification Program, is meant to encourage the development of health and medical software, including potentially software using artificial intelligence.

As currently envisioned, the Precertification Program creates a voluntary, organization-based approach to FDA review of digital health software. FDA will pre-certify organizations as having a “culture of quality” based on FDA’s review of the software developer’s R&D and monitoring systems. Under the working model, pre-certified organizations could submit less information in premarket submissions to FDA than currently required or may qualify for an exemption from premarket review by FDA.

Although it is unknown what specific metrics will be assessed in FDA’s review of organizations in the Precertification Program, the agency has asserted that it will seek to measure Product Quality, Patient Safety, Clinical Responsibility, Cybersecurity Responsibility, and Proactive Culture. Each of these elements will be evaluated in an evidence-based manner and FDA will make a determination regarding certification. Certification status will also be subject to monitoring and revision based on real-world performance data.

FDA’s intent to certify or pre-clear organizations rather than individual products on these safety and effectiveness elements opens a new potential arena for product liability litigation surrounding medical devices. In particular, medical devices are currently governed by an express preemption scheme under which federal law preempts certain state laws that are “different from, or in addition to, any requirement” of federal law. Under that standard, certain lawsuits concerning the safety of a medical device may be preempted, including (1) state-law claims premised on an allegation of fraud on the FDA, and (2) state-law claims involving devices that require pre-market approval, except to the extent those claims simply argue for design or warning requirements that “parallel” federal mandates.

To the extent that lawsuits alleging injury from medical device software (e.g., misdiagnosis) are brought against software developers, resolution of those tort claims will almost invariably involve evaluation by finders of fact of the very elements that FDA intends to examine and pre-certify:  whether the software developer has developed, tested, and maintained the software in a fashion that will provide safe and effective patient care. Such suits may, therefore, seek to impose under state law requirements for a particular product that are “different from, or in addition to,” the requirements that FDA has imposed on the development organization as a whole in the pre-certification process. And although courts have not yet considered the applicability of organizational requirements versus product-level requirements in this context, imposing tort liability on software developers who have met FDA’s requirements and are compliant with ongoing oversight programs may disrupt the federal regulatory scheme in the same way that tort lawsuits regarding premarket approved medical devices would. The Supreme Court has previously recognized that such disruption is impermissible.

The outcome of this legal issue will likely depend in part on the methods by which FDA implements the Precertification Program — which are yet to be determined — and on the specificity of its evaluation of individual organizations. Nevertheless, developers should be aware that compliance with the Precertification Program, if and when it is implemented, may have benefits not only in the regulatory setting but also in future litigation down the road.
