Digital Health Checkup (Part Three): Key Questions About AI, Data Privacy, and Cybersecurity

In the third installment of our series, Covington’s global cross-practice Digital Health team considers some additional key questions about Artificial Intelligence (AI), data privacy, and cybersecurity that companies across the life sciences and technology sectors should be asking to address the regulatory and commercial pieces of the complex digital health puzzle.

AI, Data Privacy, and Cybersecurity

1. Which data privacy and security rules apply?
There currently is no specific law or regulation governing the collection, use, or disclosure of data for AI or the cybersecurity of AI technologies. As a result, digital health companies must assess how existing privacy and security rules will be interpreted and applied in the context of AI.

The applicable laws and regulations governing data privacy and security depend on a variety of criteria, including where you are located and where you are offering the AI technology.

Here are a few regional considerations for AI in the U.S. and data privacy and cybersecurity in the EU and China:

United States
Because large datasets of information typically are necessary to train and test AI technologies, digital health companies that are developing or utilizing AI should consider whether individuals receive adequate notice and are provided appropriate choices over how their information is collected, used, and shared for such purposes. For example, a person might have different expectations about how their information is being collected and used depending on whether they are communicating with a digital health AI assistant provided by a hospital, pharmaceutical company, or running shoe manufacturer. Consequently, providers of such technologies should consider clearly and prominently explaining who is operating the assistant and the operator’s information practices.

Depending on whether and to what extent you have a business relationship with or obtain information from a healthcare provider or other covered entity in order to develop or implement your AI, you may need to comply with the more specific privacy and data security requirements contained in HIPAA and in state medical privacy laws such as those in California and Texas.

Similarly, the collection and use of genetic information, biometric identifiers and information (based, for example, on facial recognition or scans, fingerprints, or voiceprints) trigger a patchwork of other federal and state laws.

The United States also regulates the security of connected products and the networks and systems on which they rely. The FTC historically has been the primary enforcement agency responsible for ensuring the “reasonableness” of product, system, network, and data security under Section 5 of the FTC Act. The FDA also has published pre- and post-market guidance on cybersecurity expectations with respect to connected medical devices. Both the FTC and the FDA recognize that responsibility for protecting consumers against cyber threats extends across the entire product lifecycle, from initial design through vulnerability assessments, patching, and end-of-life considerations.

European Union
If you have a presence in the EU, offer services or goods there, or monitor the behavior of individuals there, you may be subject to the new EU General Data Protection Regulation (“GDPR”; see our checklist on this topic), a complex law backed by fines of up to 4 percent of global annual turnover (or €20,000,000, if higher) and obligations to appoint local representatives and data protection officers, among other requirements. It imposes strict limits and conditions on the collection, use, and sharing of health data, genetic data, and biometric data, and it requires extensive internal policies and procedures, and even the building of “data portability” features allowing individuals to export their data to rival services.
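
In practical terms, the “data portability” obligation is an engineering requirement as much as a legal one. Below is a minimal Python sketch of what an export feature might look like; the record structure and field names are invented for illustration and are not prescribed by the GDPR:

```python
import json
from dataclasses import dataclass, asdict
from datetime import date
from typing import List

# Hypothetical record type; the fields are invented for illustration.
@dataclass
class HealthRecord:
    recorded_on: str  # ISO-8601 date, e.g., "2018-01-02"
    metric: str       # e.g., "resting_heart_rate"
    value: float

def export_user_data(user_id: str, records: List[HealthRecord]) -> str:
    """Serialize one user's records into a structured, commonly used,
    machine-readable format (here, JSON) that another service could ingest."""
    payload = {
        "user_id": user_id,
        "exported_on": date.today().isoformat(),
        "records": [asdict(r) for r in records],
    }
    return json.dumps(payload, indent=2)

print(export_user_data("u-123", [HealthRecord("2018-01-02", "resting_heart_rate", 61.0)]))
```

The key design point is the output format: structured, commonly used, and machine-readable, so that individuals can in fact take their data elsewhere.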

The EU’s “cookie rule” also prohibits most storage of data to, or reading of data from, Internet-connected devices without prior informed consent. Finally, many EU countries also have confidentiality rules that further restrict the collection and use of patient data, plus detailed health cybersecurity rules, such as a French law that requires all services hosting patient data to have first obtained Ministry of Health accreditation.

China
Healthcare data is also considered sensitive in China, and will soon be subject to more stringent requirements under the Information Security Technology – Personal Information Security Specification, in addition to existing data protection and cybersecurity obligations imposed by China’s Cybersecurity Law (see our recent post on this topic).

China also has regulations governing medical records and population health information, such as the Medical Institution Medical Records Administrative Rules and the Administrative Measures for Population Health Information.

Best Practice: Identify the jurisdictions in which you operate or offer your services, and those that present the highest risk to your company. Then assess what data you collect and the purposes for which you use it to identify which specific laws and regulations apply.

2. How do you ensure that you have the necessary rights to collect, use, and disclose data in connection with your AI technologies?
When collecting information directly from users of the AI, you should be transparent about the types of information you collect, how you use it, whether and to whom you disclose it, and how you protect it. It is critical that these disclosures be accurate and include all material information.

When developing, training, and testing AI technologies, companies also look to existing data sources. If the company is using personal data that it previously collected, it should consider whether individuals had sufficient notice that the information would be used for purposes of developing and improving new kinds of digital health solutions. When obtaining this information from third-party sources, the company should consider contractual representations and warranties that ensure all necessary notices, consents, rights, and permissions were provided and obtained to permit the company to use the data as contemplated in the agreement.

In some cases, it also might be appropriate to provide users with choices over how their information is collected, used, and shared. In the EU, for example, the GDPR outlaws consent statements that are buried in small print: for digital health purposes, consent will need to be clear, granular, and specifically opted into in order to be valid. EU regulators also are starting to hold data recipients liable for inadequate due diligence; merely obtaining contractual assurances from data sources may not be enough.

Best Practice: Notice typically is provided through a privacy policy, but the interactive nature of AI technologies means that innovative just-in-time disclosures and privacy choices might be possible.
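
As a loose illustration of what a just-in-time disclosure might look like in a conversational flow, consider the following Python sketch; the operator name, notice text, and consent mechanics are hypothetical, and real wording would need legal review against the notice and consent rules discussed above:

```python
from typing import Callable, Optional

# Hypothetical operator and notice text, invented for illustration.
OPERATOR_NOTICE = (
    "This assistant is operated by ExampleHealth Co. Messages you send may be "
    "used to provide and improve the service. See our privacy policy for details."
)

def collect_symptom_description(
    ask: Callable[[str], str],
    confirm: Callable[[str], bool],
) -> Optional[str]:
    """Surface the notice at the moment of collection and proceed only if
    the user affirmatively agrees."""
    if not confirm(OPERATOR_NOTICE + " Continue?"):
        return None  # user declined; collect nothing
    return ask("Please describe your symptoms: ")

if __name__ == "__main__":
    answer = collect_symptom_description(
        ask=input,
        confirm=lambda msg: input(msg + " [y/n] ").strip().lower() == "y",
    )
    print("Collected:", answer)
```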

3. What are the fairness and ethical considerations for AI and data analytics in digital health solutions?
To maximize the potential of artificial intelligence and big data analytics, it is important to ensure that the data sets that are used to train AI algorithms and engage in big data analytics are reliable, representative, and fair.

Example: Some diseases disproportionately impact specific populations. If the data sets used to train the AI underlying your digital health offerings are not representative of these populations, the AI might not be effective. It also is critical that the data sets underlying your AI and data analytics are secured against unauthorized access or misuse.

In its report on “big data,” the FTC cautions companies to consider:

  • whether data sets are representative;
  • whether data models account for biases;
  • whether predictions based on big data are accurate; and
  • whether reliance on big data raises other ethical or fairness concerns.

Best Practice: Some companies are forming internal committees to ensure that their use of AI and data analytics is ethical and fair.
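
As one illustration of how such a committee might operationalize the FTC’s first two considerations, the following Python sketch compares each group’s share of a training set against a reference population; the group labels, shares, and tolerance threshold are invented for illustration:

```python
from collections import Counter
from typing import Dict, Iterable

def representation_gaps(
    train_groups: Iterable[str],
    population_shares: Dict[str, float],
    tolerance: float = 0.05,
) -> Dict[str, float]:
    """Compare each group's share of the training data against its share of a
    reference population, flagging groups whose representation differs by more
    than `tolerance`. Labels, shares, and threshold are illustrative only."""
    counts = Counter(train_groups)
    total = sum(counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = round(observed - expected, 3)
    return gaps

# Example: group B is badly under-represented relative to the population.
train = ["A"] * 900 + ["B"] * 100
print(representation_gaps(train, {"A": 0.6, "B": 0.4}))  # {'A': 0.3, 'B': -0.3}
```

A gap flagged by a check like this would not itself establish unfairness, but it can prompt the kind of review the FTC report describes.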

The EU also has detailed privacy rules impacting big data and AI. For instance, it grants all individuals a right to human review of fully automated decisions based on analysis of their data, and in many cases prohibits basing such decisions on sensitive data, such as their health data, ethnicity, political opinions, or genetics. It also outlaws any disclosure or secondary use of data originally collected for a different purpose unless certain conditions are met, including that the new use is “compatible” with the original uses (e.g., within the individuals’ reasonable expectations).

Of note, if the digital health solution is potentially regulated by the FDA or an equivalent regulatory body, there may be additional pre-market and post-market considerations (e.g., validation of clinical recommendations using AI, adverse event reporting; see our earlier checkup on this topic).

Pharmaceutical Digital Health Innovators Take Note: FDA Public Hearing on an Innovative Approach to Devices Referencing Drugs

On November 16, 2017, the Food and Drug Administration (“FDA” or the “Agency”) will hold a public hearing on a proposed approach for sponsors seeking to market devices referencing drugs (“DRDs”) when the drug sponsor does not wish to collaborate with the sponsor of the device. FDA will accept comments to the docket until January 15, 2018.

Three Questions You Need to Ask When Negotiating Digital Health Deals

According to a distinguished panel of lawyers from MSD and Covington & Burling, companies involved in Digital Health deals need to ask themselves the following questions:

  • What data is required to develop and deliver the Digital Health solution, and does your company have sufficient expertise in-house to analyze the data?
  • What happens if your technology vendor becomes unable or unwilling to support or further develop software used in your Digital Health solution?
  • How do you structure a contract to develop and deliver a Digital Health service when the ultimate composition of the service, the customer base, and reimbursement model are all uncertain at the outset?

David Boyko, Division Counsel for MSD’s Healthcare Services and Solutions, and Nigel Howard, Daniel Pavin, and David Wildman of Covington addressed these key issues in an October 10, 2017 webinar on “Commercial and IT issues in Digital Health Deals.” This is the first of a series of webinars Covington is offering to help companies navigate the laws, regulations, and policies that govern the evolving Digital Health sector. These webinars are aimed at:

  • Legal and business teams involved in structuring and negotiating arrangements in the digital health space.
  • Legal and business teams with a background in “traditional” pharma-biotech collaborations who are looking to move into the digital health space.

If you would like to view a recording of this one-hour webinar, please contact Jordyn Pedersen at jpedersen@cov.com.

Top Tips and Traps for Cyber Insurance Buyers

Although the National Cybersecurity Awareness Month of October has come to a close, it is not too late for corporate counsel and risk managers to be thinking about cyber-risk insurance — an increasingly essential tool in the enterprise risk management toolkit. But a prospective policyholder purchasing cyber insurance for the first time may be hard put to understand what coverage the insurer is selling and whether that coverage is a proper fit for its own risk profile. With little standardization among cyber policies’ wordings, confusing labels for their covered perils, and little interpretive guidance from case law to date, a cyber insurance buyer trying to evaluate a new proposed policy may hardly know where to focus first.

After pursuing coverage for historically major cyber breaches and analyzing scores of cyber insurance forms over the past 15 years, we suggest a number of issues as a starting point for any cyber policy review.

CHMP Adopts Guideline on Genomic Sampling and Management of Genomic Data

On 14 September 2017, the Committee for Medicinal Products for Human Use (“CHMP”) of the European Medicines Agency adopted ICH Guideline E18 (the “Guideline”) on genomic sampling and the management of genomic data.  The Guideline takes effect on 28 February 2018.

The International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (“ICH”) developed the Guideline in acknowledgement of the growing awareness of, and interest in, genomic data generated from clinical studies.  The ICH suggested that the absence of a harmonized guideline made it more difficult to conduct genomic research consistently in global studies.  The fact that the CHMP has adopted the Guideline means that EU guidance on this subject is now aligned with the ICH standard.

The Guideline provides general principles for the collection and handling of genomic samples and management of genomic data.  It also affirms broader principles, such as the need for informed consent and the protection of subjects’ privacy.  The Guideline applies to both interventional and non-interventional clinical studies, irrespective of when the genomic research is carried out and whether it was envisaged in the study protocol.  The ICH/CHMP intend the Guideline to be interpreted in accordance with the law and policies in each jurisdiction where genomic research takes place.

Digital Health Checkup (Part Two): Key Commercial Questions When Contracting for Digital Health Solutions

In the second of a three-part series, Covington’s global cross-practice Digital Health team considers some additional key questions that companies across the life sciences, technology, and communications industries should be asking as they seek to fit together the regulatory and commercial pieces of the complex digital health puzzle.

Key Commercial Questions When Contracting for Digital Health Solutions

1. Will you own or have rights to use the data that is collected and generated, and any insights, models, and algorithms that are developed?
If, as part of your digital health business model, you are partnering with a provider of big data analytics services (for example, to develop therapeutic models for incorporation into an app), you should ensure that you, your business partners, and any other third parties who are necessary for the implementation of that model are permitted to use the output data to the extent needed. This requires careful attention to the data terms in your contract with the provider, as well as appropriate due diligence of the terms under which the input data were obtained, which may limit downstream use of the output data.

The data outputs from data analytics conducted in digital health projects may represent new “insights.” For example, these insights could relate to the effectiveness of different treatments, comparative outcomes achieved with different delivery models, or predictive models for diagnosing, treating, and delivering care. Securing ownership of, and intellectual property rights in, these insights can be difficult. First, under various legal systems, it may not be possible for the insights to be “owned” as such, and patent, copyright, or trade secret protection may not be available or viable. Second, there may be competing ownership interests among the collaboration partners. For example, if your data scientists discover an insight using your own proprietary algorithms, but those algorithms were applied to patient data and rely on advanced analytics tools provided by a service provider’s data processing platform, should the owner of the insight be you, the source of the data (e.g., a hospital), or the service provider?

In addition, competition authorities, particularly in Europe, are increasingly focusing on the circumstances in which data, including output data, can confer an anti-competitive advantage. Recent cases and statements from certain European competition authorities suggest that there may be a risk that entities controlling output data that (a) cannot be replicated (or obtained from another source), (b) is necessary for the development of new products, and (c) will not quickly become outdated, may be required to provide access to third parties developing such new products.(1)

2. Do you have commitments from your suppliers to provide functions at service levels suitable for the health sector and designed to maintain patient/user trust?
If you are responsible for delivering a digital health service to customers, it is critical for that service to be provided in accordance with service levels that are suitable for the health sector and that are designed to build and maintain patient/user trust.

Service components such as availability of user support, call response times, “uptimes”/permissible downtimes, and problem resolution time frames will all typically be governed by service level arrangements between you and your customers, and between you and your suppliers. If one or more components of the digital health service are supplied to you, or on your behalf, by third-party subcontractors, you will want to ensure that you have appropriately robust service level arrangements in place with those subcontractors. These will need to be sufficient to ensure that you can provide your customers the level of service for the overall digital health service that they expect and that will maintain your competitiveness in the market.

Prior to contracting, you should carry out due diligence of your potential suppliers to determine whether they are in turn dependent on other suppliers (for example a cloud storage platform provider), and if so, whether the service levels at each link in the chain are adequate having regard to customer expectations.
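
To see why diligence down the supply chain matters, the following back-of-the-envelope Python sketch converts an “uptime” percentage into permissible monthly downtime and shows how availability compounds when a service depends on several suppliers in series; the figures are illustrative, not benchmarks for the health sector:

```python
# Figures are illustrative, not benchmarks for the health sector.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def allowed_downtime_minutes(uptime_pct: float) -> float:
    """Permissible downtime per month implied by an uptime service level."""
    return MINUTES_PER_MONTH * (1 - uptime_pct / 100)

def chain_uptime(uptime_pcts: list) -> float:
    """Worst-case availability of a service that depends on several suppliers
    in series is roughly the product of their individual uptimes."""
    result = 1.0
    for pct in uptime_pcts:
        result *= pct / 100
    return result * 100

print(round(allowed_downtime_minutes(99.9), 1))  # 43.2 minutes/month
print(round(chain_uptime([99.9, 99.5]), 2))      # 99.4, below either SLA alone
```

Because the worst-case availability of a chained service is roughly the product of its components’ uptimes, a supplier service level that merely matches your customer-facing commitment may leave you exposed.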

3. When you are structuring strategic collaborations to develop and deliver a digital health service, have you taken into account uncertainties as to the ultimate composition of the service, its customers, and its reimbursement model?
If you are entering into strategic, long-term collaborations to develop and market a digital health service, a significant challenge is that it is often unclear at the outset what the precise route to market will be, including who, in principle, will be the customer. It is often similarly unclear what all the elements of the resulting service will ultimately be, and so it is not possible to determine the cost of providing the service until a later stage in the collaboration. Further, the reimbursement model may initially be uncertain and your collaboration partners might desire to conduct one or more pilot phases with healthcare providers in order to demonstrate the service’s value proposition and refine a reimbursement model.

As a result, you must consider whether you wish to agree to financial terms at the outset or at a later stage in the course of the collaboration. Agreeing to financial terms at the outset has the benefit of certainty, but there is a risk that those terms become inappropriate or uneconomical in the event that the underlying basis for the financial terms changes. On the other hand, deferring agreement of the financial terms to a point at which there is more clarity ensures flexibility, but it will be essential for you and your collaboration partners to work through, and document, the consequences of failing to reach agreement at that later stage. For example, you will likely have invested considerably in the collaboration prior to that point, and may have a third party such as a healthcare system looking to move from a pilot phase to full commercial implementation.

  1. See, for example, Facebook/WhatsApp (Case COMP/M.7217); Keynote speech, G. Loriot, 7 June 2017, GCR Live 6th Annual Telecoms, Media & Technology conference.

The UK’s Life Sciences Industrial Strategy: Digital Health Implications

On August 30, the UK government published a report by Professor Sir John Bell of Oxford University providing a number of recommendations to government to ensure the long-term success of the life sciences sector in the UK (the “Life Sciences Industrial Strategy”).  This blog post summarises the key recommendations and observations made from a digital health perspective.  As the Life Sciences Industrial Strategy notes, “[d]ata in the healthcare system provides crucial opportunities to fundamentally change the way health services are provided and developing digital tools, such as AI, are going to form an increasingly important segment of the life sciences sector”.

The Life Sciences Industrial Strategy makes a number of recommendations and ‘reinforcing actions’ of significance from a digital health perspective, including:

  • the establishment of a ‘Health Advanced Research Programme’ to undertake large research infrastructure projects and high risk ‘moonshot programmes’ to create entirely new industries (a core principle of these programmes will be the NHS’ provision of secure and appropriate access to cradle-to-grave data sets and the piloting of technologies);
  • the use of digitalisation and AI to transform pathology and imaging;
  • the UK should work with industry and regulators to (i) establish a working group to evaluate the use of digital health care data and health systems, (ii) evaluate the safety and efficacy of new interventions, and (iii) help ICH modernise its GCP regulations;
  • the National Data Guardian’s and Care Quality Commission’s data safeguards and standards should be implemented alongside a wider national conversation with the public to enable a true understanding of data usage and how such data is vital to improving health, care and services through research;
  • NHS Digital and NHS England should set out clear and consistent approaches to data and interoperability standards and requirements for data access agreements;
  • access to currently available national datasets should be accelerated by streamlining legal and ethical approvals;
  • ePrescribing should be mandatory for hospitals;
  • creating a new regulatory and commercial framework to capture the value of algorithms being generated using NHS data (this may include the development of ‘sandbox’ access to deidentified or synthetic data from providers such as NHS Digital, where innovators could safely develop algorithms and trial new regulatory approaches for all product types);
  • the creation of 2-5 digital innovation hubs providing data across regions of three to five million people should be set up as part of a national approach to building towards full population coverage, to enable research to be done on meaningful data sets; and
  • creating an apprenticeship scheme focused on data sciences and skills across the life sciences sector.

The Life Sciences Industrial Strategy contains many other recommendations of interest to those in the life sciences sector, including in relation to taxation, manufacturing support and the impact of Brexit on the movement of skilled people and regulatory approvals.  The UK government is reviewing the Life Sciences Industrial Strategy and its recommendations.

Digital Health Checkup: Key Questions Market Players Should Be Asking (Part One)

In the first of a three-part series, Covington’s global cross-practice Digital Health team answers key questions that companies across the life sciences, technology, and communications industries should be asking as they seek to fit together the regulatory and commercial pieces of the complex digital health puzzle.

Key Regulatory Questions About Digital Health Solutions

1. What are your digital health solution’s intended uses?
Understand whether the components of the solution (or in some cases, the sum of the parts) are regulated by one or more regulatory authorities and, if so, the associated regulatory classification and requirements.

In various jurisdictions, including the U.S., EU, and China, a digital health solution could potentially be regulated as a medical device or a drug-device combination product, or it could be a consumer product not regulated under medical product authorities. Much would depend on the solution’s intended use and functionality, and the claims made by the product’s manufacturer. U.S., EU, and Chinese laws acknowledge that standalone software can be a medical device. A digital health software solution could be regulated as a medical device if it is intended by the manufacturer to have a medical purpose or otherwise affect patient care. If the solution is intended to “e-enable” a drug or otherwise intended for use with a drug, it could create a drug-device combination product. In the U.S. and China, such drug-device combinations may be regulated under the drug marketing application or under a separate device marketing application. In the EU, such drug-device combinations are regulated as medicines. Alternatively, the solution could be a consumer product that is not subject to medical product regulation if it is not intended for use with a drug and is positioned as a “lifestyle/general wellness” tool, rather than a tool with a medical purpose.

2. What kind of claims can you make about your digital health solution?
Also establish what level of substantiation is required for those claims. If you are a pharmaceutical company, consider whether your or your collaborator’s digital health solution may impact the marketing of your drug(s) (e.g., would the digital solution be considered by FDA, EMA, DOJ, FTC, China’s CFDA or SAIC and/or another regulatory authority to be drug advertising, promotion, or labeling; does testing it require an investigational application; do you need to file a supplemental drug application or variation to a marketing authorization).

Permitted claims will depend on the regulatory classification of your solution. For example, e-enabling and other digital health components of approved/authorized medicines can create drug-device combination products, which will need to comply with U.S. and EU drug laws. This will impact permitted advertising and promotion and will often require specific product labeling. It could also require a supplement or variation to an existing marketing authorization.

In the U.S., if your solution is a medical device, its advertising and labeling will be subject to FDA and/or FTC regulation. Both agencies have authority to take action against false or misleading promotion, including claims that are not supported by appropriate clinical data. There are no harmonized EU medical device advertising rules, so you will need to consider at the EU member state level whether any audience restrictions apply before promoting your device. In China, any therapeutic claims would be subject to restrictions under China’s drug and/or device regulations and its Advertisement Law, and CFDA must pre-approve all advertisements and medical information websites.

3. Are your warnings and disclosures tailored to your intended audience and use(s), not merely boilerplate?
Understand whether they reasonably warn about possible adverse health consequences to patients. Even in the absence of regulatory labeling requirements, you may have duties to your customers under tort law or general consumer protection legislation.

The adequacy of warnings will depend on the risk and classification of the solution and the purpose of the disclosure. Different considerations apply depending on whether the disclosure is intended to provide legally mandated information or to warn against unintended uses or functions. For example, in certain instances a manufacturer may accept that its solution is a regulated product and seek to include appropriate warnings in associated materials. In other cases, the solution could be unregulated and warnings and disclosures could be applied as protection against unintended use of the product.

4. What other regulations apply to your digital health solution?
Depending on the nature of the digital health solution, several other laws and regulations may apply. For instance, if the solution is offered through health care providers or health plans, or if it interacts with the electronic health record systems of health care providers, compliance with the HIPAA privacy, security, and breach notification rules and with other data privacy laws may be required.

In the U.S., federal laws intended to protect against fraud and abuse, such as the Anti-Kickback Statute and the Stark physician referral statute, may also be implicated. In addition, consideration should be given to analogous state laws and to state laws governing the practice of medicine.

In the EU, the digital health solution may also be a regulated health service. Many jurisdictions will require that entities or organizations delivering a health service hold some kind of registration or permit from a relevant regulator. This would include, for example, the Care Quality Commission in the UK, which will register an entity as a health service provider only once it has carried out an audit, and which subjects providers to periodic re-inspections. Moreover, if that health service provider wishes to provide services specifically to a national or regional health service provider, it may need to hold other permits or meet certain additional standards.

Additional laws and regulations may also apply in China. For example, similar to the EU, in China health services are subject to strict regulation. These services must typically be managed through an institution with a health care institution license, and advertisements for health services must be submitted by that institution to the provincial-level health authorities for pre-approval. Health information websites must also meet specific regulatory and pre-approval requirements. China’s increasing body of regulation on cybersecurity, Internet information, and health privacy may also impose requirements on the flow of personal health information to and from a medical device or consumer product.

Bar to Data Breach Litigation May Be Dropping; Implications for Digital Health Technologies

At the beginning of August, the D.C. Circuit found that the fact that a data breach has occurred and individual consumer information has been lost may constitute sufficient injury to confer standing on those individual victims at the pleading stage, irrespective of whether any stolen information has been misused. Specifically, Attias, et al. v. CareFirst, Inc., et al., No. 16-7108, 2017 WL 3254941 (D.C. Cir. Aug. 1, 2017) ruled that a class of health insurance policyholders could maintain their suit against CareFirst, due to a cyberattack on the insurance provider’s servers. The court found that “a heightened risk of future identity theft” was enough to confer standing. Id. at *4 n.2. The court based its decision on the fact of the breach and the associated heightened risk rather than on whether any of the policyholders’ identities had actually been stolen. Relying on a prior decision by the Seventh Circuit, the court observed, “Why else would hackers break into a . . . database and steal consumers’ private information?” Id. at *6 (quoting Remijas v. Neiman Marcus Grp., 794 F.3d 688, 693 (7th Cir. 2015)).

Despite the clarity with which the D.C. Circuit reached its decision, the circuits have split over what exactly an individual whose data has been stolen must show to establish standing in federal court. Article III requires a plaintiff to demonstrate an “injury in fact” that is “fairly traceable” to the defendant’s challenged conduct and is “likely to be redressed by a favorable judicial decision.” Spokeo, Inc. v. Robins, 136 S. Ct. 1540, 1547 (2016) (quoting Lujan v. Defenders of Wildlife, 504 U.S. 555, 560-61 (1992)). Some circuits have ruled that the theft of data, without more, does not constitute such an injury. See, e.g., Beck et al. v. McDonald et al., 848 F.3d 262 (4th Cir. 2017). The CareFirst court joined a growing list of circuits ruling to the contrary.

CareFirst also serves as an independent reminder that the theft of medical data can have significant ramifications for victims. Armed with information such as insurance identifiers, a fraudster may “impersonate[] the victim and obtain medical services” in the victim’s name, leading to potentially inaccurate medical records, improper health care, depletion of insurance, ineligibility for health or life insurance, and disqualification from jobs. CareFirst, 2017 WL 3254941, at *6.

Implications for Digital Health Technologies:

CareFirst also highlights the importance of managing data security risks in designing digital health technologies, both because of the potential ease with which a prospective plaintiff may have standing to bring suit and because of the sensitive nature of medical information.  Digital health companies should take steps to manage this risk whether they are building their digital solutions themselves or working with business partners and service providers.  Very often, working with business partners and service providers is the quickest and most efficient way to market with a digital solution, but it does mean relying on the data security practices of a third party.  In view of this, appropriate due diligence and contractual terms with respect to data security are essential in digital health agreements.  In addition, the processes and procedures governing a data security incident and any associated plaintiffs’ claims should be addressed in the agreement.  The healthcare industry has been a particular target for ransomware attacks, so contractual commitments with regard to backup and restoration of end-user data are important.  The promise of digital health is partly premised on companies being methodical and careful in their commercial contracting and business partner/service provider management.

AG Opinion on Software Medical Devices

On 28 June 2017, Advocate General Sanchez-Bordona (AG) presented his opinion in case C-329/16 Syndicat national de l’industrie des technologies médicales and Philips France following a request for preliminary ruling from the Conseil d’État (France) to the Court of Justice of the European Union (CJEU) concerning the laws governing the classification of software medical devices.

The AG’s opinion is not binding on the CJEU, but it provides useful guidance on the application of the EU medical devices Directive 93/42/EEC (the MDD) to software programs.  Importantly, it confirms the position set out in the Commission’s MEDDEV 2.1/6 guidance that software which merely stores and archives data is not a medical device; the software must perform an action on data (i.e., it must interpret and/or change the data).

EU national courts use the preliminary ruling procedure if they are in doubt about the interpretation or validity of an EU law. In such cases, they may ask the CJEU for advice. The Advocate Generals provide the CJEU with public and impartial opinions to assist the Court in its decision making. The Advocate Generals’ opinions are advisory and non-binding, but they are nonetheless influential.  In the majority of cases the CJEU follows the Advocate General.

Background

Philips France (Philips) manufactures and places on the EU market a software program called Intellispace Critical Care and Anesthesia (ICCA), which is used by physicians to provide information necessary for the proper administration of medicines for the purposes of resuscitation and anaesthesia.  The software highlights possible contraindications, interactions with other medicines and excessive dosing.  Philips classified the ICCA as a medical device under the MDD and the product bears a CE mark confirming that the software complies with the applicable requirements of the MDD.

Under French law, software programs intended to support medical prescriptions are subject to national certification requirements.  The French Government’s position is that the ICCA must comply with this national certification requirement. Further, it does not consider the ICCA to be a medical device within the meaning of Article 1(2)(a) of the MDD because the function of assisting with prescriptions does not fall under any of the defined purposes within the definition of a medical device.

Philips claimed that the national certification requirement should not apply as it amounted to a restriction on import, contrary to EU law, and that the French Government was in breach of Article 4(1) of the MDD, which provides that Member States must not restrict the placing on the market or the putting into service of medical devices bearing the CE mark within their territory.

The French Conseil d’État referred to the CJEU a request for a preliminary ruling on the question of whether software equivalent to the ICCA satisfies the definition of a medical device under the MDD.

AG Opinion

The AG opinion suggests that Philips had correctly classified the ICCA as a medical device.  It highlights that since the ICCA bears a CE mark and is freely marketed in 17 EU Member States, it benefits from a presumption of conformity with the MDD.  It was a matter for the French Government to rebut this presumption, and it had failed to do so.

In reaching his conclusion, the AG highlighted a number of points, including:

  • In order to qualify as a medical device, software must have a function beyond the collection and archiving of data (i.e., more than a purely administrative function); it must modify or interpret the data.  The ICCA software includes an engine that allows healthcare professionals to calculate the prescription of medications and the duration of treatments.  In light of such functions, the AG considers it difficult to maintain that the ICCA does not have a diagnostic or therapeutic purpose within the scope of the definition of a medical device. The ICCA is not a software program that is limited to administrative functions, but rather software that helps determine the proper prescription for the patient.  It is therefore a medical device, as it has the aim of “preventing, controlling, treating or alleviating a disease”.
  • The fact that the ICCA does not act by itself in or on the human body does not preclude it from classification as a medical device. Contributing to the principal action of correcting the human body through the taking of medicinal products is sufficient.

The above conclusion endorses the position set out in the Commission MEDDEV 2.1/6 guidance on qualification and classification of standalone software, which states:

…if the software does not perform an action on data, or performs an action limited to storage, archival, communication, ‘simple search’ or lossless compression (i.e. using a compression procedure that allows the exact reconstruction of the original data) it is not a medical device.
