CMS Announces MyHealthEData Initiative to Promote Patient Access to Health Data

On March 6, 2018, CMS announced the MyHealthEData initiative, which aims to give patients easier access to and control over their medical records.

Announcing the initiative, CMS Administrator Seema Verma laid out a future in which individuals have access to their health data wherever they go and can share it with the push of a button. That data would span a patient's entire medical history from birth, including data from health visits, claims, and information gathered through wearable technology.

According to Administrator Verma’s speech and a CMS announcement, the MyHealthEData program is a government-wide initiative that includes the following components:

Continue Reading

Latest NIST Draft Report a Call to Action for Federal Agencies and Private Companies

Inflection Point for IoT

In a relatively short amount of time, the Internet of Things (IoT) and its applications, from smart cars to the myriad interconnected sensors in the General Services Administration building (reminiscent of HAL 9000 from 2001: A Space Odyssey), have rapidly proliferated, providing significant opportunities and benefits. However, the increased ubiquity of IoT comes with heightened risks to security, privacy, and physical safety, and without a standardized set of cybersecurity requirements, many IoT devices and systems are vulnerable to attack. Earlier this month, the National Institute of Standards and Technology (NIST), through the Interagency International Cybersecurity Standardization Working Group (IICS WG), released a draft report to help both federal agencies and private companies plan and develop cybersecurity standards in their use and production of IoT components, products, systems, and services. The draft report stresses the importance of coordination across the private and public sectors in developing standards to bolster the security and resilience of IoT, provides a snapshot of current international cybersecurity standards, and offers recommendations for gap-filling.

Mind the Gap           

The draft report uses five market areas of IoT application (Connected Vehicles, Consumer IoT, Health IoT & Medical Devices, Smart Buildings and Smart Manufacturing) to provide a synopsis on the current state of play for international cybersecurity standards along the following core areas:

  • Cryptographic Techniques
  • Cyber Incident Management
  • Hardware Assurance
  • Identity and Access Management
  • Information Security Management Systems
  • IT System Security Evaluation
  • Network Security
  • Security Automation and Continuous Monitoring
  • Software Assurance
  • Supply Chain Risk Management
  • System Security Engineering

While at least some established standards exist in most of these core areas, a few areas currently lack standards (namely, IT System Security Evaluation, Network Security, and System Security Engineering). Indeed, even where standards have been established, consistent implementation across the five market areas is either lagging or nonexistent. For example, although some Hardware Assurance standards exist for the Connected Vehicles and Health IoT market areas, implementation has been lagging, while the same standards have yet to be implemented in the Consumer IoT, Smart Building, and Smart Manufacturing market areas. The draft report explains this inconsistency in standards and adoption as a function of the traditional prioritization of cybersecurity in networks. Cybersecurity typically focuses on confidentiality, integrity, and availability (in that order), but when an organization develops standards for IoT technologies, it is important to consider how the IoT components interact with the physical world as well as with each other; accordingly, the cybersecurity priorities for an IoT device may be ordered differently depending on the use case. For example, Hardware Assurance is likely the most important issue for a medical device such as a pacemaker, while Identity and Access Management is likely paramount for Smart Buildings.

A New Standard of Care?

So why should private companies care about this draft report? NIST is part of the Department of Commerce, and unlike other standards bodies that depend on licensing revenues for funding, NIST's work is effectively in the public domain. Some NIST standards (such as FIPS) become requirements for federal agencies and their contractors, particularly in the absence of clearly identified alternatives (the Department of Defense, for example, imposes the security controls found in NIST Special Publication 800-171 on its contractors). Therefore, suppliers and contractors to government agencies will often be required to evaluate themselves against NIST standards in the absence of industry-accepted alternatives.

Further, to the extent that NIST finalizes this report and establishes that there are approved cybersecurity standards characterized as mature, manufacturers and users of IoT devices may face an argument that following those standards constitutes a standard of care to which they must adhere. In a typical common-law context, the standard of care is determined by asking what a reasonable and prudent person would do in the same circumstance. To be imposed as a standard of care, however, a cybersecurity standard also must have reasonable acceptance in the relevant community and impose a specific duty on a person or company. Though the NIST report does not yet represent such a standard, NIST's view is persuasive in some sectors and available to companies without cost. Companies working in the US may want to consider the positions in this report in their planning, perhaps leveraging the final version as a self-assessment tool to identify gaps and/or to confirm that certain named standards are not relevant to their organizations. Given that NIST is seeking feedback from the public, there is an opportunity for private companies to have meaningful input into the final version of this report.

The Clock is Ticking

At a time when the application of IoT is experiencing rapid growth across industries, NIST states that it hopes the report will inform and enable managers, policymakers, and Standards Developing Organizations as they seek to develop a holistic cybersecurity framework focused on security and resiliency. Although the benefits of IoT are significant, the draft report acknowledges that “the timely availability of international cybersecurity standards is a dynamic and critical component for the cybersecurity and resilience of all information and communications systems and supporting infrastructures.”  Failing to establish effective standards could have significant consequences on current products and on how future products are developed.

Public comments on the draft report are being accepted until April 18, 2018 and can be submitted to NIST using the comment template available at

Key Takeaways from Covington’s Webinar about Digital Health Associated with Pharmaceuticals

On February 1, 2018, Covington’s Digital Health team hosted a webinar examining U.S. and EU regulatory issues for digital health associated with pharmaceuticals.  Here are some key takeaways from that webinar:

  • Neela Paykel from Proteus Digital Health noted that “you need to think outside the box for how to engage, whether you’re a pharma company or a digital health company.  For pharmaceuticals, you have to understand that there’s more risk tolerance in the technology space.  For digital health companies, you have to understand healthcare regulation and appreciate all the regulations pharmaceutical companies are dealing with on a regular basis.”
  • Grant Castle from Covington’s London office described how “it’s tempting to think once you’ve understood the regulations, you can enter the market with a digital health product, but in many respects, that’s the start of the challenge.  Systems for pricing and reimbursement of digital health offerings have yet to evolve fully.  It can also be challenging for a pharmaceutical company to offer digital health products where regulations might prohibit pharmaceutical companies from providing incentives to healthcare professionals for its products.  Such issues mean that you need to think strategically.” Sarah Cowlishaw added that digital technologies are being used in drug development, particularly to help collect real world evidence.  Companies thinking about digital health in drug development need to consider other challenges such as data reliability, consent, and operability with other platforms.
  • Christina Kuhn described how different centers within FDA might decide whether a digital health solution is regulated as a device and whether a digital health solution would affect a pharma company’s responsibilities for a drug. Wade Ackerman noted that “companies approaching FDA should think carefully about how to present FDA with the information it needs to understand and assess the digital health innovation.  How companies approach the agency will depend on the particular digital health technology, including how it relates to a pharmaceutical product.”

Neela Paykel is general counsel at Proteus Digital Health.  Wade Ackerman (Los Angeles), Grant Castle (London), Christina Kuhn (DC), and Sarah Cowlishaw (London) are all members of Covington’s global Food, Drug, and Device Regulatory Group and part of Covington’s cross-practice Digital Health team.  If you would like to view a recording of this webinar, please contact Jordyn Pedersen at

Open Source Considerations for Digital Health Ventures

Technology companies widely use open source software (“OSS”), which carries with it many potential benefits.  It can reduce the time and cost of development, and, to the extent that the code has been vetted by numerous other developers, may contain fewer bugs.  OSS can also reduce dependency upon third party vendors and associated pricing risks.

In the healthcare space in particular, OSS has been cited as one potential way to reduce the cost of developing and delivering digital care solutions, which in turn may mean improved access to or quality of treatment for underserved populations.[1] And indeed, OSS is frequently used in healthcare IT.  In fact, the EHR system for veterans, VistA, is available as open source code[2] and now deployed by a range of healthcare organizations.[3]

Of course, as with any third party technology, when incorporating OSS into a technology, it is important to carefully consider the soundness and security of the OSS code, as well as the legal terms on which the code is made available.  Below we highlight some key considerations for digital health ventures that either currently do or wish to use OSS for their technology: (1) security, (2) how license terms may impact the ability to commercialize the technology, and (3) how the use of OSS may impact corporate transactions, such as mergers and acquisitions.

Continue Reading

FDA Outlines Updated Approach to Regulating Digital Health Technologies

On December 8, FDA addressed the agency’s evolving approach to digital health by issuing two new draft guidance documents: “Clinical and Patient Decision Support Software” (the “CDS Draft Guidance”) and “Changes to Existing Medical Software Policies Resulting From Section 3060 of the 21st Century Cures Act” (the “Software Policies Draft Guidance”). These draft guidances announce the agency’s initial interpretation of the health software provisions enacted as part of last year’s 21st Century Cures Act (the “Cures Act”).

Given the rapid pace of digital health innovation across the life sciences, technology and health care sectors, FDA guidance on these topics is critical. Here are a few key takeaways from the draft guidances:

  • FDA’s initial interpretation of the Cures Act provision related to clinical decision support (CDS) software may lead to a fairly narrow carve-out—in other words, many cutting-edge CDS software functions could remain subject to FDA regulation.
  • FDA’s draft guidances do not directly address dynamic digital health solutions, such as those that incorporate machine learning, artificial intelligence (AI), or blockchain.
  • FDA has proposed an enforcement discretion approach for decision support software aimed at patients that generally parallels the regulatory approach for CDS software aimed at clinicians, even though patient decision software was not addressed directly in the Cures Act.
  • Consistent with the Cures Act, FDA’s draft guidances reflect that many of the software functions that were previously subject to FDA enforcement discretion (i.e., not actively regulated as devices) no longer meet the definition of “device.”
  • Significant for pharmaceutical companies, CDER joined one of the draft guidances, and that draft guidance makes clear that other FDA requirements may apply to digital health products disseminated by or on behalf of a drug sponsor beyond those outlined in the draft guidance.

FDA’s regulatory approach has a significant impact on the investment in and development of digital health solutions across the digital health ecosystem. Stakeholders should consider submitting comments to the agency to help shape the direction of FDA’s final guidances on these topics.

Continue Reading

The Evolving FDA and EU Equivalent Regulation of Digital Health: A Device Perspective

On November 14, lawyers from Teva Pharmaceuticals and Covington & Burling discussed digital health innovation from a medical device regulation perspective in the U.S. and the EU. The presentation by Rachel Turow, Executive Counsel – Regulatory Law, Teva Pharmaceuticals, and Grant Castle, Scott Danzis, Sarah Cowlishaw, and Christina Kuhn of Covington, covered topics such as which digital health products the FDA’s Center for Devices and Radiological Health (CDRH) regulates, how “device” is defined by the recent 21st Century Cures Act, and the relationship between medical devices and software under EU law. The group also discussed how digital health associated with pharmaceuticals may implicate regulatory considerations under FDA’s drug authorities — a topic to be more fully explored at an upcoming Covington Digital Health webinar in early 2018.

Some of the key takeaways the panel discussed are:

  • Understanding a digital health product’s intended uses and functionalities is critical to whether the product will be regulated.
  • The CDRH’s approach to digital health is evolving, and CDRH has adopted a more flexible approach to digital health as compared to other product areas.
  • The FDA isn’t the only regulator to consider — other regulators such as FTC, CPSC, state AGs, and DOJ are becoming more engaged in this area.
  • Companies marketing or expecting to market products in the EU should design new software medical devices with the EU’s Medical Device Regulation in mind.

This is the second of a series of webinars Covington is offering to help companies navigate the laws, regulations, and policies that govern the evolving Digital Health sector. The webinars are aimed at:

  • Legal, regulatory, and policy teams at life sciences and technology companies involved in the development and marketing of digital health technologies
  • Legal, regulatory, and policy professionals with backgrounds in the “traditional” pharma-biotech and medical device space, who are looking to move into the digital health space

If you would like to view a recording of this one hour webinar, please contact Jordyn Pedersen at


Digital Health Checkup (Bonus): Product Liability and Insurance Coverage


In this bonus edition of our checkup series, Covington’s global cross-practice Digital Health team considers some additional key questions about product liability and insurance coverage that companies across the life sciences and technology sectors should be asking as they seek to fit together the regulatory and commercial pieces of the complex digital health puzzle.

1. What are the key questions when crafting warnings and disclosures?

If your product is regulated, your warnings and disclosures will need to comply with any relevant regulations. In the case of a product not regulated by the FDA or equivalent regulatory body, first consider how your warnings and disclosures will be incorporated into the use of the product.

Some disclosures, like an explanation of the data source used by software, may fit best in terms and conditions that a user sees before using the product. Key warnings, however, may be more appropriately placed as part of the user experience.

Example: A warning that patients should consult their doctors if necessary may need to be placed in proximity to specific medical content.

Best Practice: Consider your intended audience: are you writing warnings for doctors, patients, or institutions? The appropriate types of disclosures will vary across populations. Patient-directed warnings may also need to be written in simplified language.

Best Practice: Consider whether it is appropriate for your product to require users to accept or otherwise agree to the warnings and disclosures.

Continue Reading

U.S. FCC Repeals 2015 Net Neutrality Rules; Impact on Digital Health Solutions Debated

Today, as expected, the U.S. Federal Communications Commission (“FCC”) adopted an order repealing the agency’s 2015 net neutrality rules and changing the legal framework that governs Internet Service Providers (“ISPs”). The vote split along party lines, with the agency’s three Republicans voting in favor and its two Democrats dissenting.

Once today’s order goes into effect, ISPs will no longer be subject to rules or FCC oversight as to what they can or cannot do in delivering online traffic to and from consumers at home and on their mobile devices. The FCC did, however, retain a requirement that ISPs publicly disclose whether they engage in certain practices, such as accepting consideration in exchange for prioritizing some sites and services over others (a practice known as “paid prioritization”).

Life sciences, technology, and health care companies developing and marketing digital health solutions should be aware that some supporters of the FCC’s action have argued that the repeal of restrictions on paid prioritization will allow ISPs to partner with digital health applications for optimized network performance within the U.S. As an example, the FCC has cited a commenter expressing the view that paid prioritization could provide improved access to “remote health-care monitoring” and “health service delivery by mobile networks.” Opponents, however, claim that the FCC’s action will allow ISPs to act in ways that could limit the ability of some online applications—whether in digital health or other sectors—to survive and thrive online.

The final text of the FCC’s order is not yet available, but it is not expected to deviate significantly from a draft released last month. Opponents are expected to challenge it in court in early 2018, and debates over net neutrality will continue in Congress as well. Covington’s Digital Health team will continue to follow these FCC developments given the potential impact on certain digital health products and services.

Digital Health Checkup (Part Three): Key Questions About AI, Data Privacy, and Cybersecurity

In the third installment of our series, Covington’s global cross-practice Digital Health team considers some additional key questions about Artificial Intelligence (AI), data privacy, and cybersecurity that companies across the life sciences and technology sectors should be asking to address the regulatory and commercial pieces of the complex digital health puzzle.

AI, Data Privacy, and Cybersecurity

1. Which data privacy and security rules apply?
There currently is not a specific law or regulation governing the collection, use, or disclosure of data for AI or the cybersecurity of AI technologies. As a result, digital health companies must assess how existing privacy and security rules will be interpreted and applied in the context of AI.

The applicable laws and regulations governing data privacy and security depend on a variety of criteria, including where you are located and where you are offering the AI technology.

Here are a few regional considerations for AI in the U.S. and data privacy and cybersecurity in the EU and China:

United States
Because large datasets of information typically are necessary to train and test AI technologies, digital health companies that are developing or utilizing AI should consider whether individuals receive adequate notice and are provided appropriate choices over how their information is collected, used, and shared for such purposes. For example, a person might have different expectations about how their information is being collected and used depending on whether they are communicating with a digital health AI assistant provided by a hospital, pharmaceutical company, or running shoe manufacturer. Consequently, providers of such technologies should consider clearly and prominently explaining who is operating the assistant and the operator’s information practices.

Depending on whether and to what extent you have a business relationship with or obtain information from a healthcare provider or other covered entity in order to develop or implement your AI, you may need to comply with the more specific privacy and data security requirements contained in HIPAA and in state medical privacy laws in California and Texas.

Similarly, the collection and use of genetic information and of biometric identifiers and information (based, for example, on facial recognition, fingerprints, or voiceprints) triggers a patchwork of other federal and state laws.

The United States also regulates the security of connected products and the networks and systems on which they rely. The FTC historically has been the primary enforcement agency responsible for ensuring the “reasonableness” of product, system, network, and data security under Section 5 of the FTC Act. The FDA also has published pre- and post-market guidance on cybersecurity expectations with respect to connected medical devices. Both the FTC and the FDA recognize that responsibility for protecting consumers against cyber threats applies to the entire product lifecycle—from initial design through vulnerability assessments, patching, and end-of-life considerations.

European Union
If you have a presence in the EU, offer services or goods there, or monitor the behavior of individuals there, you may be subject to the new EU General Data Protection Regulation (“GDPR”; see our checklist on this topic)—a complex law backed by fines of up to 4 percent of global annual turnover or €20,000,000 (whichever is greater), obligations to appoint local representatives and data protection officers, and more. It contains strict limits and conditions on the collection, use, and sharing of health data, genetic data, and biometric data, and requires extensive internal policies and procedures, and even the building of “data portability” features allowing individuals to export their data to rival services.

The EU’s “cookie rule” also prohibits most storage of data to, or reading of data from, Internet-connected devices without prior informed consent. Finally, many EU countries also have confidentiality rules that further restrict the collection and use of patient data, plus detailed health cybersecurity rules, such as a French law that requires all services hosting patient data to have first obtained Ministry of Health accreditation.

China
Healthcare data is also considered sensitive in China, and will soon be subject to more stringent requirements under the Information Security Technology – Personal Information Security Specification, in addition to existing data protection and cybersecurity obligations imposed by China’s Cybersecurity Law (see our recent post on this topic).

China also has regulations governing medical records and population health information, such as the Medical Institution Medical Records Administrative Rules and the Administrative Measures for Population Health Information.

Best Practice: Identify the jurisdictions in which you operate or offer your services, and those that present the highest risk to your company. Then assess what data you collect and the purposes for which you use it to identify which specific laws and regulations apply.

2. How do you ensure that you have the necessary rights to collect, use, and disclose data in connection with your AI technologies?
When collecting information directly from users of the AI, you should be transparent about the types of information you collect, how you use it, whether and to whom you disclose it, and how you protect it. It is critical that these disclosures be accurate and include all material information.

When developing, training, and testing AI technologies, companies also look to existing data sources. If the company is using personal data that it previously collected, it should consider whether individuals had sufficient notice that the information would be used for purposes of developing and improving new kinds of digital health solutions. When obtaining this information from third-party sources, the company should consider contractual representations and warranties that ensure all necessary notices, consents, rights, and permissions were provided and obtained to permit the company to use the data as contemplated in the agreement.

In some cases, it also might be appropriate to provide users with choices over how their information is collected, used, and shared. In the EU, for example, the GDPR outlaws consent statements that are buried in small print: for digital health purposes, consent will need to be clear, granular, and specifically opted into in order to be valid. EU regulators also are starting to hold recipients liable for inadequate due diligence—merely obtaining contractual assurances from data sources may not be enough.

Best Practice: Notice typically is provided through a privacy policy, but the interactive nature of AI technologies means that innovative just-in-time disclosures and privacy choices might be possible.

3. What are the fairness and ethical considerations for AI and data analytics in digital health solutions?
To maximize the potential of artificial intelligence and big data analytics, it is important to ensure that the data sets that are used to train AI algorithms and engage in big data analytics are reliable, representative, and fair.

Example: Some diseases disproportionately impact specific populations. If the data sets used to train the AI underlying your digital health offerings are not representative of these populations, the AI might not be effective. It also is critical that the data sets underlying your AI and data analytics are secured against unauthorized access or misuse.

In its report on “big data,” the FTC cautions companies to consider:

  • whether data sets are representative;
  • whether data models account for biases;
  • whether predictions based on big data are accurate; and
  • whether reliance on big data raises other ethical or fairness concerns.

Best Practice: Some companies are forming internal committees to ensure that their use of AI and data analytics is ethical and fair.

The EU also has detailed privacy rules impacting big data and AI. For instance, it grants all individuals a right to human review of fully automated decisions based on analysis of their data—and in many cases prohibits basing such decisions on sensitive data, such as health data, ethnicity, political opinions, or genetics. It also outlaws any disclosure or secondary use of data originally collected for a different purpose, unless certain conditions are met, including that the new use is “compatible” with the original purpose (e.g., the new use must be within the individuals’ reasonable expectations).

Of note, if the digital health solution is potentially regulated by the FDA or an equivalent regulatory body, there may be additional pre-market and post-market considerations (e.g., validation of clinical recommendations using AI, adverse event reporting; see our earlier checkup on this topic).