In the third installment of our series, Covington’s global cross-practice Digital Health team considers some additional key questions about Artificial Intelligence (AI), data privacy, and cybersecurity that companies across the life sciences and technology sectors should be asking to address the regulatory and commercial pieces of the complex digital health puzzle.

AI, Data Privacy, and Cybersecurity

1. Which data privacy and security rules apply?
There currently is no law or regulation specifically governing the collection, use, or disclosure of data for AI or the cybersecurity of AI technologies. As a result, digital health companies must assess how existing privacy and security rules will be interpreted and applied in the context of AI.

The applicable laws and regulations governing data privacy and security depend on a variety of criteria, including where you are located and where you are offering the AI technology.

Here are a few regional considerations for AI, data privacy, and cybersecurity in the U.S., the EU, and China:

United States
Because large datasets of information typically are necessary to train and test AI technologies, digital health companies that are developing or utilizing AI should consider whether individuals receive adequate notice and are provided appropriate choices over how their information is collected, used, and shared for such purposes. For example, a person might have different expectations about how their information is being collected and used depending on whether they are communicating with a digital health AI assistant provided by a hospital, pharmaceutical company, or running shoe manufacturer. Consequently, providers of such technologies should consider clearly and prominently explaining who is operating the assistant and the operator’s information practices.

Depending on whether and to what extent you have a business relationship with, or obtain information from, a healthcare provider or other covered entity in order to develop or implement your AI, you may need to comply with the more specific privacy and data security requirements of HIPAA and state medical privacy laws in California and Texas.

Similarly, the collection and use of genetic information, biometric identifiers and information (based, for example, on facial recognition or scans, fingerprints, or voiceprints) trigger a patchwork of other federal and state laws.

The United States also regulates the security of connected products and the networks and systems on which they rely. The FTC historically has been the primary enforcement agency responsible for ensuring the “reasonableness” of product, system, network, and data security under Section 5 of the FTC Act. The FDA also has published pre- and post-market guidance on cybersecurity expectations for connected medical devices. Both the FTC and the FDA recognize that responsibility for protecting consumers against cyber threats extends across the entire product lifecycle, from initial design through vulnerability assessments, patching, and end-of-life considerations.

European Union
If you have a presence in the EU, offer services or goods there, or monitor the behavior of individuals there, you may be subject to the new EU General Data Protection Regulation (“GDPR”; see our checklist on this topic), a complex law backed by fines of up to 4 percent of global annual turnover (or €20,000,000, if higher), obligations to appoint local representatives and data protection officers, and other requirements. It imposes strict limits and conditions on the collection, use, and sharing of health data, genetic data, and biometric data, and it requires extensive internal policies, procedures, and even the building of “data portability” features allowing individuals to export their data to rival services.
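
On the data portability point, GDPR Article 20 requires that individuals be able to receive their personal data in a structured, commonly used, machine-readable format. The following is a minimal sketch of what an export feature might produce; the record types and field names are hypothetical examples chosen for illustration, not a compliance blueprint:

    # Minimal, illustrative sketch of a data portability export: it bundles a
    # user's stored records into a structured, machine-readable JSON document.
    # The record types and field names below are hypothetical examples only.
    import json
    from datetime import date


    def export_user_data(user_id, records):
        """Bundle a user's stored records into a portable JSON document."""
        export = {
            "user_id": user_id,
            "generated_on": date.today().isoformat(),
            "records": records,
        }
        return json.dumps(export, indent=2)


    if __name__ == "__main__":
        sample_records = [  # hypothetical readings the user provided to the service
            {"type": "heart_rate", "value": 72, "recorded_at": "2018-01-15"},
            {"type": "step_count", "value": 8400, "recorded_at": "2018-01-15"},
        ]
        print(export_user_data("user-123", sample_records))

JSON is used here only because it is one common way to satisfy the “structured, commonly used, machine-readable” criterion; CSV or XML could serve equally well.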

The EU’s “cookie rule” also prohibits most storage of data to, or reading of data from, Internet-connected devices without prior informed consent. Finally, many EU countries also have confidentiality rules that further restrict the collection and use of patient data, plus detailed health cybersecurity rules, such as a French law that requires all services hosting patient data to have first obtained Ministry of Health accreditation.

China
Healthcare data is also considered sensitive in China, and will soon be subject to more stringent requirements under the Information Security Technology – Personal Information Security Specification, in addition to existing data protection and cybersecurity obligations imposed by China’s Cybersecurity Law (see our recent post on this topic).

China also has regulations governing medical records and population health information, such as the Medical Institution Medical Records Administrative Rules and the Administrative Measures for Population Health Information.

Best Practice: Identify the jurisdictions in which you operate or offer your services, and those that present the highest risk to your company. Then assess what data you collect and the purposes for which you use it to identify which specific laws and regulations apply.

2. How do you ensure that you have the necessary rights to collect, use, and disclose data in connection with your AI technologies?
When collecting information directly from users of the AI, you should be transparent about the types of information you collect, how you use it, whether and to whom you disclose it, and how you protect it. It is critical that these disclosures be accurate and include all material information.

When developing, training, and testing AI technologies, companies also look to existing data sources. If the company is using personal data that it previously collected, it should consider whether individuals had sufficient notice that the information would be used to develop and improve new kinds of digital health solutions. When obtaining this information from third-party sources, the company should consider obtaining contractual representations and warranties that all necessary notices, consents, rights, and permissions were provided and obtained to permit the company to use the data as contemplated in the agreement.

In some cases, it also might be appropriate to provide users with choices about how their information is collected, used, and shared. In the EU, for example, the GDPR outlaws consent statements that are buried in small print: for digital health purposes, consent will need to be clear, granular, and specifically opted into in order to be valid. EU regulators also are starting to hold data recipients liable for inadequate due diligence; merely obtaining contractual assurances from data sources may not be enough.

Best Practice: Notice typically is provided through a privacy policy, but the interactive nature of AI technologies means that innovative just-in-time disclosures and privacy choices might be possible.

3. What are the fairness and ethical considerations for AI and data analytics in digital health solutions?
To maximize the potential of artificial intelligence and big data analytics, it is important to ensure that the data sets used to train AI algorithms and to perform big data analytics are reliable, representative, and fair.

Example: Some diseases disproportionately impact specific populations. If the data sets used to train the AI underlying your digital health offerings are not representative of these populations, the AI might not be effective. It also is critical that the data sets underlying your AI and data analytics are secured against unauthorized access or misuse.

In its report on “big data,” the FTC cautions companies to consider:

  • whether data sets are representative;
  • whether data models account for biases;
  • whether predictions based on big data are accurate; and
  • whether reliance on big data raises other ethical or fairness concerns.
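
These questions lend themselves to simple, automated checks during data preparation. The following Python sketch, offered purely as an illustration rather than a fairness test, compares the composition of a hypothetical training set against assumed reference population shares and flags groups that appear under-represented; the group labels, reference figures, and tolerance threshold are all assumptions chosen for the example:

    # Illustrative sketch only: compares the make-up of a hypothetical training
    # data set against assumed reference population shares and flags groups that
    # appear under-represented. The labels, reference shares, and tolerance are
    # assumptions for this example, not recommended values.
    from collections import Counter

    # Hypothetical training records, each tagged with a demographic group.
    training_records = [
        {"patient_id": 1, "group": "A"},
        {"patient_id": 2, "group": "A"},
        {"patient_id": 3, "group": "A"},
        {"patient_id": 4, "group": "A"},
        {"patient_id": 5, "group": "C"},
    ]

    # Assumed share of each group in the population the AI is meant to serve.
    reference_shares = {"A": 0.50, "B": 0.30, "C": 0.20}

    TOLERANCE = 0.10  # flag a group when its share trails the reference by more than this


    def underrepresented_groups(records, reference, tolerance):
        """Return groups whose share of the data set trails the reference share."""
        counts = Counter(record["group"] for record in records)
        total = len(records)
        flagged = {}
        for group, expected in reference.items():
            observed = counts.get(group, 0) / total if total else 0.0
            if expected - observed > tolerance:
                flagged[group] = {"expected": expected, "observed": round(observed, 2)}
        return flagged


    if __name__ == "__main__":
        for group, shares in underrepresented_groups(
            training_records, reference_shares, TOLERANCE
        ).items():
            print(f"Group {group} may be under-represented: {shares}")

A check like this would feed into a broader governance process, such as the internal review committees noted below, rather than stand on its own.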

Best Practice: Some companies are forming internal committees to ensure that their use of AI and data analytics is ethical and fair.

The EU also has detailed privacy rules affecting big data and AI. For instance, the GDPR grants all individuals a right to human review of fully automated decisions based on analysis of their data, and in many cases it prohibits basing such decisions on sensitive data, such as health data, ethnicity, political opinions, or genetic data. It also restricts any disclosure or secondary use of data for purposes other than those for which the data originally was collected, unless certain conditions are met, including that the new use is “compatible” with the original purposes (e.g., the new use must be within the individuals’ reasonable expectations).

Of note, if the digital health solution is potentially regulated by the FDA or an equivalent regulatory body, there may be additional pre-market and post-market considerations (e.g., validation of clinical recommendations using AI, adverse event reporting; see our earlier checkup on this topic).

Covington Digital Health Team

Stakeholders across the healthcare, technology and communications industries seek to harness the power of data and information technology to improve the effectiveness and efficiency of their products, solutions and services, create new and cutting-edge innovations, and achieve better outcomes for patients. Partnering with lawyers who understand how the regulatory, IP, and commercial pieces of the digital health puzzle fit together is essential. Covington offers unsurpassed breadth and depth of expertise and experience concerning the legal, regulatory, and policy issues that affect digital health products and services. To learn more, click here.

Lindsey Tonsager

Lindsey Tonsager co-chairs the firm’s global Data Privacy and Cybersecurity practice. She advises clients in their strategic and proactive engagement with the Federal Trade Commission, the U.S. Congress, the California Privacy Protection Agency, and state attorneys general on proposed changes to data protection laws, and regularly represents clients in responding to investigations and enforcement actions involving their privacy and information security practices.

Lindsey’s practice focuses on helping clients launch new products and services that implicate the laws governing the use of artificial intelligence, data processing for connected devices, biometrics, online advertising, endorsements and testimonials in advertising and social media, the collection of personal information from children and students online, e-mail marketing, disclosures of video viewing information, and new technologies.

Lindsey also assesses privacy and data security risks in complex corporate transactions where personal data is a critical asset or data processing risks are otherwise material. In light of a dynamic regulatory environment where new state, federal, and international data protection laws are always on the horizon and enforcement priorities are shifting, she focuses on designing risk-based, global privacy programs for clients that can keep pace with evolving legal requirements and efficiently leverage the clients’ existing privacy policies and practices. She conducts data protection assessments to benchmark against legal requirements and industry trends and proposes practical risk mitigation measures.