AI Standards Update: NIST Solicits Comments on the Four Principles of Explainable Artificial Intelligence and Certain Other Developments

The National Institute of Standards and Technology (“NIST”) is seeking comments on the first draft of the Four Principles of Explainable Artificial Intelligence (NISTIR 8312), a white paper that seeks to define the principles that capture the fundamental properties of explainable AI systems.  NIST will be accepting comments until October 15, 2020.

In February 2019, the Executive Order on Maintaining American Leadership in Artificial Intelligence directed NIST to develop a plan that would, among other objectives, “ensure that technical standards minimize vulnerability to attacks from malicious actors and reflect Federal priorities for innovation, public trust, and public confidence in systems that use AI technologies; and develop international standards to promote and protect those priorities.”  In response, NIST issued a plan in August 2019 for prioritizing federal agency engagement in the development of AI standards, identifying seven properties that characterize trustworthy AI—accuracy, explainability, resiliency, safety, reliability, objectivity, and security.

NIST’s white paper focuses on explainability and identifies four principles underlying explainable AI.

  • Explanation. AI systems must supply evidence, support, or reasoning for their outputs.  Researchers have developed different models to explain AI systems, such as self-explainable models where the models themselves are the provided explanation.
  • Meaningful. The recipient must understand the AI system’s explanation.  This principle is a contextual requirement—for example, different types of user groups may require different explanations, or a particular user’s prior knowledge, experiences, and mental processes may affect meaningfulness.  Hence, tailoring is necessary for effective communication.
  • Explanation Accuracy. The explanation must correctly reflect the AI system’s process for generating its output.  In contrast to decision accuracy, explanation accuracy is not concerned with whether the system’s judgment is correct; rather, it concerns how the system reached its conclusion.  The principle is also contextual—there may be different explanation accuracy metrics for different types of groups and users.
  • Knowledge Limits. The AI system must identify cases in which it was not designed or approved to operate, or in which its answers are not reliable.  This ensures that reliance on an AI system’s decision processes occurs only where it is appropriate.

The white paper states that explanations generally can be described along two dimensions: the amount of time the consumer has to respond to the information and the level of detail in the explanation.  Although flexibility in the range and types of explanations will be necessary, NIST provides a non-exhaustive list of explanation categories, drawing from academic literature:

  • User Benefit.  This type of explanation is designed to inform a user about an AI system output, such as providing the reason a loan application was approved or denied to the applicant.
  • Societal Acceptance.  This type of explanation is designed to generate trust and acceptance by society, to provide an increased sense of comfort in the system.
  • Regulatory and Compliance.  This type of explanation assists with audits for compliance with regulations, standards, and legal requirements, such as providing detailed explanation to a safety regulator to evaluate the output of self-driving cars.
  • System Development.  This type of explanation assists with developing, improving, debugging, or maintaining an AI system by technical staff and product managers.
  • Owner Benefit.  This type of explanation benefits the operator of a system, such as a recommendation system that lists movies to watch and explains the selection based on previously viewed items.

After explaining the core concepts of explainable AI systems, NIST explores the explainability of human decision processes.  NIST states that humans demonstrate only a limited ability to meet the four principles described above, which provides a benchmark for evaluating explainable AI systems and informs the development of reasonable metrics.  According to NIST, evaluating explainability in the context of human decision-making also may lead to a better understanding of human-machine collaboration and interfaces.

Although the white paper does not provide detailed guidance for organizations implementing AI systems, it represents an important step by NIST to develop trustworthy AI tools.  Documents from other jurisdictions on explaining AI provide more detailed guidance aimed at helping organizations operationalize the concept of explainable AI.  The UK Information Commissioner’s Office (“ICO”), for example, issued on May 20, 2020 its final guidance on explaining decisions made with AI.  Similar to the NIST white paper, the ICO recognizes that there are different underlying principles to be followed and different models of AI explanation.  The ICO takes these principles one step further, however, and provides more detailed guidance on how to explain AI in practice, depending on the type of AI system used.

Some Legislative Developments Relating to NIST

Efforts to advance the development of AI standards through NIST have been a topic of increasing focus in Congress.  Recent bills include Sen. Cory Gardner’s (R-CO) Advancing Artificial Intelligence Research Act of 2020, which would appropriate $250 million to NIST for each of fiscal years 2021 through 2025 for the creation of a national program to advance AI research, and Rep. Eddie Bernice Johnson’s (D-TX-30) National Artificial Intelligence Initiative Act of 2020, which would appropriate over $50 million to NIST for each of fiscal years 2021 through 2025 for the research and development of voluntary standards for trustworthy AI systems, among other activities.  The House Appropriations Committee also released the draft fiscal year 2021 Commerce, Justice, Science, and Related Agencies funding bill, which includes $789 million for core NIST research activities, an increase of $35 million above the FY 2020 enacted level.

To learn more about AI, please access our AI Toolkit.

FDA Issues Final Guidance on Multiple Function Digital Health Software and Other Devices

On July 28, 2020, FDA announced the publication of a final guidance on Multiple Function Device Products: Policy and Considerations that outlines FDA’s evolving approach to the regulation of multiple function device products, including software.

The concept of “multiple function” products was introduced by the 21st Century Cures Act (“Cures Act”) of 2016, which added section 520(o) to the FD&C Act.  Multiple function device products are those with multiple functions that each have a distinct purpose in the product (e.g., collection, storage, analysis) where only certain functions are actively regulated by FDA.  With regard to software, section 520(o) of the FD&C Act gives FDA the authority to review the non-device function(s) of a multiple function device product to assess the impact of the non-device function(s) on the device function(s).

Here are the key takeaways on FDA’s newly issued final guidance:

  • While the Cures Act language distinguishes device functions from non-device functions, FDA adopts a final policy that distinguishes between device functions and “other functions.”  FDA says that “other functions” include not only non-device functions, but also device functions that are exempt from premarket review (i.e., 510(k)-exempt), as well as device functions that fall within FDA’s exercise of enforcement discretion.
  • The same approach should apply to FDA’s assessment of all multiple function device products, whether software, hardware or both.
  • For multiple function device products, manufacturers should perform impact assessments for all “other functions” to assess any effects of the other functions on the device functions of the product – reaching a conclusion of no impact, positive impact, or negative impact.  These assessments should be documented as part of the device’s design validation process.  In the event that an impact is found, the extent of the impact should be evaluated and included in the manufacturer’s hazard analysis.
  • FDA expects that impact assessments be included as part of a premarket submission when there is a (i) negative impact or (ii) positive impact that the manufacturer seeks to include in the product’s labeling.  For a finding of no impact or a positive impact that the manufacturer does not seek to include in the product’s labeling, FDA does not expect to see the impact assessment as part of the premarket submission, although FDA may review the documentation as part of an inspection.
  • FDA broadly defines a potential “negative” impact of an “other function” on the device function(s).  Thus, as a practical matter, it is likely that it will be difficult for a manufacturer to conclude that the “other functions” have no impact on the device function.  We anticipate that manufacturers will need to submit impact assessments for a large number of multiple function device products.
  • One open question is the possible impact of FDA determining that a company failed to submit an impact assessment that, in FDA’s view, was required as a part of the premarket review of a multiple function device product.  For example, if a company makes a good faith determination that a non-device function has no impact on the device functions of a multiple function device product, and submits a 510(k) or PMA without an impact assessment, but FDA later disagrees with that determination, would the agency take the position that the 510(k)/PMA was ineffective and not properly obtained?  Similarly, would FDA exercise enforcement discretion in such a situation to allow the company to keep the product on the market while it submits the impact assessment and other documentation associated with the non-device functions?

To help companies navigate these issues, FDA’s guidance provides several case studies of multiple function device products and what FDA would expect to see as part of a premarket submission for the device functions.  Companies developing multiple function device products will want to ensure that they consider all aspects of the FDA’s final guidance.

UK ICO publishes guidance on Artificial Intelligence

On July 30, 2020, the UK Information Commissioner’s Office (“ICO”) published its final guidance on Artificial Intelligence (the “Guidance”).  The Guidance sets out a framework for auditing AI systems for compliance with data protection obligations under the GDPR and the UK Data Protection Act 2018.  The Guidance builds on the ICO’s earlier commitment to enable good data protection practice in AI, and on previous guidance and blogs issued on specific issues relating to AI (for example, on explaining decisions made with AI, trade-offs, and bias and discrimination, all covered in Covington blogs).


California Legislature Advances Privacy Legislation

Our colleagues at the Inside Privacy blog have summarized a proposed bill in California (the Genetic Information Privacy Act) that would impose certain privacy obligations on direct-to-consumer genetic testing companies that go beyond the California Consumer Privacy Act.  This summary may be of interest to entities that process genetic data in California.

The “Agile” Path to Market: An Alternative Approach to Food Industry R&D

Software development can teach us a lot about streamlining the research and development (R&D) process in other industries.  “Agile development”, or the process of dividing up an R&D project into smaller, more iterative segments instead of planning the entire project at its inception, is a hallmark of the software development process.  In a recently published article in Food and Beverage Insider entitled “The ‘Agile’ Path to Market: An Alternative Approach to Food Industry R&D”, Nigel Howard and Chase Brennick show how agile development can be valuable for R&D in many different contexts.   The article focuses on the suitability of agile development for R&D within the food industry, but illustrates the benefits of an agile R&D process for industries that are subject to evolving consumer preferences and rapidly changing regulatory landscapes – characteristics that are also present for companies in the digital-health space.  As described in the article, agile development could be a powerful tool to help digital health companies make their R&D more nimble and maintain greater oversight of the development process on a near-real-time basis.

Germany Prepares New Law for Patient Data Protection and Increased Digitalisation in Healthcare and for “Data Donations” for Research Purposes

On 3 July 2020, the German parliament passed a draft bill (German language) for patient data protection and for more digitalisation in the German healthcare system (Patientendaten-Schutz-Gesetz). The bill is still completing the legislative procedure and is expected to enter into force in autumn 2020.

One of the main objectives of the bill is to make everyday life easier for patients and healthcare professionals by increasing use of innovative digital applications, while protecting sensitive health data. It is assumed that increased digitalisation in the healthcare sector will open up opportunities at all levels of healthcare, both for patients and healthcare providers. As such, it is expected that digitalisation will help to take care of the growing number of chronically ill patients, to relieve the burden on specialists, to make better use of resources and to prepare the healthcare system for the challenges of the future.

A series of documents that so far has only been provided and used in hardcopy, such as certain prescriptions or patient files, will now be made available in digital form. In addition, a special app shall be made available to enable patients to redeem digital prescriptions in pharmacies. Alternatively, patients may present a 2D barcode on paper. In this case, the prescription will also be transmitted to the pharmacy in digital format. Further, the law aims to enable patient referrals from one doctor to medical specialists to be made in digital form (currently this is done in writing and requires the referral to be collected from the doctor’s office).

From 2021, statutory health insurance providers will be obliged to offer their insured persons electronic patient files (ePA). To ensure that this is effectively used, patients may request that their doctor include their medical records in their personal ePA. In addition, from 2022, the ePA will also be able to display other information that is currently only documented in hardcopy, for example, maternity logs, paediatric health records and vaccination cards. To incentivise doctors, they will be paid to use ePAs. Patients will ultimately have control over their ePAs and be able to decide which data is stored there and who will have access. For example, patients may specify that a doctor may have access to the ePA, but that certain findings are not displayed. The protection of the processed patient data is ensured by a comprehensive, gap-free allocation of responsibilities along the processing chain.

From 2023 onwards, patients will have the option of voluntarily making the data in their ePAs available to researchers as part of a “data donation”. The donation could become an important element to increasing the availability of real-world evidence on new treatments and medicines. Informed consent will be required from each of the patients, and it will be possible for that consent to be given digitally. Patients will be free to choose the scope of their data donation and can limit access to certain information. The data that is released will be restricted to certain research purposes, like research on improving the quality of healthcare.

This new law will have a significant impact on the digitalisation of the entire German healthcare system. It will also create a better infrastructure for research with patient data and for collecting real-world-evidence for scientific and regulatory purposes.

Healthcare companies, providers and payors as well as technology and research companies should closely follow the next steps of this legislative development in Germany.

EHR Interoperability: Public Health Benefits & Privacy Considerations

Public-health researchers, officials and medical professionals rely on data to track outbreaks, advance research, and evaluate prospective treatments. One critical source of patient data comes from electronic health records (EHRs).  EHR data in the U.S. has traditionally been siloed within hospital IT systems, but the federal government and key healthcare stakeholders have recently ramped up efforts to implement greater EHR data-sharing capabilities and improve patient access to their own electronic health information.  Though the potential public-health benefits of EHR interoperability are many, these stakeholders must carefully balance these benefits against their imperative to protect and maintain the privacy of patient health data.

Public-Health Benefits of Interoperability

Electronic exchange of de-identified EHR data has already yielded real-world public-health benefits.  For example, effective sharing of patient data can play a major role in large-scale responses to pandemics such as COVID-19.  A group of medical professionals published an article in the Journal of the American Medical Informatics Association illustrating the importance of cross-border data sharing in responding to pandemics.  The authors note that EHR travel screening questionnaires can help identify patients who have recently visited areas where community spread is present.  This travel data can be used to track the spread of the disease and evaluate the effectiveness of travel restrictions and other mitigation measures.  Based on this data, public-health leaders can determine how to allocate resources such as masks and pop-up hospitals.

Researchers can also use large-scale patient data to evaluate the efficacy of potential treatments.  A team of researchers at Columbia University analyzed 30 years of medical records (representing over six million patients) to determine the effectiveness of hydroxychloroquine as a treatment for patients hospitalized with COVID-19.  Recognizing the utility of EHR data as a tool for pandemic research, several EHR vendors – including Epic and Cerner (through its HealtheDataLab) – are making aggregated patient data available to researchers in the search for treatments and vaccines for COVID-19.

For medical providers, data-sharing across sites enables more efficient patient care, which in turn helps manage patient loads.  The U.K.’s National Health Service recently contracted for Cerner to supply its Millennium EHR system in the temporary Nightingale Hospital in London.  Providers at Nightingale Hospital can access patient records and results from other sites due to data-sharing capabilities between the temporary hospital and other providers in the Barts Health NHS Trust, which already incorporates data from local and community care providers via Cerner’s Health Information Exchange.  Providers at the pop-up hospital plan to use this data to “drive quicker discharge of recovered patients and maximi[z]e hospital capacity.”

Access to unified EHR datasets, when properly leveraged, can even help to triage patients.  In Israel, Maccabi Healthcare Services – in a partnership with AI company Medial EarlySign – is using data gleaned from millions of Maccabi’s patient health records to predict which of its 2.4 million members are high-risk for severe COVID-19 complications, so those patients can be fast-tracked for testing.  Maccabi is currently talking to U.S. entities about using the system to fast-track their own patients for testing.

Balancing The Public-Health Benefits Against Patient Privacy Considerations

Patient privacy should be top-of-mind when leveraging EHR data in pursuit of the aforementioned (or other) public-health benefits.  Legal frameworks such as the Health Insurance Portability and Accountability Act (HIPAA) and the EU’s General Data Protection Regulation (GDPR) safeguard individuals’ rights with respect to the exchange of their protected health information (“PHI”) (which includes individually identifiable health information generated by certain covered entities).  While privacy laws could have the effect of stemming the flow of EHR data-sharing, these laws are intended to strike a balance between individual rights and the public health: The U.S. Department of Health and Human Services (HHS) describes HIPAA as “balanced so that it permits the disclosure of personal health information needed for patient care and other important purposes.”[1]  Thus, these privacy laws should not prevent use of EHR data for the public good if the proper measures are taken with respect to individual patients’ data.

To ensure such measures are in place, EHR interoperability stakeholders must consider a couple of key issues:

  1. De-Identifying Patient Data

Entities using and sharing EHR data may use proper de-identification or anonymization techniques to steer clear of privacy law violations.  Different laws present different legal frameworks for proper protection of EHR data.  In the U.S., HHS clarifies that HIPAA “does not restrict the use or disclosure of de-identified health information, as it is no longer considered protected health information.”  De-identification refers to the process of removing personal identifiers that could be used to trace data back to the individual.   This can include removal of names, geographic identifiers smaller than a state, telephone numbers and e-mail addresses, medical record numbers, and other types of potentially identifying data.
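As an illustrative sketch only (the field names below are hypothetical examples, not HIPAA’s regulatory checklist), removing direct identifiers from a patient record along Safe Harbor lines might look like:

```python
# Hypothetical sketch of de-identification: strip direct identifiers
# from a patient record before sharing. The set of fields is a toy
# example, not a substitute for HIPAA's Safe Harbor identifier list.

DIRECT_IDENTIFIERS = {
    "name", "street_address", "city", "zip_code",
    "phone", "email", "medical_record_number",
}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed.

    Geographic detail finer than state level is dropped; broader
    fields (state, diagnosis code, age band) are retained.
    """
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

record = {
    "name": "Jane Doe",
    "zip_code": "02139",
    "state": "MA",
    "diagnosis": "J45.40",   # ICD-10 code
    "age_band": "30-39",
}
print(deidentify(record))  # identifiers removed; state, diagnosis, age band kept
```

In practice, de-identification also requires attention to quasi-identifiers (rare diagnoses, small geographic units) that can re-identify individuals in combination.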

From a European perspective, Recital 26 of the GDPR states that “the principles of data protection should…not apply to anonymous information, namely information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable.”  Anonymization under the GDPR is a difficult standard to achieve: Data is not treated as anonymous under the GDPR unless both (1) the data is anonymized in such a way as to make identification of the subject impossible (or extremely impractical), even for the party responsible for anonymizing the data, and (2) the process is irreversible. However, the GDPR also includes the concept of “pseudonymization” that may be useful in mitigating the legal risks posed by data-sharing. Article 4(5) of the GDPR defines pseudonymization as “the processing of personal data in such a manner that personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately.”  Because pseudonymized data may be re-identifiable if the “additional information” is not properly separated from the de-identified data set, it poses a greater risk than anonymization.  Nevertheless, if the proper technical and organizational measures are implemented to protect pseudonymized data, such data may be usable for public-health purposes: under some circumstances, Article 6(4)(e) of the GDPR permits the processing of pseudonymized data “for a purpose other than that for which the personal data [was] collected.”
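A minimal sketch of the Article 4(5) idea, assuming a toy record layout: direct identifiers are replaced with random tokens, and the token-to-identity mapping (the “additional information”) is held apart from the shared data set.

```python
import secrets

# Hypothetical sketch of pseudonymization in the Art. 4(5) GDPR sense:
# replace the patient identifier with a random token and keep the
# token-to-identity mapping separately, under its own access controls.

def pseudonymize(records: list, key_store: dict) -> list:
    """Replace 'patient_id' with a random token in each record.

    The mapping from token back to the real identifier is written to
    key_store, which must be stored apart from the shared data.
    """
    shared = []
    for rec in records:
        token = secrets.token_hex(8)
        key_store[token] = rec["patient_id"]   # kept separately
        out = dict(rec)
        out["patient_id"] = token
        shared.append(out)
    return shared

key_store = {}   # in practice: a separately secured system
shared = pseudonymize([{"patient_id": "MRN-1001", "lab_result": 7.2}], key_store)
print(shared[0]["patient_id"])  # random token, not the original MRN
```

The design choice here mirrors the legal test: re-identification is possible only with the separately held key store, so protecting that store is what keeps the shared data pseudonymous rather than identifiable.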

When using patient data for public-health purposes, the data should be protected to the greatest extent compatible with the research, and the user and sharer of such data should ask whether their strategy of de-identification, anonymization, or pseudonymization (in conjunction with the data security measures discussed below) is sufficient to protect patient privacy.

  2. Data Security

Securing patient data is critical.  Although patient data is less likely to be subject to HIPAA or the GDPR if it is de-identified and/or aggregated, even de-identified and aggregated data should be secured to mitigate the risk that the data could be traced back to individual patients in the event of a security breach.  The steps to properly secure EHR data can be largely broken into three categories: (1) physical safeguards (e.g., locks on servers and laptops), (2) administrative safeguards (e.g., designing comprehensive security plans, conducting security training), and (3) technical safeguards (e.g., firewalls, two-factor authentication).  When storing and sharing EHR data, some combination of all three categories of safeguards is likely needed to ensure proper data security.


Although obstacles remain in the push to implement EHR interoperability, the public-health benefits of effective patient data-sharing are undeniable.  Aggregated patient data can enable nimbler pandemic responses, streamline the research process, and help hospitals provide more efficient and effective treatment.  With the pandemic driving stakeholders across the healthcare industry to make patient data more accessible, large-scale aggregated EHR data may someday be widely available to benefit public health efforts.  At the same time, the use and sharing of such data presents real questions of privacy, and safeguards will need to be put in place to protect and secure patient data.  As we move toward a world with more readily accessible healthcare data, it will be important to maintain a balance that maximizes the public-health benefits of such data while also upholding the privacy rights of individuals.

[1] In recognition of this balance, and in light of the acute public-health needs presented by the COVID-19 pandemic, HHS announced that it would relax HIPAA enforcement against certain covered entities that chose to participate in Community-Based Testing Sites during the pandemic.

SAMHSA Revises Part 2 Regulations for the Confidentiality of SUD Patient Records

On July 13, 2020, the U.S. Department of Health and Human Services, Substance Abuse and Mental Health Services Administration (SAMHSA) issued a final rule revising the Confidentiality of Substance Use Disorder Patient Records regulations located at 42 C.F.R. Part 2, commonly referred to as “Part 2.”  Under Part 2, federally assisted substance use disorder (SUD) treatment programs are prohibited from disclosing patient identifying information without the individual’s written consent except in a few limited circumstances.  According to SAMHSA, the “emergence of the opioid crisis, with its catastrophic impact” has underscored “the need for thoughtful updates to [Part 2].”  The final rule also “takes important first steps toward the greater flexibility for information sharing envisioned by Congress in its passage of § 3221 of the [Coronavirus Aid, Relief, and Economic Security (CARES)] Act,” discussed in more detail below.

The Part 2 regulations were originally promulgated in 1975 to ensure the confidentiality of SUD treatment records, prior to the enactment of broader health privacy laws and regulations, such as the regulations promulgated under the Health Insurance Portability and Accountability Act of 1996 (HIPAA).  The Part 2 restrictions are generally more stringent than the HIPAA restrictions, since HIPAA allows the disclosure of protected health information for certain purposes without an individual’s authorization.  SAMHSA’s final rule revises the Part 2 regulations in an effort to more closely align the privacy requirements of Part 2 and HIPAA, as well as to better address the needs of individuals with SUD and facilitate coordinated care.

The final rule does not change Part 2’s general framework for protecting the confidentiality of SUD patient records.  Disclosure of SUD treatment records is still prohibited without patient consent, except in a few limited circumstances (e.g., medical emergencies).  In addition, law enforcement officials are still prohibited from using SUD patient records in a criminal prosecution brought against the patient, unless they obtain a court order.

The final rule does revise certain aspects of the Part 2 regulations:

  • Applicability: The final rule clarifies applicability to certain non-Part 2 providers (i.e., providers of treatments that are not regulated as programs under Part 2).  Pursuant to the final rule, records created by a non-Part 2 provider containing information about a SUD—based on such provider’s own patient encounters—are not subject to the Part 2 restrictions, as long as the non-Part 2 provider segregates any specific SUD records received from a Part 2 program.  In addition, the definition of “Records” is modified to create an exception for information conveyed orally by a Part 2 program to a non-Part 2 provider for treatment purposes with patient consent.  Under the revised definition, such information does not become subject to Part 2 restrictions just because it is reduced to writing by the non-Part 2 provider.
  • Consent Requirements: The final rule revises the consent requirements to permit patients to consent to the disclosure of their information to any entity (e.g., the Social Security Administration) without requiring that the consent identify a specific individual to receive the information on behalf of the entity.  The final rule also includes special instructions for consents relating to disclosures to information exchanges and research institutions, as well as additional guidance for consents relating to disclosures for the purpose of case management and care coordination.
  • Re-Disclosure: The final rule changes the notice language required to accompany disclosures to clarify that non-Part 2 providers are not required to redact SUD information contained in a non-Part 2 record, and to allow for re-disclosure with express written consent or as otherwise permitted by the regulations.
  • Disclosure Permitted with Written Consent: The final rule expressly permits disclosures for purposes of “payment and health care operations” with the patient’s written consent, and provides an illustrative list of 18 activities that are considered payment and health care operations, such as “patient safety activities,” “activities relating to addressing fraud, waste and/or abuse,” and “care coordination and/or case management services in support of payment or health care operations.”
  • Disclosure to Central Registries and PDMPs: The final rule amends the disclosure requirements to (A) allow non-opioid treatment providers to access central registries, in order to determine whether a patient is already receiving opioid treatment; and (B) allow opioid treatment providers to disclose dispensing and prescribing data, as required by applicable state laws, to prescription drug monitoring programs (PDMPs), subject to patient consent.
  • Medical Emergencies: The final rule expands a “bona fide medical emergency” to include situations where normal operation of a Part 2 program is suspended, and the program is unable to obtain the required written consent due to a state of emergency declared by the state or federal authority as the result of a natural or major disaster.  Disclosures without consent are permitted until the Part 2 program is able to resume operations.
  • Research: The final rule amends the research exception to permit disclosures by a HIPAA covered entity or business associate, without patient consent, to individuals or organizations who are not HIPAA covered entities or subject to the Common Rule, for the purpose of conducting scientific research, provided the disclosures are made in accordance with the HIPAA requirements at 45 C.F.R. § 164.512(i).
  • Audit and Evaluation: The final rule clarifies the specific situations that fall within the scope of permissible disclosures for audits and/or evaluations by federal, state, and local government agencies and third-party payers.
  • Undercover Agents and Informants: The final rule amends the period of time for court-ordered placement of an undercover agent or informant in a Part 2 program from 6 months to 12 months, and clarifies that the time period begins when the agent/informant is placed in the Part 2 program.

In addition, the final rule provides guidance for Part 2 program employees, volunteers, and trainees regarding the receipt of incidental communications from SUD patients on personal devices.  SAMHSA recognizes that patients may reach out to employees through personal devices or email accounts that are not used in the regular course of business.  The guidance clarifies that such personal devices/accounts do not become part of the Part 2 record or subject to Part 2’s sanitization standards.  Instead, the employees (or volunteers or trainees) should immediately delete the communication from their personal device/account and respond to the patient only through an authorized channel provided by the Part 2 program, unless responding from the personal device/account is in the best interest of the patient.  If the communication contains patient identifying information, it should be forwarded to such authorized channel and then deleted.

Finally, SAMHSA notes that Section 3221 of the CARES Act amended several provisions of the Part 2 authorizing statute, including the requirements for consent, restrictions on the use of records in legal proceedings, and penalties for violations of the statute under 42 U.S.C. § 290dd–2(b), (c), and (f), respectively.  The amended provisions allow greater flexibility for the sharing of SUD records, but the provisions do not go into effect until March 27, 2021.  Therefore, SAMHSA has stated that it intends the standards in this month’s final rule to “serve as interim and transitional standards,” until SAMHSA is able to engage in future rulemaking to implement the new changes enacted by the CARES Act.

Bipartisan Bill Seeks to Create National Artificial Intelligence Research Resource Task Force

On June 4, 2020, Representatives Anna Eshoo (D-CA-18), Anthony Gonzalez (R-OH-16), and Mikie Sherrill (D-NJ-11) introduced the National AI Research Resource Task Force Act.  This bipartisan bill would create a task force to propose a roadmap for developing and sustaining a national research cloud for AI.  The cloud would help provide researchers with access to computational resources and large-scale datasets to foster the growth of AI.

“AI is shaping our lives in so many ways, but the true potential of it to improve society is still being discovered by researchers,” explained Rep. Eshoo. “I’m proud to introduce legislation that reimagines how AI research will be conducted by pooling data, compute power, and educational resources for researchers around our country.  This legislation ensures that our country will retain our global lead in AI.”

The sponsors of the bill cited the recommendations from the National Security Commission on AI submitted to Congress in March 2020 as one of the original motivations for the bill.  The Commission described China’s increasing investment in global AI research and development and recommended that Congress launch a “National AI Research Resource infrastructure.”

The bill directs that the task force be composed of equal representation from academia, government, and industry.  The roadmap would specify how the U.S. should build, deploy, administer, and sustain the research cloud.  There is particular focus on proper infrastructure for the portal, including security requirements and capabilities to facilitate access to computing resources for researchers across the country.  The bill also directs the task force to consult with key organizations like the National Science Foundation and the National Institute of Standards and Technology.

Notably, the bill has garnered support from leading technology companies and research universities that work on AI-related issues across all sectors, including healthcare.  Senators Rob Portman (R-OH) and Martin Heinrich (D-NM) have also introduced a companion bill—National Cloud Computing Task Force Act (S. 3890)—in the Senate.

To learn more about AI, please access our AI Toolkit.

Bipartisan Bill Introduced in Senate Would Regulate COVID-19 Apps

Senators Maria Cantwell (D-WA) and Bill Cassidy (R-LA) introduced bipartisan legislation this week to address privacy issues in the COVID-19 era.  The proposal, entitled the “Exposure Notification Privacy Act,” would regulate “automated exposure notification services” developed to respond to COVID-19.  This bipartisan legislation comes on the heels of dueling privacy proposals from both political parties.  We previously analyzed on this blog both the Republican “COVID-19 Consumer Data Protection Act” proposal introduced by Senate Commerce Chairman Roger Wicker (R-MS) and the Democratic “Public Health Emergency Privacy Act” proposal.

Below are descriptions of the notable provisions in the Exposure Notification Privacy Act:

  • In contrast to the Wicker proposal and the proposal introduced by House and Senate Democrats, both of which would cover symptom tracking and other apps, this new bipartisan proposal would be narrower, regulating only operators of so-called “automated exposure notification services.”  This is defined as any website or mobile application designed for use, or marketed, to digitally notify “an individual who may have become exposed to an infectious disease.”  Operators can be both for-profit and non-profit entities.
  • However, the definition of covered personal data is broader than some earlier proposals that only covered certain categories of health and location data. The new proposal covers all data linked or reasonably linkable to any individual or device that is “collected, processed, or transferred in connection with an automated exposure notification service.”  This definition is broader than the Republican proposal, which defined covered data to include health information, geolocation data, and proximity data.  It is also broader than the Democratic proposal, which included the same data elements as the Republican proposal while also covering certain medical testing data and contact information.
  • Like the Democratic and Republican proposals, many of the bipartisan proposal’s key requirements are consistent with existing federal or state privacy requirements or norms, including obligations to post a clear and conspicuous privacy policy and to maintain reasonable data security policies and practices.  It also requires data minimization.
  • Under the bipartisan bill, operators may not enroll individuals in automated exposure notification services without their affirmative express consent, which is the same as both the Democratic and Republican proposals.
  • However, the new proposal could curtail the ability of technologies to collect, process, or share an actual, potential, or presumptive positive diagnosis of an infectious disease, except when such diagnosis is confirmed by a public health authority or a licensed health provider.
  • The proposal requires operators to “collaborate with a public health authority in the operation” of their notification service.
  • The bill includes certain transfer restrictions.  Covered data may only be transferred for certain enumerated purposes, such as to notify enrolled individuals of potential exposure to an infectious disease, or to public health authorities or contracted service providers.
  • The bill obligates operators to delete all covered data upon request of the individual, as well as within 30 days of the receipt of such data, on either a rolling basis or “at such times as is consistent with a standard published by a public health authority within an applicable jurisdiction.”  Such deletion requirements do not apply to data retained for public health research purposes.
  • The bill distinguishes between operators and service providers, and only a subset of obligations—such as data deletion requirements—apply to service providers.  Service providers with “actual knowledge” that an operator has failed to adhere to certain standards required under the proposal would be obligated to notify the operator of the potential violation.
  • Similar to the Democratic proposal, this bill makes it unlawful for “any person or entity” to discriminate on the basis of “covered data collected or processed through an automated exposure notification service” or their choice “to use or not use” such a service.
  • While the Democratic and Republican proposals imposed public reporting obligations on covered entities, this bipartisan proposal would require such an obligation on the federal Privacy and Civil Liberties Oversight Board.  Under the proposal, the Board would be required to issue a report within one year after enactment that assesses “the impact on privacy and civil liberties of Government activities in response to the public health emergency related to” COVID-19 and makes recommendations for the future.

As with both the Republican and Democratic proposals, the Exposure Notification Privacy Act’s enforcement provisions name both the Federal Trade Commission and state Attorneys General.  Notably, the Act preserves the right of individuals to bring claims arising under various state laws, including consumer protection laws, health privacy or infectious disease laws, civil rights laws, state privacy and data breach notification laws, and contract or tort law.