CISA and MS-ISAC Release Joint Guide on Ransomware

On September 30, 2020, the Cybersecurity and Infrastructure Security Agency (“CISA”) and the Multi-State Information Sharing and Analysis Center (“MS-ISAC”) released a joint guide synthesizing best practices to prevent and respond to ransomware.  This guide was published the day before OFAC and FinCEN released their coordinated guidance on ransomware attacks that we previously summarized here.

Ransomware is malware that encrypts data on a victim’s device, rendering the data inaccessible until a ransom is paid in exchange for decryption.  Both the nature and scope of ransomware incidents have become “more destructive and impactful” in recent years.  In particular, malicious actors’ tactics now include threatening to release stolen data or publicly naming victims as part of the extortion.  Accordingly, the guide encourages organizations to take proactive steps to manage the risks posed by ransomware and recommends a coordinated response to mitigate its impact.

The guide is divided into two parts.  The first part addresses best practices for ransomware prevention, focusing on the common infection vectors—misconfigurations, internet-facing vulnerabilities, phishing, precursor malware infections, third-party sources, and managed service providers.  For example, threat actors often gain access to an organization’s network through exposed or insecure remote desktop services.  Employing best practices for use of remote desktop protocol (“RDP”), closing unused RDP ports on firewalls, and tracking RDP login attempts are a few of the recommended risk-mitigation measures.  This part also outlines general best practices for cyber hygiene, including employing multi-factor authentication, implementing the principle of least privilege, and retaining and securing logs.  These actions mitigate not only the risk of ransomware but other cybersecurity threats as well.
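
As a rough illustration of one of these recommendations—identifying exposed RDP services—the short Python sketch below checks whether hosts in an inventory accept connections on the default RDP port (TCP 3389).  This is a hypothetical example, not taken from the guide; the host addresses are placeholders from a documentation-only range.

    import socket

    RDP_PORT = 3389

    def rdp_exposed(host: str, timeout: float = 2.0) -> bool:
        """Return True if the host accepts TCP connections on the default RDP port."""
        try:
            with socket.create_connection((host, RDP_PORT), timeout=timeout):
                return True
        except OSError:
            return False

    # Example host list; replace with your own inventory.  These addresses come
    # from the TEST-NET-3 documentation range and will not reach real services.
    for host in ["203.0.113.10", "203.0.113.11"]:
        status = "EXPOSED" if rdp_exposed(host) else "closed or filtered"
        print(f"{host}: RDP {status}")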

The second part of the guide focuses on responding to ransomware in three stages.

  • Detection and Analysis. Immediate isolation and triage of impacted systems are the priorities.  Because threat actors may monitor an organization’s activity or communications after intruding, the guide recommends taking steps to avoid tipping off the threat actor that it has been detected—for example, communicating by phone rather than by email.  The guide also recommends against paying the ransom, because payment does not ensure that data will be decrypted or that the system is no longer compromised.
  • Containment and Eradication. Depending on the ransomware variant, consulting with federal law enforcement or other trusted entities may be worthwhile, as security researchers may have already broken the encryption algorithms or have published information on ransomware binaries and associated registry values.  Otherwise, a methodical approach to identifying, containing, and removing any compromise to the system will be critical.
  • Recovery and Post-Incident Activity. Documenting the lessons learned from the ransomware incident will help inform future policies and procedures.  Sharing this information can also benefit others in the community.

For organizations seeking further information on ransomware, the guide offers a list of resources from CISA and MS-ISAC.  These include regional CISA Cybersecurity Advisors, who advise on best practices to manage cyber risk.

Lawyers who support organizations facing ransomware threats should be familiar with these best practices for ransomware prevention and response, and should initiate discussions about how their organizations can best prepare for and meet those threats.  This is increasingly critical at a time when government regulators are warning private companies that payments to ransomware actors can carry legal risks, including sanctions exposure and regulatory obligations under the Bank Secrecy Act.


California AG Settlement Suggests Privacy and Security Practices of Digital Health Apps May Provide Fertile Ground for Enforcement Activity

California Attorney General Xavier Becerra (“AG”) announced in September a settlement with Glow, Inc., resolving allegations that the fertility app had “expose[d] millions of women’s personal and medical information.”  In the complaint, the AG alleged violations of certain state consumer protection and privacy laws, stemming from privacy and security “failures” in Glow’s mobile application (the “Glow App”).  The settlement, which remains subject to court approval, requires Glow to comply with relevant consumer protection and privacy laws (including California’s medical privacy law), mandates “a first-ever injunctive term that requires Glow to consider how privacy or security lapses may uniquely impact women,” and imposes a $250,000 civil penalty.

According to the AG’s announcement, the “settlement is a wake up call not just for Glow, Inc., but for every app maker that handles sensitive private data.”  Below is a discussion of the complaint and settlement, as well as takeaways from the case.


The Complaint

As described in the complaint, the Glow App is “marketed as an ovulation and fertility tracker” and “collects and stores deeply sensitive personal and medical information related to a user’s menstruation, sexual activity, and fertility.”  The types of information collected include medications, fertility test results, medical appointments, medical records, and ovulation-cycle calculations, as well as “intimate details of [] sexual experiences and efforts to become pregnant.”  One feature of the Glow App is its “Partner Connection” offering, which “allows a Glow App user to link to a partner to share information.”

As alleged, Glow violated multiple laws, including California’s Confidentiality of Medical Information Act (“CMIA”).  The CMIA regulates, in relevant part, “providers of health care” that collect and use “medical information,” defined as “individually identifiable information . . . in possession of or derived from a provider of health care, health care service plan, pharmaceutical company, or contractor regarding a patient’s medical history, mental or physical condition, or treatment.”  According to the complaint, Glow is a “provider of health care” under the CMIA because it “offer[s] software to consumers that is designed to maintain medical information for the purposes of allowing users to manage their information or for the diagnosis, treatment, or management of a medical condition” (citing Cal. Civ. Code § 56.06(b)).  The complaint also alleges that Glow’s privacy and security practices violated California’s Unfair Competition Law (“UCL”) and False Advertising Law (“FAL”).

The complaint alleges that the following activities, occurring between 2013 and 2016, gave rise to these violations:

  • The Partner Connect feature “automatically granted” linking requests and “immediately shared” certain “sensitive information” without obtaining authorization from the Glow user.
  • The Partner Connect feature failed to verify the legitimacy of the person with whom the information was being shared.
  • The Glow App’s password change functionality asked for “old passwords” without authenticating such passwords on the back-end (see the illustrative sketch following this list).
  • Glow’s Privacy Policy and Terms of Use made representations about the company’s privacy and security practices that were “contradicted” by Glow’s actual practices (e.g., “We have designed the Service to protect information about you from unauthorized disclosures to others.”).
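
To illustrate the alleged password flaw, the hypothetical Python sketch below shows a server-side password change that verifies the supplied old password against the stored hash before accepting a new one—the back-end check the complaint alleges was missing.  The function name and the use of the bcrypt library are assumptions for illustration only, not details from the complaint.

    import bcrypt

    def change_password(stored_hash: bytes, old_password: str, new_password: str) -> bytes:
        # Verify the claimed "old password" against the stored hash first;
        # skipping this check would let anyone with access to a logged-in
        # (or hijacked) session set a new password unchallenged.
        if not bcrypt.checkpw(old_password.encode(), stored_hash):
            raise PermissionError("old password does not match; change rejected")
        # Only after verification, hash the new password for storage.
        return bcrypt.hashpw(new_password.encode(), bcrypt.gensalt())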


The Settlement

The AG’s settlement with Glow (1) requires Glow to comply with relevant consumer protection and privacy laws, (2) obligates Glow to consider how “privacy or security lapses may uniquely impact women,” and (3) imposes a $250,000 civil penalty.  The settlement remains subject to court approval.  The requirements of the settlement are discussed in turn.

First, the settlement requires Glow to comply with consumer protection and privacy laws, including the CMIA.  To do so, Glow must implement an information security program “to protect the security, integrity, availability, and confidentiality” of “personal information,” “medical information,” and “sensitive personal information” that Glow “collects, stores, processes, uses, transmits, and maintains.”  “Personal information” has the meaning it is given under California’s Data Security Law (Cal. Civ. Code § 1798.81.5), and “medical information” has the meaning it is given under the CMIA, with the clarification that such information may be “enter[ed] or upload[ed] . . . into a mobile application or online service” by a consumer.  “Sensitive personal information” refers to information that is not “medical information” or “personal information” but is individually identifiable information that describes a consumer’s “sexual activity, sexual health, and reproductive health.”

Under the settlement, Glow’s information security program is required to protect the specified categories of information by taking measures such as: (i) preventing unauthorized access, (ii) preventing unauthorized disclosure, (iii) imposing a two-step authentication process for password changes, (iv) providing annual employee training on the information security practices, (v) implementing procedures for vulnerability patching, (vi) incorporating privacy-by-design principles and security-by-design principles when creating new Glow App features, and (vii) establishing a point of contact at Glow to address security issues.

Second, the settlement requires Glow, for two years after implementing its information security program, to complete annual privacy and security risk assessments addressing Glow’s efforts to comply with applicable privacy and security laws.  The reports must be submitted to the AG’s office.

Notably, the settlement requires the privacy assessment to “(i) consider online risks that women face, or could face, including gender-based risks, as a result of privacy or security lapses while using GLOW mobile applications or online services; (ii) consider the impact of any such risks, and (iii) document GLOW’s efforts to mitigate any such risks.”  As noted, the AG’s announcement of the settlement refers to this requirement as a “first-ever injunctive term” that requires a company to consider the unique impact of privacy and security lapses on women.

Third, the settlement imposes a civil penalty of $250,000.


Key Takeaways

The settlement highlights the sensitivity of health data, even if that data is not protected under the federal Health Insurance Portability and Accountability Act of 1996 (“HIPAA”).  Notably, the AG’s announcement asserts, “[w]hen you meet with your doctor or healthcare provider in person, you know that your sensitive information is protected.  It should be no different when you use healthcare apps over the internet.”

The Glow complaint alleges that Glow is a “provider of health care” for the purposes of CMIA because it “offer[s] software to consumers that is designed to maintain medical information for the purposes of allowing its users to manage their information or for the diagnosis, treatment, or management of a medical condition.  Specifically, the Glow app is designed for the user to store, email, and print information relating to their reproductive health such as ovulation and menstrual cycles, and/or for the diagnosis, treatment, or management of users seeking to become pregnant or treat infertility.”

The settlement also states that health information may be “medical information” for the purposes of the CMIA “irrespective of how the information is transmitted,” and thus may include information that is “manually enter[ed] or upload[ed] . . . into a mobile application or online service.”

This settlement follows other recent health and medical privacy developments in California.  In early September, the California legislature passed AB 713, creating a new healthcare-related exemption under the California Consumer Privacy Act of 2018 (discussed here).  Although the legislature also passed SB 980, the Genetic Information Privacy Act (“GIPA”) (discussed here), Governor Gavin Newsom recently vetoed the bill.  GIPA would have imposed certain privacy and security obligations on direct-to-consumer genetic testing companies, and the Governor’s veto of the bill cited potential implications for research related to COVID-19.  Another recent development is the AG’s announcement of an $8.69 million settlement with Anthem Inc., resolving allegations that the health insurer violated state law and HIPAA.

AI, IoT, and CAV Legislative Update: EU Spotlight (Third Quarter 2020)

In this edition of our regular roundup on legislative initiatives related to artificial intelligence (AI), cybersecurity, the Internet of Things (IoT), and connected and autonomous vehicles (CAVs), we focus on key developments in the European Union (EU).


HHS Announces Multiple HIPAA Settlements Related to Data Breaches and the Right of Access Initiative

Throughout September, the Department of Health and Human Services, Office for Civil Rights (“OCR”), announced eight different settlements to resolve a variety of alleged violations of the Privacy and Security Rules promulgated under the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”).  Notably, three settlements stem from data breaches in which hackers were able to access and obtain individuals’ protected health information (“PHI”).  In a press release for one of these settlements, OCR Director Roger Severino noted that “[h]acking is the number one source of large health care data breaches,” and failure to comply with the HIPAA Rules may render “health data a tempting target for hackers.”  In addition, OCR announced settlements with five separate providers to address potential violations of the Privacy Rule’s right of access provision.

OCR previously issued guidance waiving enforcement of certain HIPAA provisions in response to the COVID-19 pandemic, as we have discussed in earlier posts.  However, these recent settlements may indicate that OCR is starting to return to “business as usual” in the area of HIPAA enforcement.

Premera Blue Cross

On September 25, OCR announced that Premera Blue Cross (“PBC”) — the largest health plan in the Pacific Northwest, operating in Washington and Alaska — agreed to pay $6.85 million and take corrective actions as part of a settlement to resolve potential HIPAA violations arising from a data breach that affected more than 10.4 million individuals.  According to OCR, PBC’s settlement “represents the second-largest payment to resolve a HIPAA investigation in OCR history.”

PBC’s settlement relates to an incident in which hackers gained access to PBC’s IT system in May 2014 by using a phishing email to install malware.  The unauthorized access was not discovered until January 2015, almost nine months later, during which time the hackers were able to obtain the PHI of over 10.4 million individuals, including their names, addresses, dates of birth, email addresses, Social Security numbers, bank account information, and health plan clinical information.

OCR launched an investigation after PBC reported the breach in March 2015.  The investigation revealed “systemic noncompliance with the HIPAA Rules including failure to conduct an enterprise-wide risk analysis, and failures to implement risk management, and audit controls.”  According to Director Severino, PBC’s breach “vividly demonstrates the damage that results when hackers are allowed to roam undetected in a computer system for nearly nine months.”  In addition to paying $6.85 million, PBC entered into a Corrective Action Plan (“CAP”) that requires it, among other things, to conduct a risk analysis to identify potential risks and vulnerabilities to its electronic PHI and to develop and implement an enterprise-wide risk management plan to mitigate any identified risks and vulnerabilities.

CHSPSC LLC

On September 23, OCR announced that it had entered into an agreement with CHSPSC LLC (“CHSPSC”), in which the company agreed to pay $2.3 million to resolve alleged HIPAA violations resulting from a 2014 data breach.  In April 2014, CHSPSC — a Tennessee-based business associate for hospitals and clinics indirectly owned by Community Health Systems, Inc. — was notified by the Federal Bureau of Investigation (“FBI”) that a cyber-hacking group had gained access to CHSPSC’s information systems using compromised administrative credentials.  Notwithstanding the FBI’s warning, the hackers were able to access CHSPSC’s systems and obtain the PHI of over six million individuals (including names, sex, dates of birth, phone numbers, Social Security numbers, email addresses, ethnicity, and emergency contact information) until August 2014.

OCR’s investigation following the breach uncovered “longstanding, systemic noncompliance with the HIPAA Security Rule including failure to conduct a risk analysis, and failures to implement information system activity review, security incident procedures, and access controls.”  Director Severino stated that “[t]he failure to implement the security protections required by the HIPAA Rules, especially after being notified by the FBI of a potential breach, is inexcusable.”  Along with the monetary settlement, CHSPSC also entered into a two-year CAP.

Athens Orthopedic Clinic PA

On September 21, OCR announced that it had reached a settlement with Athens Orthopedic Clinic PA (“AOC”), in which AOC agreed to pay $1.5 million to resolve potential violations of the HIPAA Privacy and Security Rules related to a 2016 data breach.  On June 26, 2016, AOC was informed by a journalist that a database of patient records possibly belonging to AOC had been posted for sale online.  Two days later, a hacker group contacted AOC, demanding payment in return for the stolen database.  A forensic analysis ascertained that the hackers used a vendor’s credentials to access AOC’s systems.  The compromised credentials were terminated on June 27, 2016, but the hackers were not effectively blocked for almost another month.

On July 29, 2016, AOC reported that the PHI of more than 200,000 individuals (including names, dates of birth, Social Security numbers, medical procedures, test results, and health insurance information) had been disclosed through the breach.  OCR’s subsequent investigation revealed AOC’s “longstanding, systemic noncompliance with the HIPAA Privacy and Security Rules including failures to conduct a risk analysis, implement risk management and audit controls, maintain HIPAA policies and procedures, secure business associate agreements with multiple business associates, and provide HIPAA Privacy Rule training to workforce members.”  As part of the settlement agreement, AOC entered into a two-year CAP that requires revisions to its policies and procedures, particularly those related to business associates, and training for its workforce members.

HIPAA Right of Access Settlements

Additionally, OCR announced settlements of five separate investigations as part of its HIPAA Right of Access Initiative (the “Initiative”).  These settlements stem from allegations that healthcare providers failed to grant individuals access to their health records, as required by the HIPAA Privacy Rule.  See 45 C.F.R. § 164.524.  In all five cases, the providers agreed to pay various penalty amounts, ranging from $3,500 to $70,000, and take corrective actions in order to resolve allegations that they had failed to comply with the Privacy Rule’s right of access provisions.

In 2019, OCR established the Initiative as an enforcement priority focusing on individuals’ right to access their health records in a timely manner and at a reasonable cost.  According to OCR, enforcement actions under the Initiative “are designed to send a message to the health care industry about the importance and necessity of compliance with the HIPAA Rules.”  Settlement terms and monetary payments are based on numerous factors, including “the nature and extent of the potential HIPAA violation; the nature and extent of the harm resulting from the potential HIPAA violation; the entity’s history with respect to compliance with the HIPAA Rules; the financial condition of the entity, including its size and the impact of the COVID-19 public health emergency; and other matters as justice may require.”  OCR has completed seven enforcement actions, including the five September settlements, under the Initiative since it was introduced.

California Legislature Adopts CCPA Exemption for Information Deidentified in Accordance with the HIPAA Privacy Rule

In a new post on the Covington Inside Privacy blog, our colleagues discuss the passage of California’s AB 713, a bill that creates a new healthcare-related exemption under the California Consumer Privacy Act of 2018 (“CCPA”) for certain information that has been deidentified in accordance with the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”).  Importantly, this new patient-specific exemption is in addition to, and separate from, the CCPA’s current language that excludes from the scope of “personal information” certain “deidentified” information.  (As a refresher, under the CCPA, deidentified information is information that cannot reasonably identify a particular consumer, provided that the business has put in place certain safeguards and processes identified in the statute to limit the risk of reidentification.)  Thus, there is now an alternative basis to argue that patient information that has been deidentified for purposes of HIPAA is also exempt from the CCPA’s obligations.  To read the post, please click here.

HHS Launches New “Health Apps” Website to Highlight HIPAA Guidance for Mobile Health Applications

On September 2, 2020, the U.S. Department of Health and Human Services Office for Civil Rights (“OCR”) announced a new “Health Apps” feature on the HHS.gov website.  The new website, which replaces the OCR’s Health App Developer Portal, highlights existing guidance for mobile health (“mHealth”) apps regarding the Health Insurance Portability and Accountability Act (“HIPAA”) regulations.

The new website features OCR’s guidance for mHealth app developers and others who may be “interested in the intersection of health information technology and HIPAA privacy and security protections,” including:

  • Mobile Health Apps Interactive Tool – A web-based tool created by OCR, in conjunction with the Federal Trade Commission and the Food and Drug Administration, to help health-related mobile app developers understand which federal laws and regulations may be applicable.
  • Health App Use Scenarios & HIPAA – Guidance illustrating when an mHealth developer may be acting as a business associate under HIPAA.
  • Access Rights, Apps, and APIs – FAQs on the HIPAA right of access, mobile apps, and application programming interfaces (APIs).
  • Health Information Technology – FAQs relating to HIPAA and various aspects of health information technology.
  • Guidance on HIPAA & Cloud Computing – Guidance to help covered entities and their business associates, including cloud services providers (“CSPs”), understand how to comply with the HIPAA requirements while using cloud computing technologies.

These new resources are OCR’s latest attempt to make the HIPAA regulations more relevant to rapidly evolving areas of health information technology.

FTC Provides Guidance on Use of AI and Algorithms

The FTC has recently provided new guidance on the use of AI and algorithms, including a blog post that focuses on the benefits and risks of AI in the “Health AI” space.  Former FTC Commissioner Terrell McSweeny, AI Initiative Co-Chair Lee Tiedrich, and Jadzia Pierce discuss the post, along with other AI-focused guidance from the FTC, in The Journal of Robotics, Artificial Intelligence and Law.

AI Standards Update: NIST Solicits Comments on the Four Principles of Explainable Artificial Intelligence and Certain Other Developments

The National Institute of Standards and Technology (“NIST”) is seeking comments on the first draft of the Four Principles of Explainable Artificial Intelligence (NISTIR 8312), a white paper that seeks to define the principles that capture the fundamental properties of explainable AI systems.  NIST will be accepting comments until October 15, 2020.

In February 2019, the Executive Order on Maintaining American Leadership in Artificial Intelligence directed NIST to develop a plan that would, among other objectives, “ensure that technical standards minimize vulnerability to attacks from malicious actors and reflect Federal priorities for innovation, public trust, and public confidence in systems that use AI technologies; and develop international standards to promote and protect those priorities.”  In response, NIST issued a plan in August 2019 for prioritizing federal agency engagement in the development of AI standards, identifying seven properties that characterize trustworthy AI—accuracy, explainability, resiliency, safety, reliability, objectivity, and security.

NIST’s white paper focuses on explainability and identifies four principles underlying explainable AI.

  • Explanation. AI systems must supply evidence, support, or reasoning for their outputs.  Researchers have developed different models to explain AI systems, such as self-explainable models where the models themselves are the provided explanation.
  • Meaningful. The recipient must understand the AI system’s explanation.  This principle is a contextual requirement—for example, different types of user groups may require different explanations, or a particular user’s prior knowledge, experiences, and mental processes may affect meaningfulness.  Hence, tailoring is necessary for effective communication.
  • Explanation Accuracy. The explanation must correctly reflect the AI system’s process for generating its output.  In contrast to decision accuracy, explanation accuracy is not concerned with whether the system’s judgment is correct; rather, it concerns how the system came to its conclusion.  The principle is also contextual—there may be different explanation accuracy metrics for different types of groups and users.
  • Knowledge Limits. The AI system must identify cases in which it was not designed or approved to operate, or in which its answers are not reliable.  This ensures that reliance on an AI system’s decision processes occurs only where appropriate (a minimal illustrative sketch follows this list).
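
As a rough illustration of the knowledge-limits principle, the hypothetical Python sketch below withholds a prediction when an input falls outside the range the model was trained on, rather than returning an unreliable answer.  The class and field names are assumptions for illustration and do not come from the NIST white paper.

    from dataclasses import dataclass

    @dataclass
    class BoundedModel:
        lower: float      # smallest input value seen during training
        upper: float      # largest input value seen during training
        slope: float      # parameters of a toy linear model
        intercept: float

        def predict(self, x: float) -> float:
            # Knowledge-limits check: refuse inputs outside the training
            # domain instead of silently extrapolating.
            if not (self.lower <= x <= self.upper):
                raise ValueError(
                    f"input {x} is outside the training range "
                    f"[{self.lower}, {self.upper}]; prediction withheld"
                )
            return self.slope * x + self.intercept

    model = BoundedModel(lower=0.0, upper=100.0, slope=0.5, intercept=1.0)
    print(model.predict(42.0))   # in range: returns 22.0
    # model.predict(250.0)       # out of range: raises instead of guessing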

The white paper states that explanations generally can be described along two dimensions: the amount of time the consumer has to respond to the information and the level of detail in the explanation.  Although flexibility in the range and types of explanations will be necessary, NIST provides a non-exhaustive list of explanation categories, drawing from academic literature:

  • User Benefit.  This type of explanation is designed to inform a user about an AI system output, such as providing the reason a loan application was approved or denied to the applicant.
  • Societal Acceptance.  This type of explanation is designed to generate trust and acceptance by society, to provide an increased sense of comfort in the system.
  • Regulatory and Compliance.  This type of explanation assists with audits for compliance with regulations, standards, and legal requirements, such as providing detailed explanation to a safety regulator to evaluate the output of self-driving cars.
  • System Development.  This type of explanation assists with developing, improving, debugging, or maintaining an AI system by technical staff and product managers.
  • Owner Benefit.  This type of explanation benefits the operator of a system, such as a recommendation system that lists movies to watch and explains the selection based on previously viewed items.

After explaining the core concepts of explainable AI systems, NIST explores the explainability of human decision processes.  NIST states that humans demonstrate only a limited ability to meet the four principles described above, which provides a benchmark for evaluating explainable AI systems and informs the development of reasonable metrics.  According to NIST, evaluating explainability in the context of human decision-making also may lead to a better understanding of human-machine collaboration and interfaces.

Although the white paper does not provide detailed guidance for organizations implementing AI systems, it represents an important step by NIST toward developing trustworthy AI tools.  Documents from other jurisdictions on explaining AI provide more detailed guidance aimed at helping organizations operationalize the concept of explainable AI.  For example, the UK Information Commissioner’s Office (“ICO”) issued its final guidance on explaining decisions made with AI on May 20, 2020.  Similar to the NIST white paper, the ICO recognizes that there are different underlying principles to be followed and different models of AI explanation.  The ICO takes these principles one step further, however, and provides more detailed guidance on how to explain AI in practice, depending on the type of AI system used.

Some Legislative Developments Relating to NIST

Efforts to advance the development of AI standards through NIST have been a topic of increasing focus in Congress.  Recent bills include Sen. Cory Gardner’s (R-CO) Advancing Artificial Intelligence Research Act of 2020, which would appropriate $250 million to NIST for each of fiscal years 2021 through 2025 to create a national program to advance AI research, and Rep. Eddie Bernice Johnson’s (D-TX-30) National Artificial Intelligence Initiative Act of 2020, which would appropriate over $50 million to NIST for each of fiscal years 2021 through 2025 for the research and development of voluntary standards for trustworthy AI systems, among other activities.  The House Appropriations Committee also released the draft fiscal year 2021 Commerce, Justice, Science, and Related Agencies funding bill, which includes $789 million for core NIST research activities, an increase of $35 million above the FY 2020 enacted level.

To learn more about AI, please access our AI Toolkit.

FDA Issues Final Guidance on Multiple Function Digital Health Software and Other Devices

On July 28, 2020, FDA announced the publication of a final guidance on Multiple Function Device Products: Policy and Considerations that outlines FDA’s evolving approach to the regulation of multiple function device products, including software.

The concept of “multiple function” products was introduced by the 21st Century Cures Act (“Cures Act”) of 2016, which added section 520(o) to the FD&C Act.  Multiple function device products are those with multiple functions that each have a distinct purpose in the product (e.g., collection, storage, analysis) where only certain functions are actively regulated by FDA.  With regard to software, section 520(o) of the FD&C Act gives FDA the authority to review the non-device function(s) of a multiple function device product to assess the impact of the non-device function(s) on the device function(s).

Here are the key takeaways on FDA’s newly-issued final guidance:

  • While the Cures Act language distinguishes device functions from non-device functions, FDA adopts a final policy that distinguishes between device functions and “other functions.”  FDA says that “other functions” include not only non-device functions, but also device functions that are exempt from premarket review (i.e., 510(k)-exempt), as well as device functions that fall within FDA’s exercise of enforcement discretion.
  • The same approach should apply to FDA’s assessment of all multiple function device products, whether software, hardware or both.
  • For multiple function device products, manufacturers should perform impact assessments for all “other functions” to assess any effects of the other functions on the device functions of the product – reaching a conclusion of no impact, positive impact, or negative impact.  These assessments should be documented as part of the device’s design validation process.  In the event that an impact is found, the extent of the impact should be evaluated and included in the manufacturer’s hazard analysis.
  • FDA expects impact assessments to be included as part of a premarket submission when there is (i) a negative impact or (ii) a positive impact that the manufacturer seeks to include in the product’s labeling.  For a finding of no impact, or a positive impact that the manufacturer does not seek to include in the product’s labeling, FDA does not expect to see the impact assessment as part of the premarket submission, although FDA may review the documentation as part of an inspection.
  • FDA broadly defines a potential “negative” impact of an “other function” on the device function(s).  Thus, as a practical matter, it is likely that it will be difficult for a manufacturer to conclude that the “other functions” have no impact on the device function.  We anticipate that manufacturers will need to submit impact assessments for a large number of multiple function device products.
  • One open question is the possible impact of FDA determining that a company failed to submit an impact assessment that, in FDA’s view, was required as a part of the premarket review of a multiple function device product.  For example, if a company makes a good faith determination that a non-device function has no impact on the device functions of a multiple function device product, and submits a 510(k) or PMA without an impact assessment, but FDA later disagrees with that determination, would the agency take the position that the 510(k)/PMA was ineffective and not properly obtained?  Similarly, would FDA exercise enforcement discretion in such a situation to allow the company to keep the product on the market while it submits the impact assessment and other documentation associated with the non-device functions?

To help companies navigate these issues, FDA’s guidance provides several case studies of multiple function device products and what FDA would expect to see as part of a premarket submission for the device functions.  Companies developing multiple function device products will want to ensure that they consider all aspects of the FDA’s final guidance.

UK ICO publishes guidance on Artificial Intelligence

On July 30, 2020, the UK Information Commissioner’s Office (“ICO”) published its final guidance on Artificial Intelligence (the “Guidance”).  The Guidance sets out a framework for auditing AI systems for compliance with data protection obligations under the GDPR and the UK Data Protection Act 2018.  The Guidance builds on the ICO’s earlier commitment to enable good data protection practice in AI, and on previous guidance and blogs issued on specific issues relating to AI (for example, on explaining decisions on AI, trade-offs, and bias and discrimination, all covered in Covington blogs).

