Digital health technologies, including algorithms used in health care, are being developed to aid healthcare providers and serve patients, from administrative tasks and workflow to diagnostics and decision support. The use of artificial intelligence ("AI") and machine learning algorithms in health care holds great promise, with the potential to streamline care and improve patient outcomes. At the same time, algorithms can introduce bias if they are developed and trained on historical datasets that harbor existing prejudices. Both state and federal governments have taken steps to address the potential for racial and ethnic disparities in healthcare facilities' use of algorithms, demonstrating that this remains a top priority as new technologies are deployed in health care.

California Attorney General Rob Bonta recently sent letters to 30 hospital CEOs across the state requesting information about how healthcare facilities and other providers identify and address racial and ethnic disparities in the software they use to help make decisions about patient care or hospital administration. The accompanying press release stressed the importance of identifying and combatting racial health disparities in healthcare algorithms. The AG's letter seeks information including a list of all decision-making tools or algorithms the hospitals use for clinical decision support, health management, operational optimization, or payment management; the purposes for which these tools are currently used and how they inform decisions; and the names of the persons responsible for ensuring the tools do not have a disparate impact based on race. Responses are due to the AG by October 15.

The federal government also has made disparities in health care a top priority. For example, the Department of Health and Human Services (HHS) recently issued a proposed rule regarding nondiscrimination in health programs and activities. Among other proposals aimed at combatting discrimination, HHS proposed provisions related to nondiscrimination in the use of clinical algorithms in healthcare decision-making and in telehealth services. Proposed § 92.210 states that "a covered entity must not discriminate against any individual on the basis of race, color, national origin, sex, age, or disability through the use of clinical algorithms in its decision-making." The proposed rule notes that a covered entity would not be liable for clinical algorithms it did not develop, but HHS proposes to impose liability for any decisions made in reliance on clinical algorithms if those decisions rest upon or result in discrimination. The proposed rule noted that the Department "believes it is critical to address this issue explicitly in this rulemaking given recent research demonstrating the prevalence of clinical algorithms that may result in discrimination." Comments are due to HHS by October 3. HHS specifically seeks input on whether the provision should include additional forms of automated decision-making tools beyond clinical algorithms; whether the provision should include potential actions covered entities should take to mitigate discriminatory outcomes; and recommendations on how to identify and mitigate discrimination resulting from the use of clinical algorithms.

These state and federal actions, as well as the responses they generate, could inform the ongoing dialogue about how to advance the use of digital health technologies while simultaneously making progress toward addressing inequities in health care.

Rujul Desai

Rujul Desai advises clients on drug pricing, market access, reimbursement, strategic contracting, and regulatory solutions for drugs, biologicals, devices, and diagnostics. He brings deep experience with biopharma, specialty pharmacy, and pharmacy benefit management (PBM) companies.

Rujul has held a number of leadership roles in the biopharma, PBM, and specialty pharmacy industry, including with CVS Caremark, UCB, and most recently as Vice President at Avalere Health. He has led engagements across a wide range of U.S. and global market access and reimbursement issues, including optimizing new product launches, pricing, PBM and payer formulary access, value-based contracting, distribution network design, patient access and hub services, affordability programs, e-prescribing, digital health, and the use of health economic data and modeling.

Libbie Canter

Libbie Canter represents a wide variety of multinational companies on privacy, cyber security, and technology transaction issues, including helping clients with their most complex privacy challenges and the development of governance frameworks and processes to comply with global privacy laws. She routinely supports clients on their efforts to launch new products and services involving emerging technologies, and she has assisted dozens of clients with their efforts to prepare for and comply with federal and state privacy laws, including the California Consumer Privacy Act and California Privacy Rights Act.

Libbie represents clients across industries, but she also has deep expertise in advising clients in highly regulated sectors, including financial services and digital health companies. She counsels these companies, and their technology and advertising partners, on how to address legacy regulatory issues and the cutting-edge issues that have emerged with industry innovations and data collaborations.

Olivia Dworkin

Olivia Dworkin minimizes regulatory and litigation risks for clients in the medical device, pharmaceutical, biotechnology, eCommerce, and digital health industries through strategic advice on complex FDA issues, helping to bring innovative products to market while ensuring regulatory compliance. With a focus on cutting-edge medical technologies and digital health products and services, Olivia regularly helps new and established companies navigate a variety of state and federal regulatory, legislative, and compliance matters throughout the total product lifecycle. She has experience counseling clients on the development, FDA regulatory classification, and commercialization of digital health tools, including clinical decision support software, mobile medical applications, general wellness products, medical device data systems, administrative support software, and products that incorporate artificial intelligence, machine learning, and other emerging technologies.

Olivia also assists clients in advocating for legislative and regulatory policies that will support innovation and the safe deployment of digital health tools, including by drafting comments on proposed legislation, frameworks, whitepapers, and guidance documents. Olivia stays close to the evolving regulatory landscape and is a frequent contributor to Covington's Digital Health blog. Her work also has been featured in the Journal of Robotics, Artificial Intelligence & Law, Law360, and the Michigan Journal of Law and Mobility.