Artificial Intelligence (AI) has played an important role in battling COVID-19 since the initial outbreak: HealthMap – an AI tool from Boston Children’s Hospital that scans news reports, social media, and other data for signs of disease outbreaks – first sounded the international alarm after picking up reports of an emerging virus in Wuhan, China.
NHSX recently published “A Buyer’s Checklist for AI in Health and Care” (Guidance), which sets out 10 key questions that will be of use to parties deploying AI solutions or conducting data-driven projects (in a health and care setting or otherwise). For example, the Guidance highlights:
- key data-related considerations, such…
Artificial intelligence (AI) is burgeoning in many industries, including healthcare. As the legal landscape around AI continues to evolve, how can organizations operationalize trustworthy AI applications? Lee Tiedrich and Lala R. Qadir share ten steps in this Law360 article.
For more information about AI, please see our “AI Toolkit.”
Lee Tiedrich, B.J. Altvater, and James Yoon recently published an article summarizing recent developments in artificial intelligence law and policy in the University of Pennsylvania Law School’s Regulatory Review. The article focuses primarily on developments in the United States, including the National Artificial Intelligence Initiative Act introduced by members of the House Committee…
As we anticipated in a previous blog post, on April 22, 2020, the European Data Protection Board (“EDPB”) issued new guidelines on the use of location data and contact tracing apps in the context of the ongoing COVID-19 pandemic.
The EDPB’s close scrutiny of the use of mobile data and apps in the context of the ongoing public health crisis is unsurprising, as many EU Member States have launched—or are in the process of launching—contact tracing apps to fight the spread of the virus, and these initiatives are receiving great attention from data privacy authorities and the general public (see our blog post here).
The guidelines aim to clarify the data protection conditions and principles that should be followed when:
- using location data to model the spread of the virus and assess the overall effectiveness of confinement measures; and
- using contact tracing apps, which aim to notify individuals who may have been in close proximity to someone who is infected or confirmed as a carrier of the virus, in order to break the chain of contamination as early as possible.
The EDPB stresses that EU data protection rules have been designed to be flexible and, as such, do not stand in the way of an efficient response to the pandemic. However, it notes that governments and private actors should be mindful of a number of considerations when they use data-driven solutions in response to the COVID-19 outbreak.
On April 8, 2020, the Federal Trade Commission (“FTC”) released a blog post about the use of artificial intelligence (“AI”) and algorithms in automated decision-making. The blog post highlighted the potentially great benefits and risks presented by increasingly sophisticated technologies, particularly in the “Health AI” space. However, it also emphasized that automated decision-making is not a…
The rapid spread of COVID-19, along with the effectiveness of existing public health response plans and the impacts of social distancing on the economy, have raised the question of how new technology can be used to address and manage the pandemic. On April 1, 2020, the Stanford Institute for Human-Centered Artificial Intelligence hosted “COVID-19 and AI: A Virtual Conference” to explore the potential applications of artificial intelligence (“AI”) in diagnostics and treatment, epidemiological tracking and forecasting of the spread of COVID-19, and the pandemic’s impacts on the economy, culture, and human behavior.
Continue Reading: AI Update: Using Artificial Intelligence to Combat COVID-19
The COVID-19 crisis is demonstrating the potential of digital health technology to manage some of our greatest public health challenges. The White House Office of Science and Technology Policy has issued a call to action for technology companies to help the science community answer high-priority scientific questions related to COVID-19. The Centers for Disease Control…
On February 10, 2020, the UK Government’s Committee on Standards in Public Life (the “Committee”) published its Report on Artificial Intelligence and Public Standards (the “Report”). The Report examines potential opportunities and hurdles in the deployment of AI in the public sector, including how such deployment may implicate the “Seven Principles of Public Life” applicable to holders of public office, also known as the “Nolan Principles” (available here). It also sets out practical recommendations for the use of AI in public services, which will be of interest to companies supplying AI technologies to the public sector (including the UK National Health Service (“NHS”)) or offering public services directly to UK citizens on behalf of the UK Government. The Report elaborates on the UK Government’s June 2019 Guide to using AI in the public sector (see our previous blog here).
Continue Reading: UK Government’s Advisory Committee Publishes Report on Public Sector Use of AI
In this final instalment of our series of blog posts on the European Commission’s plans for AI and data, announced on 19 February 2020, we discuss some potential effects on companies in the digital health sector. As discussed in our previous blog posts (here, here and here), the papers published by the European Commission cover broad concepts and apply generally — but, in places, they specifically address healthcare and medical devices.
The Commission recognizes the important role that AI and big data analysis can play in improving healthcare, but also notes the specific risks that could arise given the effects such new technologies may have on individuals’ health, safety, and fundamental rights. The Commission also notes that existing EU legislation already affords individuals a high level of protection, including through medical devices laws and data protection laws. The Commission’s proposals therefore focus on closing the gap between these existing rules and the residual risks posed by new technologies. Note that the Commission’s proposals in the White Paper on AI are open for public consultation until 19 May 2020.