Artificial Intelligence (AI) has played an important role in battling COVID-19 since the initial outbreak: HealthMap – an AI tool from Boston Children’s Hospital that scans news reports, social media, and other data for signs of disease outbreaks – first sounded the international alarm after picking up reports of an emerging virus in Wuhan, China.

NHSX recently published “A Buyer’s Checklist for AI in Health and Care” (the “Guidance”), which sets out 10 key questions that will be of use to parties deploying AI solutions or conducting data-driven projects (in a health and care setting or otherwise).  For example, the Guidance highlights:

  • key data-related considerations, such

Lee Tiedrich, B.J. Altvater, and James Yoon recently published an article in the University of Pennsylvania Law School’s Regulatory Review summarizing developments in artificial intelligence law and policy.  The article primarily focuses on developments in the United States, including the National Artificial Intelligence Initiative Act introduced by members of the House Committee

The COVID-19 crisis is demonstrating the potential of digital health technology to manage some of our greatest public health challenges.  The White House Office of Science and Technology Policy has issued a call to action for technology companies to help the science community answer high-priority scientific questions related to COVID-19.  The Centers for Disease Control

On February 10, 2020, the UK Government’s Committee on Standards in Public Life (the “Committee”) published its Report on Artificial Intelligence and Public Standards (the “Report”). The Report examines potential opportunities and hurdles in the deployment of AI in the public sector, including how such deployment may implicate the “Seven Principles of Public Life” applicable to holders of public office, also known as the “Nolan Principles” (available here). It also sets out practical recommendations for the use of AI in public services, which will be of interest to companies supplying AI technologies to the public sector (including the UK National Health Service (“NHS”)), or offering public services directly to UK citizens on behalf of the UK Government. The Report elaborates on the UK Government’s June 2019 Guide to using AI in the public sector (see our previous blog here).

Continue Reading UK Government’s Advisory Committee Publishes Report on Public Sector Use of AI

In this final instalment of our series of blogs on the European Commission’s plans for AI and data, announced on 19 February 2020, we discuss some potential effects on companies in the digital health sector. As discussed in our previous blog posts (here, here and here), the papers published by the European Commission cover broad concepts and apply generally — but, in places, they specifically mention healthcare and medical devices.

The Commission recognizes the important role that AI and big data analysis can play in improving healthcare, but also notes the specific risks that could arise given the effects that such new technologies may have on individuals’ health, safety, and fundamental rights. The Commission also notes that existing EU legislation already affords a high level of protection for individuals, including through medical devices laws and data protection laws. The Commission’s proposals therefore focus on addressing the gap between these existing rules and the residual risks that remain in respect of new technologies. Note that the Commission’s proposals in the White Paper on AI are open for public consultation until 19 May 2020.


Continue Reading European Commission’s Plans for AI and Data: Focus on Digital Health (Part 4 of 4)

On 19 February 2020, the new European Commission published two Communications relating to its five-year digital strategy: one on shaping Europe’s digital future, and one on its European strategy for data (the Commission also published a white paper proposing its strategy on AI; see our previous blogs here and here).  In both Communications, the Commission sets out a vision of the EU powered by digital solutions that are strongly rooted in European values and EU fundamental rights.  Both Communications also emphasize the intent to strengthen “European technological sovereignty”, which in the Commission’s view will enable the EU to define its own rules and values in the digital age.  The Communications set out the Commission’s plans to achieve this vision.

Continue Reading European Commission’s plans on data and Europe’s digital future (Part 3 of 4)

The Institute of Global Health Innovation at Imperial College London has published a report called “NHS data: Maximising its impact on the health and wealth of the United Kingdom” (the “Report”).[1] The Report begins from the premise that the knowledge gleaned from the combination of patient health data and “big data” technologies has incredible potential for “transformative … impact” on patient health, scientific advancement and the UK’s economy. However, the Report argues that the current efforts of scientists, medical professionals and the UK government to develop the UK’s capacities are not sufficiently coordinated to maximise that potential. To address this, the Report presents a single, high-level, strategic framework for the collection, governance and use of patient health data in the NHS.
Continue Reading New Report Recommends Putting Public Engagement at the Heart of NHS Health Data Strategy

The UK’s Information Commissioner’s Office (“ICO”) has issued and is consulting on draft guidance about explaining decisions made by AI.  The ICO prepared the guidance with The Alan Turing Institute, which is the UK’s national institute for data science and artificial intelligence.  Among other things, the guidance sets out key principles to follow and steps to take when explaining AI-assisted decisions — including in relation to different types of AI algorithms — and the policies and procedures that organizations should consider putting in place.

The draft guidance builds upon the ICO’s previous work in this area, including its AI Auditing Framework, June 2019 Project ExplAIN interim report, and September 2017 paper ‘Big data, artificial intelligence, machine learning and data protection’.  (Previous blog posts that track this issue are available here.)  Elements of the new draft guidance touch on points that go beyond narrow GDPR requirements, such as AI ethics (see, in particular, the recommendation to provide explanations of the fairness or societal impacts of AI systems).  Other sections of the guidance are quite technical; for example, the ICO provides its own analysis of the possible uses and interpretability of eleven specific types of AI algorithms.

Organizations that develop, test or deploy AI decision-making systems should review the draft guidance and consider responding to the consultation. The consultation is open until January 24, 2020.  A final version is expected to be published later next year.


Continue Reading UK ICO and The Alan Turing Institute Issue Draft Guidance on Explaining Decisions Made by AI

Following on from the Evidence Standards Framework for digital health technologies (DHTs) published in December 2018 (the Original Standards, as reported in our previous blog post, here), the UK’s National Institute for Health and Care Excellence (NICE) recently published a newly updated version of the standards (the Updated Standards, available here).