Artificial Intelligence (AI)

On December 12, 2019, the European Parliament endorsed a non-binding resolution on enabling the digital transformation of health and care.  The resolution calls on the European Commission to take a number of actions to foster the development of digital health systems in Europe, with the aim of improving patient care and supporting research efforts — particularly those using innovative technologies such as AI.

Continue Reading European Parliament Endorses Digital Health Resolution

The UK’s Information Commissioner’s Office (“ICO”) has issued and is consulting on draft guidance about explaining decisions made by AI.  The ICO prepared the guidance with The Alan Turing Institute, which is the UK’s national institute for data science and artificial intelligence.  Among other things, the guidance sets out key principles to follow and steps to take when explaining AI-assisted decisions — including in relation to different types of AI algorithms — and the policies and procedures that organizations should consider putting in place.

The draft guidance builds upon the ICO’s previous work in this area, including its AI Auditing Framework, June 2019 Project ExplAIN interim report, and September 2017 paper ‘Big data, artificial intelligence, machine learning and data protection’.  (Previous blog posts that track this issue are available here.)  Elements of the new draft guidance touch on points that go beyond narrow GDPR requirements, such as AI ethics (see, in particular, the recommendation to provide explanations of the fairness or societal impacts of AI systems).  Other sections of the guidance are quite technical; for example, the ICO provides its own analysis of the possible uses and interpretability of eleven specific types of AI algorithms.
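As a purely illustrative aside (nothing below appears in, or is endorsed by, the draft guidance), one common way to explain an individual AI-assisted decision is to use an interpretable model whose output decomposes into per-feature contributions.  In this minimal sketch, the dataset, feature names and figures are all invented:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic stand-in for a credit decision: feature names and data
    # are invented for illustration and do not come from the guidance.
    rng = np.random.default_rng(0)
    feature_names = ["income", "existing_debt", "years_at_address"]
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

    model = LogisticRegression().fit(X, y)

    def explain(applicant):
        # For a linear model, coefficient * feature value decomposes the
        # log-odds of the decision exactly, giving a per-feature account
        # of why this applicant was (or was not) approved.
        contributions = model.coef_[0] * applicant
        ranked = sorted(zip(feature_names, contributions),
                        key=lambda t: -abs(t[1]))
        for name, c in ranked:
            print(f"{name:>18}: {c:+.2f}")
        print(f"{'intercept':>18}: {model.intercept_[0]:+.2f}")

    explain(X[0])

For a linear model this decomposition is exact; for the more complex algorithm types the ICO analyzes, a comparable decision-level explanation generally requires approximation techniques.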

Organizations that develop, test or deploy AI decision-making systems should review the draft guidance and consider responding to the consultation.  The consultation is open until January 24, 2020, and a final version of the guidance is expected to be published later in 2020.

Continue Reading UK ICO and The Alan Turing Institute Issue Draft Guidance on Explaining Decisions Made by AI

On September 26, 2019, the FDA issued two revised guidance documents addressing its evolving approach to the regulation of digital health technologies.  These guidances primarily describe when digital health solutions will or will not be actively regulated by FDA as medical devices.  In parallel, FDA also updated four previously final guidance documents to align them with the new approaches being adopted by the Agency.

As background, FDA issued draft guidance documents in December 2017 that sought to implement section 520(o)(1) of the Federal Food, Drug, and Cosmetic Act (“FDCA”), which was enacted by Congress in the 21st Century Cures Act of 2016 (the “Cures Act”).  Those guidance documents raised a number of issues that we discussed in this previous alert.

After receiving comments from stakeholders, the Agency responded by issuing: (i) a revised draft guidance document for clinical decision support (CDS) software (“Clinical and Patient Decision Support Software” or the “CDS Draft Guidance”) and (ii) a final guidance document for other software functions exempted by the Cures Act (“Changes to Existing Medical Software Policies Resulting from Section 3060 of the 21st Century Cures Act” or the “Software Policies Guidance”).

Here are key takeaways from FDA’s newly issued guidance:
Continue Reading FDA Issues Updated Guidance on the Regulation of Digital Health Technologies

On 19 September 2019, the European Parliamentary Research Service (“EPRS”)—the European Parliament’s in-house research service—released a briefing paper that summarizes the current status of the EU’s approach to developing a regulatory framework for ethical AI.  Although not a policymaking body, the EPRS can provide useful insights into the direction of EU policy on an issue.  The paper summarizes recent calls in the EU for adopting legally binding instruments to regulate AI, in particular to set common rules on AI transparency, establish common requirements for fundamental rights impact assessments, and provide an adequate legal framework for facial recognition technology.

The briefing paper follows publication of the European Commission’s high-level expert group’s Ethics Guidelines for Trustworthy Artificial Intelligence (the “Guidelines”), and the announcement by incoming Commission President Ursula von der Leyen that she will put forward legislative proposals for a “coordinated European approach to the human and ethical implications of AI” within her first 100 days in office.

Continue Reading European Parliamentary Research Service issues a briefing paper on implementing the EU’s ethical guidelines on AI

On 13 August 2019, the European Commission opened a call for expression of interest to relaunch the eHealth Stakeholder Group with a view to supporting the “digital transformation of healthcare in the EU”. The eHealth Stakeholder Group was first launched in 2012 and in its first iteration (between 2012 and 2015), contributed to the development …

On July 25, 2019, the UK’s Information Commissioner’s Office (“ICO”) published a blog on the trade-offs between different data protection principles when using Artificial Intelligence (“AI”).  The ICO recognizes that AI systems must comply with several data protection principles and requirements, which at times may pull organizations in different directions.  The blog identifies notable trade-offs that may arise, provides some practical tips for resolving these trade-offs, and offers worked examples on visualizing and mathematically minimizing trade-offs.
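To make “mathematically minimizing trade-offs” concrete, here is a hypothetical sketch (it is not taken from the ICO blog, and the candidate designs and numbers are invented) that scalarizes an accuracy-versus-data-minimization trade-off into a single weighted objective, then sweeps the weight to trace out the trade-off:

    # Each candidate system design trades predictive error against the
    # amount of personal data processed.  All numbers are invented.
    candidates = [
        ("all features",               0.08, 1.00),
        ("drop location history",      0.10, 0.60),
        ("drop location and contacts", 0.14, 0.30),
        ("aggregate statistics only",  0.22, 0.10),
    ]

    def best_design(lam):
        # Scalarize the trade-off: minimize error + lam * data_use.
        # A larger lam expresses a stronger preference for data
        # minimization over accuracy.
        return min(candidates, key=lambda c: c[1] + lam * c[2])

    for lam in (0.0, 0.1, 0.3, 1.0):
        name, err, data_use = best_design(lam)
        print(f"lambda={lam:.1f}: {name} "
              f"(error {err:.2f}, data use {data_use:.2f})")

Sweeping the weight in this way surfaces the efficient designs one at a time, which is one simple way to visualize where an organization’s chosen balance between the two principles sits.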

The ICO invites organizations with experience of considering these complex issues to provide their views.  This recent blog post on trade-offs is part of its ongoing Call for Input on developing a new framework for auditing AI.  See also our earlier blog on the ICO’s call for input on bias and discrimination in AI systems here.

Continue Reading ICO publishes blog post on AI and trade-offs between data protection principles

On June 25, 2019, as part of its continuing work on the AI Auditing Framework, the UK Information Commissioner’s Office (ICO) published a blog setting out its views on human bias and discrimination in AI systems. The ICO has also called for input on specific questions relating to human bias and discrimination, set out …
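By way of illustration only (this example is ours, not the ICO’s), one simple statistical check for this kind of discrimination is the demographic parity difference: the gap in favorable-outcome rates between groups defined by a protected attribute.  All data below is synthetic:

    import numpy as np

    # Synthetic outcomes for two groups defined by a protected
    # attribute; the data and rates are made up for illustration.
    rng = np.random.default_rng(1)
    group = rng.integers(0, 2, size=1_000)
    # Bake in a gap: group 1 is approved less often than group 0.
    approved = rng.random(1_000) < np.where(group == 1, 0.55, 0.70)

    rate_0 = approved[group == 0].mean()
    rate_1 = approved[group == 1].mean()
    print(f"approval rate, group 0: {rate_0:.2f}")
    print(f"approval rate, group 1: {rate_1:.2f}")
    print(f"demographic parity difference: {abs(rate_0 - rate_1):.2f}")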

On June 10, 2019, the UK’s Government Digital Service and the Office for Artificial Intelligence released guidance on using artificial intelligence in the public sector (the “Guidance”).  The Guidance offers practical advice for public sector organizations implementing artificial intelligence (AI) solutions.

The Guidance will be of interest to companies that provide AI solutions to UK public sector organizations, as it will influence which AI projects public sector organizations pursue and the processes they follow to implement AI systems.  Because the UK’s National Health Service (NHS) is a public sector organization, the Guidance is also likely to be relevant to digital health service providers seeking to supply AI technologies to NHS organizations.

The Guidance consists of three sections, summarized below: (1) understanding AI; (2) assessing, planning and managing AI; and (3) using AI ethically and safely. The Guidance also links to summaries of examples where AI systems have been used in the public sector and elsewhere.

Continue Reading UK Government’s Guide to Using AI in the Public Sector

On June 3, 2019, the UK Information Commissioner’s Office (“ICO”) released an Interim Report on a collaboration project with The Alan Turing Institute (“Institute”) called “Project ExplAIn.” The purpose of this project, according to the ICO, is to develop “practical guidance” for organizations on complying with UK data protection law when using artificial intelligence (“AI”) decision-making systems; in particular, to explain the impact AI decisions may have on individuals. This Interim Report may be of particular relevance to organizations considering how to meet transparency obligations when deploying AI systems that make automated decisions falling within the scope of Article 22 of the GDPR.
Continue Reading ICO’s Interim Report on Explaining AI

On April 2, 2019, FDA released a discussion paper entitled “Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)” (the “AI Framework”). The AI Framework is the Agency’s first policy document describing a potential regulatory approach for medical devices that use artificial intelligence (“AI”) and machine learning (“ML”). The AI Framework does not establish new requirements or an official policy, but rather was released by FDA to seek early input prior to the development of a draft guidance. FDA acknowledges that the approach “may require additional statutory authority to implement fully.”

In an accompanying press release, former FDA Commissioner Scott Gottlieb outlined the need for a “more tailored” regulatory paradigm for algorithms that learn and adapt in the real world. FDA’s medical device regulation scheme was not designed for dynamic machine learning algorithms, as the Agency traditionally encounters products that are static at the time of FDA review. The AI Framework is FDA’s attempt to develop “an appropriate framework that allows the software to evolve in ways to improve its performance while ensuring that changes meet [FDA’s] gold standard for safety and effectiveness throughout the product’s lifecycle.”
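To illustrate the mismatch in concrete terms, consider a hypothetical sketch (the function, thresholds and metrics below are invented and are not the FDA’s protocol): an adaptive system produces a new candidate version each time it retrains, so deployment of each version has to be gated by pre-specified acceptance criteria rather than by a fresh review of a static product:

    # Invented thresholds and names; not the FDA's actual change
    # protocol.  The point: each retraining yields a new candidate
    # version, and deployment is gated by criteria agreed in advance.
    def acceptance_check(candidate, baseline,
                         min_sensitivity=0.90, max_regression=0.02):
        # Reject if the retrained model misses the pre-agreed
        # sensitivity floor...
        if candidate["sensitivity"] < min_sensitivity:
            return False
        # ...or if specificity regresses materially versus the
        # currently deployed version.
        if baseline["specificity"] - candidate["specificity"] > max_regression:
            return False
        return True

    baseline = {"sensitivity": 0.91, "specificity": 0.88}
    retrained = {"sensitivity": 0.93, "specificity": 0.87}
    print("deploy retrained model:", acceptance_check(retrained, baseline))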
Continue Reading FDA Outlines Proposed Framework for Regulating Artificial Intelligence Software