Artificial Intelligence (AI)

On 19 September 2019, the European Parliamentary Research Service (“EPRS”)—the European Parliament’s in-house research service—released a briefing paper that summarizes the current status of the EU’s approach to developing a regulatory framework for ethical AI.  Although not a policymaking body, the EPRS can provide useful insights into the direction of EU policy on an issue.  The paper summarizes recent calls in the EU for adopting legally binding instruments to regulate AI, in particular to set common rules on AI transparency, establish common requirements for fundamental rights impact assessments, and provide an adequate legal framework for facial recognition technology.

The briefing paper follows publication of the European Commission’s high-level expert group’s Ethics Guidelines for Trustworthy Artificial Intelligence (the “Guidelines”), and the announcement by incoming Commission President Ursula von der Leyen that she will put forward legislative proposals for a “coordinated European approach to the human and ethical implications of AI” within her first 100 days in office.

On 13 August 2019, the European Commission opened a call for expression of interest to relaunch the eHealth Stakeholder Group with a view to supporting the “digital transformation of healthcare in the EU”.  The eHealth Stakeholder Group was first launched in 2012 and, in its first iteration (between 2012 and 2015), contributed to the development…

On July 25, 2019, the UK’s Information Commissioner’s Office (“ICO”) published a blog on the trade-offs between different data protection principles when using Artificial Intelligence (“AI”).  The ICO recognizes that AI systems must comply with several data protection principles and requirements, which at times may pull organizations in different directions.  The blog identifies notable trade-offs that may arise, offers practical tips for resolving them, and provides worked examples of visualizing and mathematically minimizing trade-offs.
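To make the idea of “mathematically minimizing” a trade-off concrete, the sketch below is our own illustration (not an example from the ICO’s blog): it scalarizes a privacy-accuracy trade-off into a single weighted objective and picks the setting that minimizes it.  The noise scales, accuracy figures, and risk scores are all hypothetical.

```python
# Illustrative sketch (not from the ICO's blog): balancing a privacy
# safeguard (e.g., noise added to training data) against model accuracy
# by minimizing a single weighted objective. All numbers are hypothetical.

# Candidate privacy settings: higher noise = stronger privacy, lower accuracy.
candidates = [
    # (noise_scale, estimated_accuracy, estimated_privacy_risk)
    (0.0, 0.95, 0.90),
    (0.5, 0.92, 0.55),
    (1.0, 0.88, 0.30),
    (2.0, 0.80, 0.12),
    (4.0, 0.65, 0.05),
]

LAMBDA = 0.7  # relative weight placed on privacy risk vs. accuracy loss


def combined_loss(accuracy: float, privacy_risk: float) -> float:
    """Scalarized trade-off: accuracy shortfall plus weighted privacy risk."""
    return (1.0 - accuracy) + LAMBDA * privacy_risk


best = min(candidates, key=lambda c: combined_loss(c[1], c[2]))
print(f"Chosen noise scale: {best[0]} "
      f"(accuracy={best[1]:.2f}, privacy risk={best[2]:.2f})")
```

Raising LAMBDA shifts the chosen point toward stronger privacy at the cost of accuracy; in practice each point on the curve would come from empirical measurement rather than assumed values.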

The ICO invites organizations with experience of considering these complex issues to provide their views.  This recent blog post on trade-offs is part of its ongoing Call for Input on developing a new framework for auditing AI.  See also our earlier blog on the ICO’s call for input on bias and discrimination in AI systems here.

On June 25, 2019, as part of its continuing work on the AI Auditing Framework, the UK Information Commissioner’s Office (ICO) published a blog setting out its views on human bias and discrimination in AI systems.  The ICO has also called for input on specific questions relating to human bias and discrimination, set out…
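As a rough illustration of how bias of this kind can be surfaced (our sketch, not drawn from the ICO’s blog), one widely used check is the demographic parity difference: the gap in favourable-outcome rates between protected groups.  The decision data below are hypothetical.

```python
# Illustrative sketch: one common check for discrimination in an AI
# system's outputs is the demographic parity difference -- the gap in
# favourable-outcome rates between groups. Data here are hypothetical.

outcomes = [
    # (protected_group, received_favourable_decision)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]


def favourable_rate(group: str) -> float:
    """Share of decisions for this group that were favourable."""
    decisions = [ok for g, ok in outcomes if g == group]
    return sum(decisions) / len(decisions)


gap = abs(favourable_rate("A") - favourable_rate("B"))
print(f"Group A rate: {favourable_rate('A'):.2f}")
print(f"Group B rate: {favourable_rate('B'):.2f}")
print(f"Demographic parity difference: {gap:.2f}")  # 0 = parity
```

A gap of zero indicates parity on this one metric; no single statistic on its own establishes whether a system is discriminatory.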

On June 10, 2019, the UK Government’s Digital Service and the Office for Artificial Intelligence released guidance on using artificial intelligence in the public sector (the “Guidance”).  The Guidance aims to give public sector organizations practical advice on implementing artificial intelligence (AI) solutions.

The Guidance will be of interest to companies that provide AI solutions to UK public sector organizations, as it will influence what kinds of AI projects public sector organizations will be interested in pursuing, and the processes that they will go through to implement AI systems.  Because the UK’s National Health Service (NHS) is a public sector organization, this Guidance is also likely to be relevant to digital health service providers that are seeking to provide AI technologies to NHS organizations.

The Guidance consists of three sections: (1) understanding AI; (2) assessing, planning, and managing AI; and (3) using AI ethically and safely, as summarized below.  The Guidance also links to summaries of examples where AI systems have been used in the public sector and elsewhere.

On June 3, 2019, the UK Information Commissioner’s Office (“ICO”) released an Interim Report on a collaboration project with The Alan Turing Institute (the “Institute”) called “Project ExplAIn.”  The purpose of this project, according to the ICO, is to develop “practical guidance” for organizations on complying with UK data protection law when using artificial intelligence (“AI”) decision-making systems, and in particular on explaining the impact AI decisions may have on individuals.  This Interim Report may be of particular relevance to organizations considering how to meet transparency obligations when deploying AI systems that make automated decisions falling within the scope of Article 22 of the GDPR.
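By way of illustration only (this is our sketch, not an example from the Interim Report), the kind of decision-level explanation at issue can be pictured as ranking each input’s contribution to an individual automated decision; the model, feature names, weights, and threshold below are all hypothetical.

```python
# Illustrative sketch (not from the Interim Report): generating a
# plain-language, per-decision explanation for a simple linear scoring
# model. Feature names, weights, and the applicant are hypothetical.

WEIGHTS = {"income": 0.4, "years_at_address": 0.2, "missed_payments": -0.9}
THRESHOLD = 1.0

applicant = {"income": 3.2, "years_at_address": 1.5, "missed_payments": 2.0}


def explain(features: dict) -> None:
    """Print the decision plus each feature's contribution, most influential first."""
    contributions = {k: WEIGHTS[k] * v for k, v in features.items()}
    score = sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "declined"
    print(f"Decision: {decision} (score {score:.2f}, threshold {THRESHOLD})")
    # Rank factors by absolute influence on this individual decision.
    for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        direction = "increased" if c > 0 else "decreased"
        print(f"  {name} {direction} the score by {abs(c):.2f}")


explain(applicant)
```

For a real system, the contributions would come from a model-appropriate attribution method rather than hand-set linear weights.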

On April 2, 2019, FDA released a discussion paper entitled “Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)” (the “AI Framework”). The AI Framework is the Agency’s first policy document describing a potential regulatory approach for medical devices that use artificial intelligence (“AI”) and machine learning (“ML”). The AI Framework does not establish new requirements or an official policy, but rather was released by FDA to seek early input prior to the development of a draft guidance. FDA acknowledges that the approach “may require additional statutory authority to implement fully.”

In an accompanying press release, former FDA Commissioner Scott Gottlieb outlined the need for a “more tailored” regulatory paradigm for algorithms that learn and adapt in the real world. FDA’s medical device regulation scheme was not designed for dynamic machine learning algorithms, as the Agency traditionally encounters products that are static at the time of FDA review. The AI Framework is FDA’s attempt to develop “an appropriate framework that allows the software to evolve in ways to improve its performance while ensuring that changes meet [FDA’s] gold standard for safety and effectiveness throughout the product’s lifecycle.”
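One way to picture the lifecycle control FDA is contemplating (a minimal sketch under our own assumptions, not a mechanism set out in the AI Framework) is a predefined performance gate: a retrained model is promoted into use only if it still meets pre-specified criteria on a locked validation set.

```python
# Illustrative sketch, not the AI Framework's actual mechanism: a simple
# pre-deployment gate that promotes a retrained model only if it meets
# predefined performance criteria on a locked validation set. The metric
# names and thresholds are hypothetical.

MINIMUM_CRITERIA = {"sensitivity": 0.92, "specificity": 0.90}


def meets_criteria(metrics: dict) -> bool:
    """Return True only if every predefined criterion is satisfied."""
    return all(metrics.get(name, 0.0) >= floor
               for name, floor in MINIMUM_CRITERIA.items())


def maybe_promote(candidate_metrics: dict) -> str:
    """Decide whether a retrained candidate model clears the gate."""
    if meets_criteria(candidate_metrics):
        return "promote: candidate model cleared the predefined gate"
    return "hold: candidate model falls short; keep the current version"


# A retrained candidate that improves sensitivity but dips on specificity.
print(maybe_promote({"sensitivity": 0.95, "specificity": 0.88}))
print(maybe_promote({"sensitivity": 0.95, "specificity": 0.93}))
```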

On 8 April 2019, the EU High-Level Expert Group on Artificial Intelligence (the “AI HLEG”) published its “Ethics Guidelines for Trustworthy AI” (the “guidance”).  This follows a stakeholder consultation on its draft guidelines, published in December 2018 (the “draft guidance”) (see our previous blog post for more information on the draft guidance).  The guidance retains many of the core elements of the draft guidance, but provides a more streamlined conceptual framework and elaborates further on some of the more nuanced aspects, such as the interaction with existing legislation and the reconciliation of tensions between competing ethical requirements.

According to the European Commission’s Communication accompanying the guidance, the Commission will launch a piloting phase starting in June 2019 to collect more detailed feedback from stakeholders on how the guidance can be implemented, with a particular focus on the assessment list set out in Chapter III.  The Commission plans to evaluate the workability and feasibility of the guidance by the end of 2019, and the AI HLEG will review and update the guidance in early 2020 based on the evaluation of feedback received during the piloting phase.

On December 7, 2018, FDA published the much-anticipated “Framework for FDA’s Real-World Evidence Program” for drugs and biological products (the “Framework”).  In a statement announcing the Framework, Commissioner Gottlieb recognized the opportunities and challenges of using real-world data (“RWD”) and real-world evidence (“RWE”) to enhance regulatory decision-making, and noted that leveraging this information is “a top strategic priority for the FDA.”  FDA opened a docket for public comments on the Framework through February 5, 2019.

The Framework focuses in particular on the use of RWE to support regulatory decisions about effectiveness.  The agency outlines three considerations that will guide its overall RWE Program and inform its assessment of individual drug applications.  The Framework also offers background on the agency’s previous use of, and current initiatives with respect to, RWE and related topics, such as innovative clinical trial designs.  This blog post provides an overview of FDA’s proposal and highlights a few initial takeaways noted by Covington’s Digital Health team.

Designing data-driven products and services in compliance with privacy requirements can be a challenging process.  Technological innovation enables novel uses of personal data, and companies designing new data-driven products must navigate new, untested, and sometimes unclear requirements of privacy laws, including the General Data Protection Regulation (GDPR).  These challenges are often particularly acute for companies whose products and services leverage artificial intelligence technologies or process sensitive personal data, such as providers of digital health products and services.

Recognising some of these challenges, the Information Commissioner’s Office (ICO) has commenced a consultation on establishing a “regulatory sandbox”.  The first stage is a survey to gather market views on how such a regulatory sandbox might work (the “Survey”).  Interested organisations have until 12 October to reply.

The key feature of the regulatory sandbox is that it allows companies to test ideas, services, and business models without risk of enforcement, in a manner that facilitates greater engagement between industry and the ICO as new products and services are developed.

The regulatory sandbox model has been deployed in other areas, particularly in the financial services sector (see here), including by the Financial Conduct Authority in the UK (see here).

Potential benefits of the regulatory sandbox include reducing regulatory uncertainty, enabling more products to be brought to market, and reducing the time it takes to do so, while ensuring appropriate protections are in place (see the FCA’s report on its regulatory sandbox here for the impact it has had on the financial services sector, including lessons learned).

The ICO indicated earlier this year that it intends to launch the regulatory sandbox in 2019 and will focus on AI applications (see here).

Further details on the scope of the Survey are summarised below.