On 13 August 2019, the European Commission opened a call for expression of interest to relaunch the eHealth Stakeholder Group with a view to supporting the “digital transformation of healthcare in the EU”. The eHealth Stakeholder Group was first launched in 2012 and in its first iteration (between 2012 and…
Continue Reading Commission relaunch of eHealth Stakeholder Group
Artificial Intelligence (AI)
ICO publishes blog post on AI and trade-offs between data protection principles
On July 25, 2019, the UK’s Information Commissioner’s Office (“ICO”) published a blog on the trade-offs between different data protection principles when using Artificial Intelligence (“AI”). The ICO recognizes that AI systems must comply with several data protection principles and requirements, which at times may pull organizations in different directions. The blog identifies notable trade-offs that may arise, provides some practical tips for resolving these trade-offs, and offers worked examples on visualizing and mathematically minimizing trade-offs.
The ICO invites organizations with experience of considering these complex issues to provide their views. This recent blog post on trade-offs forms part of the ICO’s ongoing Call for Input on developing a new framework for auditing AI. See also our earlier blog on the ICO’s call for input on bias and discrimination in AI systems here.
Continue Reading ICO publishes blog post on AI and trade-offs between data protection principles
ICO’s Call for Input on Bias and Discrimination in AI systems
On June 25, 2019, as part of its continuing work on the AI Auditing Framework, the UK Information Commissioner’s Office (ICO) published a blog setting out its views on human bias and discrimination in AI systems. The ICO has also called for input on specific questions relating to human…
Continue Reading ICO’s Call for Input on Bias and Discrimination in AI systems
UK Government’s Guide to Using AI in the Public Sector
On June 10, 2019, the UK Government’s Digital Service and the Office for Artificial Intelligence released guidance on using artificial intelligence in the public sector (the “Guidance”). The Guidance aims to give public sector organizations practical advice when they implement artificial intelligence (AI) solutions.
The Guidance will be of interest to companies that provide AI solutions to UK public sector organizations, as it will influence what kinds of AI projects public sector organizations will be interested in pursuing, and the processes that they will go through to implement AI systems. Because the UK’s National Health Service (NHS) is a public sector organization, this Guidance is also likely to be relevant to digital health service providers that are seeking to provide AI technologies to NHS organizations.
The Guidance consists of three sections, as summarized below: (1) understanding AI; (2) assessing, planning and managing AI; and (3) using AI ethically and safely. The Guidance also links to summaries of examples where AI systems have been used in the public sector and elsewhere.
Continue Reading UK Government’s Guide to Using AI in the Public Sector
ICO’s Interim Report on Explaining AI
On June 3, 2019, the UK Information Commissioner’s Office (“ICO”) released an Interim Report on a collaboration project with The Alan Turing Institute (“Institute”) called “Project ExplAIn.” The purpose of this project, according to the ICO, is to develop “practical guidance” for organizations on complying with UK data protection law when using artificial intelligence (“AI”) decision-making systems; in particular, to explain the impact AI decisions may have on individuals. This Interim Report may be of particular relevance to organizations considering how to meet transparency obligations when deploying AI systems that make automated decisions falling within the scope of Article 22 of the GDPR.
Continue Reading ICO’s Interim Report on Explaining AI
FDA Outlines Proposed Framework for Regulating Artificial Intelligence Software
On April 2, 2019, FDA released a discussion paper entitled “Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)” (the “AI Framework”). The AI Framework is the Agency’s first policy document describing a potential regulatory approach for medical devices that use artificial intelligence (“AI”) and machine learning (“ML”). The AI Framework does not establish new requirements or an official policy, but rather was released by FDA to seek early input prior to the development of a draft guidance. FDA acknowledges that the approach “may require additional statutory authority to implement fully.”
In an accompanying press release, former FDA Commissioner Scott Gottlieb outlined the need for a “more tailored” regulatory paradigm for algorithms that learn and adapt in the real world. FDA’s medical device regulation scheme was not designed for dynamic machine learning algorithms, as the Agency traditionally encounters products that are static at the time of FDA review. The AI Framework is FDA’s attempt to develop “an appropriate framework that allows the software to evolve in ways to improve its performance while ensuring that changes meet [FDA’s] gold standard for safety and effectiveness throughout the product’s lifecycle.”
Continue Reading FDA Outlines Proposed Framework for Regulating Artificial Intelligence Software
EU High-Level Working Group Publishes Ethics Guidelines for Trustworthy AI
On 8 April 2019, the EU High-Level Expert Group on Artificial Intelligence (the “AI HLEG”) published its “Ethics Guidelines for Trustworthy AI” (the “guidance”). This follows a stakeholder consultation on its draft guidelines published in December 2018 (the “draft guidance”) (see our previous blog post for more information on the draft guidance). The guidance retains many of the core elements of the draft guidance, but provides a more streamlined conceptual framework and elaborates further on some of the more nuanced aspects, such as the interaction with existing legislation and the reconciliation of tensions between competing ethical requirements.
According to the European Commission’s Communication accompanying the guidance, the Commission will launch a piloting phase starting in June 2019 to collect more detailed feedback from stakeholders on how the guidance can be implemented, with a particular focus on the assessment list set out in Chapter III. The Commission plans to evaluate the workability and feasibility of the guidance by the end of 2019, and the AI HLEG will review and update the guidance in early 2020 based on the evaluation of feedback received during the piloting phase.
Continue Reading EU High-Level Working Group Publishes Ethics Guidelines for Trustworthy AI
ICO opens beta phase of privacy “regulatory sandbox”
On 29 March 2019, the ICO opened the beta phase of the “regulatory sandbox” scheme (the “Sandbox”), which is a new service designed to support organizations that are developing innovative and beneficial projects that use personal data. The application process for participating in the Sandbox is now open, and applications…
Continue Reading ICO opens beta phase of privacy “regulatory sandbox”
EU Working Group Publishes Draft Guidance on AI Ethics
On 18 December 2018, the EU High-Level Expert Group on Artificial Intelligence (the “AI HLEG”) published new draft guidance on “AI Ethics” (the “guidance”). The AI HLEG is a European Commission-backed working group made up of representatives from industry, academia and NGOs, and was formed as part of the Commission’s ongoing work to develop EU policy responses to the development, challenges and new opportunities posed by AI technologies. Stakeholders are invited to comment on the draft through the European AI Alliance before it is finalized in March 2019.
The guidance recognizes the potential benefits of AI technologies for Europe, but also stresses that AI must be developed and implemented with a “human-centric approach” that results in “Trustworthy AI”. The guidance then explains in detail the concept of “Trustworthy AI” and the issues stakeholders should navigate in order to achieve it. A more detailed summary of the guidance is set out below.
This guidance is not binding, but it is likely to influence EU policymakers as they consider whether and how to legislate in the AI space going forward. The AI HLEG also envisages that the final version of the guidance in March 2019 will include a mechanism allowing stakeholders to voluntarily endorse its principles. The guidance further states that the AI HLEG will consider making legislative recommendations in its separate deliverable on “Policy & Investment Recommendations,” due in May 2019.
Continue Reading EU Working Group Publishes Draft Guidance on AI Ethics
Key Takeaways from FDA’s Framework for Real-World Evidence for Pharmaceuticals
On December 7, 2018, FDA published the much-anticipated “Framework for FDA’s Real-World Evidence Program” for drugs and biological products (the “Framework”). In a statement announcing the Framework, Commissioner Gottlieb recognized the opportunities and challenges of using real-world data (“RWD”) and real-world evidence (“RWE”) to enhance regulatory decision-making, noting that leveraging this information is “a top strategic priority for the FDA.” FDA opened a docket for public comments on the Framework through February 5, 2019.
The Framework focuses in particular on the use of RWE to support regulatory decisions about effectiveness. The agency outlines three considerations that will guide its overall RWE Program and inform the agency’s assessment of individual drug applications. The Framework also offers background on the agency’s previous use of and current initiatives with respect to RWE and related topics, such as innovative clinical trial designs. This blog post provides an overview of FDA’s proposal and highlights a few initial takeaways noted by Covington’s Digital Health team.
Continue Reading Key Takeaways from FDA’s Framework for Real-World Evidence for Pharmaceuticals