
Artificial Intelligence (AI)

The International Coalition of Medicines Regulatory Authorities (“ICMRA”) has published a report on the use of artificial intelligence (“AI”) to develop medicines (the “AI Report”) that provides a series of recommendations on how regulators and stakeholders can address challenges posed by AI.  The ICMRA notes that there are numerous opportunities to apply AI to medicines development, but that AI poses a number of challenges to existing regulatory frameworks.  The AI Report discusses these opportunities and challenges in detail based on several case studies, and provides a set of recommendations for implementation by the ICMRA and its member authorities, which include the European Medicines Agency (the “EMA”), the U.S. Food and Drug Administration, and the World Health Organisation.  Based on the AI Report, we expect to see an increased focus on adapting regulatory frameworks to deal with AI products going forward, at both the international and national level.

Continue Reading ICMRA Publishes Report and Recommendations on AI and Medicinal Products

In April 2021, the European Commission released its proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the “Regulation”), which would establish rules on the development, placing on the market, and use of artificial intelligence systems (“AI systems”) across the EU. The proposal, comprising 85 articles and nine annexes, is part of a wider package of Commission initiatives aimed at positioning the EU as a world leader in trustworthy and ethical AI and technological innovation.

The Commission’s objectives with the Regulation are twofold: to promote the development of AI technologies and harness their potential benefits, while also protecting individuals against potential threats to their health, safety, and fundamental rights posed by AI systems. To that end, the Commission proposal focuses primarily on AI systems identified as “high-risk,” but also prohibits three AI practices and imposes transparency obligations on providers of certain non-high-risk AI systems. Notably, compliance, oversight, and verification obligations would impose significant administrative costs on providers of high-risk AI systems, estimated at around 10 percent of the systems’ underlying value. This blog highlights several key aspects of the proposal.
Continue Reading European Commission Proposes New Artificial Intelligence Regulation

On January 6, 2021, the UK’s AI Council (an independent government advisory body) published its AI Roadmap (“Roadmap”). In addition to calling for a Public Interest Data Bill to ‘protect against automation and collective harms’, the Roadmap acknowledges the need to counteract public suspicion of AI and makes 16 recommendations, based on three main pillars, to guide the UK Government’s AI strategy.

Continue Reading AI Update: The Future of AI Policy in the UK

On 18 January 2021, the UK Parliamentary Office of Science and Technology (“POST”) published its AI and Healthcare Research Briefing about the use of artificial intelligence (“AI”) in the UK healthcare system (the “Briefing”).  The Briefing considers the potential impacts of AI on the cost and quality of healthcare, and the challenges posed by the wider adoption of AI, including safety, privacy and health inequalities.

The Briefing summarises the different possible applications of AI in healthcare settings, which raise unique considerations for healthcare providers.  It notes that AI, developed through machine learning algorithms, is not yet widely used within the NHS, but some AI products are at various stages of trial and evaluation.  The areas of healthcare identified by the Briefing as having the potential for AI to be incorporated include (among others): interpretation of medical imaging, planning patients’ treatment, and patient-facing applications such as voice assistants, smartphone apps and wearable devices.


Continue Reading AI Update: UK Parliament Research Briefing on AI in the UK Healthcare System

On 17 December 2020, the Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI) published a Feasibility Study (the “Study”) on Artificial Intelligence (AI) legal standards. The Study examines the feasibility and potential elements of a legal framework for the development and deployment of AI, based on the Council of Europe’s human rights standards. Its main conclusion is that current regulations do not suffice to create the legal certainty, trust, and level playing field needed to guide the development of AI. Accordingly, it proposes the development of a new legal framework for AI consisting of both binding and non-binding Council of Europe instruments.

The Study recognizes the major opportunities of AI systems to promote societal development and human rights. Alongside these opportunities, it also identifies the risks that AI could endanger rights protected by the European Convention on Human Rights (ECHR), as well as democracy and the rule of law. Examples of the risks to human rights cited in the Study include AI systems that undermine the right to equality and non-discrimination by perpetuating biases and stereotypes (e.g., in employment), and AI-driven surveillance and tracking applications that jeopardise individuals’ right to freedom of assembly and expression.


Continue Reading The Council of Europe Publishes Feasibility Study on Developing a Legal Instrument for Ethical AI

In this edition of our regular roundup on legislative initiatives related to artificial intelligence (AI), cybersecurity, the Internet of Things (IoT), and connected and autonomous vehicles (CAVs), we focus on key developments in the European Union (EU).

Continue Reading AI, IoT, and CAV Legislative Update: EU Spotlight (Third Quarter 2020)

On July 30, 2020, the UK Information Commissioner’s Office (“ICO”) published its final guidance on Artificial Intelligence (the “Guidance”).  The Guidance sets out a framework for auditing AI systems for compliance with data protection obligations under the GDPR and the UK Data Protection Act 2018.  The Guidance builds on the ICO’s earlier commitment to enable good data protection practice in AI, and on previous guidance and blogs issued on specific issues relating to AI (for example, on explaining decisions made by AI, trade-offs, and bias and discrimination, all covered in Covington blogs).

Continue Reading UK ICO publishes guidance on Artificial Intelligence

NHSX recently published “A Buyer’s Checklist for AI in Health and Care” (the “Guidance”), which sets out 10 key questions that will be of use to parties deploying AI solutions or conducting data-driven projects (in a health and care setting or otherwise).  For example, the Guidance highlights:

  • key data-related considerations

On February 10, 2020, the UK Government’s Committee on Standards in Public Life (the “Committee”) published its Report on Artificial Intelligence and Public Standards (the “Report”). The Report examines potential opportunities and hurdles in the deployment of AI in the public sector, including how such deployment may implicate the “Seven Principles of Public Life” applicable to holders of public office, also known as the “Nolan Principles” (available here). It also sets out practical recommendations for the use of AI in public services, which will be of interest to companies supplying AI technologies to the public sector (including the UK National Health Service (“NHS”)), or offering public services directly to UK citizens on behalf of the UK Government. The Report elaborates on the UK Government’s June 2019 Guide to using AI in the public sector (see our previous blog here).

Continue Reading UK Government’s Advisory Committee Publishes Report on Public Sector Use of AI

In this final instalment of our series of blogs on the European Commission’s plans for AI and data, announced on 19 February 2020, we discuss some potential effects on companies in the digital health sector. As discussed in our previous blog posts (here, here and here), the papers published by the European Commission cover broad concepts and apply generally — but, in places, they specifically mention healthcare and medical devices.

The Commission recognizes the important role that AI and big data analysis can play in improving healthcare, but also notes the specific risks that could arise given the effects that such new technologies may have on individuals’ health, safety, and fundamental rights. The Commission also notes that existing EU legislation already affords a high level of protection for individuals, including through medical devices laws and data protection laws. The Commission’s proposals therefore focus on addressing the gap between these existing rules and the residual risks that remain in respect of new technologies. Note that the Commission’s proposals in the White Paper on AI are open for public consultation until 19 May 2020.


Continue Reading European Commission’s Plans for AI and Data: Focus on Digital Health (Part 4 of 4)
