Artificial Intelligence (AI)

On January 6, 2021, the UK’s AI Council (an independent government advisory body) published its AI Roadmap (“Roadmap”). In addition to calling for a Public Interest Data Bill to ‘protect against automation and collective harms’, the Roadmap acknowledges the need to counteract public suspicion of AI and makes 16 recommendations, based on three main pillars, to guide the UK Government’s AI strategy.
Continue Reading AI Update: The Future of AI Policy in the UK

On 18 January 2021, the UK Parliamentary Office of Science and Technology (“POST”) published its AI and Healthcare Research Briefing about the use of artificial intelligence (“AI”) in the UK healthcare system (the “Briefing”).  The Briefing considers the potential impacts of AI on the cost and quality of healthcare, and the challenges posed by the wider adoption of AI, including safety, privacy and health inequalities.

The Briefing summarises the different possible applications of AI in healthcare settings, which raise unique considerations for healthcare providers.  It notes that AI, developed through machine learning algorithms, is not yet widely used within the NHS, but some AI products are at various stages of trial and evaluation.  The areas of healthcare identified by the Briefing as having the potential for AI to be incorporated include (among others): interpretation of medical imaging, planning patients’ treatment, and patient-facing applications such as voice assistants, smartphone apps and wearable devices.
Continue Reading AI Update: UK Parliament Research Briefing on AI in the UK Healthcare System

FDA has long recognized the significant potential of artificial intelligence- and machine learning- (AI/ML-) based software as a medical device (SaMD) to transform health care as well as the unique challenges presented by AI/ML-based software under the Agency’s traditional medical device regulatory framework.  On January 12, 2021, FDA issued the
Continue Reading FDA Releases AI Action Plan

On 17 December 2020, the Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI) published a Feasibility Study (the “Study”) on Artificial Intelligence (AI) legal standards. The Study examines the feasibility and potential elements of a legal framework for the development and deployment of AI, based on the Council of Europe’s human rights standards. Its main conclusion is that current regulations do not suffice in creating the necessary legal certainty, trust, and level playing field needed to guide the development of AI. Accordingly, it proposes the development of a new legal framework for AI consisting of both binding and non-binding Council of Europe instruments.

The Study recognizes the major opportunities of AI systems to promote societal development and human rights. Alongside these opportunities, it also identifies the risks that AI could endanger rights protected by the European Convention on Human Rights (ECHR), as well as democracy and the rule of law. Examples of the risks to human rights cited in the Study include AI systems that undermine the right to equality and non-discrimination by perpetuating biases and stereotypes (e.g., in employment), and AI-driven surveillance and tracking applications that jeopardise individuals’ right to freedom of assembly and expression.
Continue Reading The Council of Europe Publishes Feasibility Study on Developing a Legal Instrument for Ethical AI

In this edition of our regular roundup on legislative initiatives related to artificial intelligence (AI), cybersecurity, the Internet of Things (IoT), and connected and autonomous vehicles (CAVs), we focus on key developments in the European Union (EU).
Continue Reading AI, IoT, and CAV Legislative Update: EU Spotlight (Third Quarter 2020)

The National Institute of Standards and Technology (“NIST”) is seeking comments on the first draft of the Four Principles of Explainable Artificial Intelligence (NISTIR 8312), a white paper that seeks to define the principles that capture the fundamental properties of explainable AI systems.  NIST will be accepting comments until
Continue Reading AI Standards Update: NIST Solicits Comments on the Four Principles of Explainable Artificial Intelligence and Certain Other Developments

On July 30, 2020, the UK Information Commissioner’s Office (“ICO”) published its final guidance on Artificial Intelligence (the “Guidance”).  The Guidance sets out a framework for auditing AI systems for compliance with data protection obligations under the GDPR and the UK Data Protection Act 2018.  The Guidance builds on the ICO’s earlier commitment to enable good data protection practice in AI, and on previous guidance and blogs issued on specific issues relating to AI (for example, on explaining decisions made with AI, trade-offs, and bias and discrimination, all covered in Covington blogs).
Continue Reading UK ICO publishes guidance on Artificial Intelligence

On June 4, 2020, Representatives Anna Eshoo (D-CA-18), Anthony Gonzalez (R-OH-16), and Mikie Sherrill (D-NJ-11) introduced the National AI Research Resource Task Force Act.  This bipartisan bill would create a task force to propose a roadmap for developing and sustaining a national research cloud for AI.  The cloud would
Continue Reading Bipartisan Bill Seeks to Create National Artificial Intelligence Research Resource Task Force

On May 28, the White House Office of Science and Technology Policy (OSTP) hosted a meeting of the G7 Science & Technology (S&T) Ministers to collaborate on COVID-19 response and recovery.  The G7 S&T Ministers emerged from the meeting with a declaration, in which they expressed their intent to:
Continue Reading White House Hosts G7 Science and Technology Ministerial on COVID-19