The UK’s Information Commissioner’s Office (“ICO”) has issued and is consulting on draft guidance about explaining decisions made by AI.  The ICO prepared the guidance with The Alan Turing Institute, which is the UK’s national institute for data science and artificial intelligence.  Among other things, the guidance sets out key principles to follow and steps to take when explaining AI-assisted decisions — including in relation to different types of AI algorithms — and the policies and procedures that organizations should consider putting in place.

The draft guidance builds upon the ICO’s previous work in this area, including its AI Auditing Framework, June 2019 Project ExplAIN interim report, and September 2017 paper ‘Big data, artificial intelligence, machine learning and data protection’.  (Previous blog posts that track this issue are available here.)  Elements of the new draft guidance touch on points that go beyond narrow GDPR requirements, such as AI ethics (see, in particular, the recommendation to provide explanations of the fairness or societal impacts of AI systems).  Other sections of the guidance are quite technical; for example, the ICO provides its own analysis of the possible uses and interpretability of eleven specific types of AI algorithms.

Organizations that develop, test or deploy AI decision-making systems should review the draft guidance and consider responding to the consultation, which is open until January 24, 2020.  A final version is expected to be published later in 2020.

The draft guidance focuses on how organizations that develop, test or deploy AI systems should explain automated decisions about individuals that produce a legal or other significant effect (within the meaning of GDPR Art. 22).  Although this scope is quite specific, the guidance may influence broader discussions on AI transparency and explainability in the UK and at the EU level.

The draft guidance is presented in three separate parts:

Part 1. The basics of explaining AI

This first part notes that the GDPR requires organizations to explain AI-assisted decisions to individuals where such decisions are made without human involvement and produce legal or similarly significant effects on individuals (citing GDPR Articles 22, 13, 14 and 15).

The ICO sets out four key principles — guided by the GDPR — in relation to explaining AI decision-making systems.  For each principle, the ICO identifies different types of explanations that should be provided to individuals, as set out below.

  1. Be transparent: Organizations should make it obvious that AI is being used to make decisions and explain the decisions to individuals in a meaningful way. This means providing:
  • an explanation of the reasons that led to a decision, delivered in an accessible and non-technical way (a rationale explanation); and
  • an explanation of the data that has been used and how it has been used to (i) come to a particular decision and (ii) train and test the AI model (a data explanation).
  2. Be accountable: Organizations should ensure appropriate oversight of AI decision systems, and be answerable to others. This means providing:
  • an explanation of who is involved in, and responsible for, developing, managing and implementing an AI system within the relevant organization, and who to contact for a human review of a decision (a responsibility explanation).
  3. Consider context: The guidance recognizes that there is no one-size-fits-all approach to explaining AI-assisted decisions. When considering how to explain decisions, organizations should take into account the sector, the particular use case and the impact of the AI system on the individual.
  4. Reflect on impacts: The ICO encourages organizations to ask and answer questions about ethical purposes and objectives at the initial stages of AI projects. Organizations should explain the steps that they take during the design and implementation of an AI system to:
  • mitigate risks of unfair bias and discrimination, and ensure that individuals are treated equitably (a fairness explanation); and
  • maximize the accuracy, reliability, security and robustness of the AI system's decisions and behaviors (a safety and performance explanation).

Organizations should also explain the actual or potential impact of an AI system and its decisions on an individual, and on wider society (an impact explanation).

Part 2. Explaining AI in practice

The second part of the draft guidance sets out practical steps that organizations can take to explain AI-assisted decisions and deliver those explanations to individuals.  The ICO stresses that different approaches may be appropriate for different applications of AI, depending on the context in which they are used.

To help with this exercise, the ICO provides checklists of questions for organizations to take into account when developing different types of explanations, together with technical guidance on specific AI models.

The ICO specifically calls out “black box” or opaque AI systems, which it understands to be any AI system whose inner workings and rationale are opaque or inaccessible to human understanding (e.g., neural networks, ensemble methods, and support vector machines). The ICO suggests that these should only be used (i) if organizations have thoroughly considered their potential impacts and risks in advance; and (ii) if supplemental tools to interpret such systems are available to provide affected individuals with meaningful information.
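By way of illustration only (the sketch below is not drawn from the ICO's guidance), one widely used class of supplemental interpretability tools is model-agnostic feature attribution.  The Python sketch below applies scikit-learn's permutation importance to an opaque model trained on synthetic data, with hypothetical feature names, to show the kind of feature-level information such tools can surface:

```python
# Illustrative sketch only -- not from the ICO guidance. Data and feature
# names are hypothetical; permutation importance is one example of a
# model-agnostic tool for interpreting an otherwise opaque model.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.svm import SVC

# Synthetic stand-in for decision data.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "tenure", "age", "region_code"]  # hypothetical

# An opaque model of the kind the ICO flags (an RBF-kernel SVM).
model = SVC(kernel="rbf").fit(X, y)

# How much does randomly shuffling each feature degrade accuracy?
# Larger drops suggest the feature matters more to the model's decisions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, mean, std in zip(feature_names,
                           result.importances_mean,
                           result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

Techniques of this kind do not open the black box itself; they approximate which inputs drove the model's outputs, which is the sort of meaningful information the guidance contemplates providing to affected individuals.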

The ICO also provides its own analysis of the possible uses and interpretability of eleven different types of AI algorithms (e.g., linear regression, decision trees, support vector machines, and artificial neural networks).
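To make the interpretability contrast concrete (again, an illustrative sketch rather than an example from the guidance), the coefficients of a linear model such as logistic regression can be read directly as a rationale for a decision, in a way that a neural network's internal weights cannot.  Data and feature names below are hypothetical:

```python
# Hedged illustration: a logistic regression's coefficients map directly
# onto a "rationale explanation"; data and feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Each coefficient states whether, and how strongly, a feature pushes
# the decision towards a positive outcome.
for name, coef in zip(["feature_a", "feature_b", "feature_c"],
                      model.coef_[0]):
    direction = "increases" if coef > 0 else "decreases"
    print(f"{name} {direction} the likelihood of a positive decision "
          f"(weight {coef:+.2f})")
```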

This part of the guidance also provides examples of how organizations can select which types of explanations to prioritize depending on context (for AI-assisted recruitment and AI-assisted medical diagnosis).  Annex I to Part 2 contains a step-by-step example of building an explanation for an AI-assisted cancer diagnosis tool.

Part 3. What explaining AI means for your organization

In the third and final part of the draft guidance, the ICO explains the various roles, policies, procedures and documentation that organizations could put in place to ensure that they are able to provide meaningful explanations to individuals.  The draft guidance notes that anyone involved in the decision-making pipeline has a role to play in providing an explanation of an AI system.  The ICO recommends that organizations create new, or update existing, policies and procedures to codify the roles and responsibilities for explaining AI systems, including in relation to data collection, model selection, explanation extraction/delivery and impact assessment, amongst others.

Mark Young

Mark Young is an experienced tech regulatory lawyer and a vice-chair of Covington’s Data Privacy and Cybersecurity Practice Group. He advises major global companies on their most challenging data privacy compliance matters and investigations. Mark also leads on EMEA cybersecurity matters at the firm. In these contexts, he has worked closely with some of the world’s leading technology and life sciences companies and other multinationals.

Mark has been recognized for several years in Chambers UK as “a trusted adviser – practical, results-oriented and an expert in the field;” “fast, thorough and responsive;” “extremely pragmatic in advice on risk;” “provides thoughtful, strategic guidance and is a pleasure to work with;” and has “great insight into the regulators.” According to the most recent edition (2024), “He’s extremely technologically sophisticated and advises on true issues of first impression, particularly in the field of AI.”

Drawing on over 15 years of experience, Mark specializes in:

  • Advising on potential exposure under GDPR and international data privacy laws in relation to innovative products and services that involve cutting-edge technology, e.g., AI, biometric data, and connected devices.
  • Providing practical guidance on novel uses of personal data, responding to individuals exercising rights, and data transfers, including advising on Binding Corporate Rules (BCRs) and compliance challenges following Brexit and Schrems II.
  • Helping clients respond to investigations by data protection regulators in the UK, EU and globally, and advising on potential follow-on litigation risks.
  • Counseling ad networks (demand and supply side), retailers, and other adtech companies on data privacy compliance relating to programmatic advertising, and providing strategic advice on complaints and claims in a range of jurisdictions.
  • Advising life sciences companies on industry-specific data privacy issues, including:
    • clinical trials and pharmacovigilance;
    • digital health products and services; and
    • engagement with healthcare professionals and marketing programs.
  • International conflict of law issues relating to white collar investigations and data privacy compliance (collecting data from employees and others, international transfers, etc.).
  • Advising various clients on the EU NIS2 Directive and UK NIS regulations and other cybersecurity-related regulations, particularly (i) cloud computing service providers, online marketplaces, social media networks, and other digital infrastructure and service providers, and (ii) medical device and pharma companies, and other manufacturers.
  • Helping a broad range of organizations prepare for and respond to cybersecurity incidents, including personal data breaches, IP and trade secret theft, ransomware, insider threats, supply chain incidents, and state-sponsored attacks. Mark’s incident response expertise includes:
    • supervising technical investigations and providing updates to company boards and leaders;
    • advising on PR and related legal risks following an incident;
    • engaging with law enforcement and government agencies; and
    • advising on notification obligations and other legal risks, and representing clients before regulators around the world.
  • Advising clients on risks and potential liabilities in relation to corporate transactions, especially involving companies that process significant volumes of personal data (e.g., in the adtech, digital identity/anti-fraud, and social network sectors).
  • Providing strategic advice and advocacy on a range of UK and EU technology law reform issues including data privacy, cybersecurity, ecommerce, eID and trust services, and software-related proposals.
  • Representing clients in connection with references to the Court of Justice of the EU.
Sam Jungyun Choi

Recognized by Law.com International as a Rising Star (2023), Sam Jungyun Choi is an associate in the technology regulatory group in Brussels. She advises leading multinationals on European and UK data protection law and new regulations and policy relating to innovative technologies, such as AI, digital health, and autonomous vehicles.

Sam is an expert on the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act, having advised on these laws since they started to apply. In recent years, her work has evolved to include advising companies on new data and digital laws in the EU, including the AI Act, Data Act and the Digital Services Act.

Sam’s practice includes advising leading companies in the technology, life sciences and gaming sectors on regulatory, compliance and policy issues relating to privacy and data protection, digital services and AI. She advises clients on the design of new products and services, preparing privacy documentation, and developing data and AI governance programs. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.