The UK’s Information Commissioner’s Office (“ICO”) has issued and is consulting on draft guidance about explaining decisions made by AI. The ICO prepared the guidance with The Alan Turing Institute, which is the UK’s national institute for data science and artificial intelligence. Among other things, the guidance sets out key principles to follow and steps to take when explaining AI-assisted decisions — including in relation to different types of AI algorithms — and the policies and procedures that organizations should consider putting in place.
The draft guidance builds upon the ICO’s previous work in this area, including its AI Auditing Framework, June 2019 Project ExplAIN interim report, and September 2017 paper ‘Big data, artificial intelligence, machine learning and data protection’. (Previous blog posts that track this issue are available here.) Elements of the new draft guidance touch on points that go beyond narrow GDPR requirements, such as AI ethics (see, in particular, the recommendation to provide explanations of the fairness or societal impacts of AI systems). Other sections of the guidance are quite technical; for example, the ICO provides its own analysis of the possible uses and interpretability of eleven specific types of AI algorithms.
Organizations that develop, test or deploy AI decision-making systems should review the draft guidance and consider responding to the consultation. The consultation is open until January 24, 2020. A final version is expected to be published later next year.
The draft guidance focuses on how organizations that develop, test or deploy AI systems should explain automated decisions about individuals that produce a legal or similarly significant effect (within the meaning of GDPR Art. 22). Although this scope is quite specific, the guidance may influence broader discussions on AI transparency and explainability in the UK and at the EU level.
The draft guidance is presented in three separate parts:
Part 1. The basics of explaining AI
This first part notes that the GDPR requires organizations to explain AI-assisted decisions to individuals where such decisions are made without human involvement and produce legal or similarly significant effects on individuals (citing GDPR Articles 22, 13, 14 and 15).
The ICO sets out four key principles, guided by the GDPR, for explaining AI decision-making systems, and identifies the different types of explanations that organizations should provide to individuals, as set out below.
- Be transparent: Organizations should make it obvious that AI is being used to make decisions and explain the decisions to individuals in a meaningful way. This means providing:
- an explanation of the reasons that led to a decision, delivered in an accessible and non-technical way (a rationale explanation); and
- an explanation about the data that has been used and how it has been used to (i) come to a particular decision and (ii) train and test the AI model (a data explanation).
- Be accountable: Organizations should ensure appropriate oversight of AI decision systems, and be answerable to others. This means providing:
- an explanation of who is involved in, and responsible for, developing, managing and implementing an AI system within the relevant organization, and who to contact for a human review of a decision (a responsibility explanation).
- Consider context: The guidance recognizes that there is no one-size-fits-all approach to explaining AI-assisted decisions. When considering how to explain decisions, organizations should take into account the sector, the particular use case and the impact of the AI system on the individual.
- Reflect on impacts: The ICO encourages organizations to ask and answer questions about ethical purposes and objectives at the initial stages of AI projects. Organizations should explain the steps that they take during the design and implementation of an AI system to:
- mitigate risks of unfair bias and discrimination, and to ensure that individuals are being treated equitably (a fairness explanation); and
- maximise the accuracy, reliability, security and robustness of the system's decisions and behaviours (a safety and performance explanation).
Organizations should also explain the impact that the use of an AI system and its decisions have, or may have, on an individual and on wider society (an impact explanation).
Part 2. Explaining AI in practice
The second part of the draft guidance sets out practical steps that organizations can take to explain AI-assisted decisions and provide explanations to individuals. The ICO stresses that different approaches may be appropriate for different applications of AI and depending on the context in which they are used.
To help with this exercise, the ICO provides checklists of questions, as well as technical guidance on specific AI models, for organizations to take into account when developing different types of explanations.
The ICO specifically calls out “black box” or opaque AI systems, which it understands to be any AI system whose inner workings and rationale are opaque or inaccessible to human understanding (e.g., neural networks, ensemble methods, and support vector machines). The ICO suggests that these should only be used (i) if organizations have thoroughly considered their potential impacts and risks in advance; and (ii) if supplemental tools to interpret such systems are available to provide affected individuals with meaningful information.
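By way of illustration only (this sketch is not drawn from the guidance itself), one common class of supplemental interpretability tool is post-hoc feature importance. The short example below assumes a Python environment with scikit-learn and uses a synthetic dataset with hypothetical features; it estimates how much each input contributes to the decisions of an otherwise opaque support vector machine using permutation importance.

```python
# Illustrative sketch only: a post-hoc interpretability tool applied to a
# "black box" model. The dataset and features are synthetic/hypothetical.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.inspection import permutation_importance

# Synthetic stand-in for the data behind an AI-assisted decision system.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque classifier of the kind the ICO flags (a support vector machine).
model = SVC(kernel="rbf").fit(X_train, y_train)

# Permutation importance estimates how much each input feature contributes
# to the model's decisions; that information could feed into a rationale
# explanation for affected individuals.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean, result.importances_std)):
    print(f"feature {i}: importance {mean:.3f} +/- {std:.3f}")
```

In practice, an organization would apply a tool of this kind to its production model and real input data, and then translate the output into the accessible, non-technical language that a rationale explanation requires.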
The ICO also provides its own analysis of the possible uses and interpretability of eleven different types of AI algorithms (e.g., linear regression, decision trees, support vector machines and artificial neural networks).
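As a rough illustration of the interpretability spectrum the ICO analyses (again, not an example taken from the guidance), an inherently interpretable model such as linear regression exposes its reasoning directly through its learned coefficients, with no supplemental tooling needed. The feature names and figures below are hypothetical.

```python
# Illustrative sketch only: an inherently interpretable model. The feature
# names and training data are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

feature_names = ["annual_income", "years_employed", "existing_debt"]
X = np.array([
    [30_000.0, 2.0, 5_000.0],
    [55_000.0, 6.0, 1_000.0],
    [42_000.0, 4.0, 8_000.0],
    [75_000.0, 10.0, 2_000.0],
])
y = np.array([0.40, 0.90, 0.30, 0.95])  # hypothetical outcome scores

model = LinearRegression().fit(X, y)

# Each learned coefficient states, in the model's own terms, how a unit
# change in that feature moves the predicted outcome.
for name, coef in zip(feature_names, model.coef_):
    print(f"{name}: weight {coef:+.6f}")
print(f"intercept: {model.intercept_:+.6f}")
```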
This part of the guidance provides examples of how organizations can select the appropriate types of explanations to prioritize depending on context, covering AI-assisted recruitment and AI-assisted medical diagnosis. Annex I to Part 2 of the draft guidance also contains a step-by-step example of building an explanation for an AI-assisted cancer diagnosis tool.
Part 3. What explaining AI means for your organization
In the third and final part of the draft guidance, the ICO explains the various roles, policies, procedures and documentation that organizations could put in place to ensure that they are in a position to provide meaningful explanations to individuals. The draft guidance notes that anyone involved in the decision-making pipeline has a role to play in providing an explanation of an AI system. The ICO recommends that organizations create new policies and procedures, or update existing ones, to codify the roles and responsibilities for explaining AI systems, including in relation to data collection, model selection, explanation extraction/delivery and impact assessment, amongst others.