On July 30, 2020, the UK Information Commissioner’s Office (“ICO”) published its final guidance on Artificial Intelligence (the “Guidance”).  The Guidance sets out a framework for auditing AI systems for compliance with data protection obligations under the GDPR and the UK Data Protection Act 2018.  The Guidance builds on the ICO’s earlier commitment to enable good data protection practice in AI, and on previous guidance and blogs issued on specific issues relating to AI (for example, on explaining decisions made with AI, trade-offs, and bias and discrimination, all covered in Covington blogs).

The Guidance, which provides advice and recommendations on best practice in applying core GDPR principles to AI, will be of particular relevance to those that develop or integrate AI and/or machine learning into their public-facing products and services.  The ICO suggests that organisations should adopt a risk-based approach when evaluating AI systems.  The key takeaway is a familiar one: identification and mitigation of data protection risks at an early stage (i.e., the design stage) is likely to yield the best compliance results.

The Guidance has four parts, each dealing with the application of fundamental data protection principles to AI systems:

Part 1 – Accountability and Governance Implications

This section covers: (i) the use of data protection impact assessments (DPIAs) to identify and control the risks that AI systems may pose, (ii) understanding the relationship and distinction between controllers and processors in the AI context, and (iii) managing competing interests when assessing AI-related risks (e.g., reconciling the use of sufficient AI training data with the principle of data minimisation).

The ICO’s recommendations include (among others):

  • Organisations should carry out DPIAs where appropriate. DPIAs are also a useful tool for documenting compliance with GDPR requirements, particularly those relating to accountability and “data protection by design”.
  • Organisations should ensure that the roles of the different parties in the AI supply chain are clearly mapped at the outset. Existing ICO guidance applies, and may help to identify controller/processor relationships. The AI Guidance also gives specific examples for stakeholders in the AI ecosystem.
  • If an AI system involves trade-offs between different risks, organisations should clearly document their assessments of competing interests to an auditable standard. Organisations should also document the methodology for identifying and assessing any trade-offs they have made.

Part 2 – Lawfulness, Fairness and Transparency

This section covers: (i) application of the lawfulness, fairness and transparency principles to AI systems, and (ii) how to identify appropriate purposes and legal bases in the AI context.

The ICO’s recommendations include (among others):

  • Organisations should clearly document (i) the source of any input data, (ii) whether the outputs of the AI system are “statistically informed guesses” as opposed to facts, and (iii) any inaccurate input data or statistical flaw in the AI system that might affect the quality of the output from the AI system.
  • Because the purposes and risks of processing associated with each phase often differ, organisations should consider separate legal bases for processing personal data at each stage of the AI development and deployment process. The Guidance also includes detailed recommendations for which legal bases should be used in certain situations.

Part 3 – Assessing Security and Data Minimisation

This section covers: (i) data security issues common to AI, (ii) types of privacy attacks to which AI systems are susceptible, and (iii) compliance with the principle of data minimisation.

The ICO’s recommendations include (among others):

  • Organisations should implement effective risk management practices, including by effectively tracking and managing training data, and ensuring “pipeline” security by separating the AI development environment from the rest of the organisation’s IT system.
  • Organisations should consider applying privacy-enhancing techniques (e.g., perturbation, federated learning, and the use of synthetic data) to training data to minimise the risk of tracing back to individuals.
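To make the perturbation technique mentioned above more concrete, the sketch below adds Laplace-distributed noise to numeric training values before they are used, so that individual records are harder to trace back to data subjects. This is an illustrative example only (the function name, parameters, and noise distribution are our own assumptions); the Guidance itself does not prescribe a specific implementation, and the noise scale here is a privacy/utility trade-off the organisation would need to assess and document.

```python
import math
import random

def perturb(values, scale=1.0, seed=None):
    """Return a noisy copy of `values` for use as training data.

    Illustrative sketch of 'perturbation': each value receives
    independent Laplace(0, scale) noise. A larger `scale` gives
    more privacy but a less accurate training set.
    """
    rng = random.Random(seed)
    noisy = []
    for v in values:
        # Sample Laplace(0, scale) via inverse-CDF of a uniform draw.
        u = rng.random() - 0.5
        noise = -scale * (1 if u >= 0 else -1) * math.log(1 - 2 * abs(u))
        noisy.append(v + noise)
    return noisy
```

In practice the raw values would be discarded or access-restricted after perturbation, and the chosen scale recorded as part of the organisation’s documented risk assessment.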

Part 4 – Ensuring Data Subject Rights

This section covers: (i) fulfilling data subject rights in the context of data input and output of AI systems, and (ii) data subject rights in the context of automated decision-making.

The ICO’s recommendations include (among others):

  • Organisations should ensure that systems are in place to effectively respond to and comply with data subject rights requests. Organisations should avoid categorising data subject requests as “manifestly unfounded or excessive” simply because fulfilment of such requests is more challenging in the AI context.
  • Organisations should design AI systems to facilitate effective human review, and provide sufficient training to staff to ensure they can critically assess the outputs of, and understand the limitations of, the AI system.

The ICO will continue to develop the Guidance, along with tools “that promote privacy by design to those developing and using AI”. This would appear to include a forthcoming “toolkit” to “provide further practical support to organisations auditing the compliance of their own AI systems”. The ICO encourages organisations to provide feedback on the Guidance to make sure that it remains relevant and consistent with emerging developments. In the Guidance, the ICO also indicates that it is planning separately to revise its Cloud Computing Guidance in 2021.

The Guidance comes a few weeks after the European Commission’s High-Level Expert Group on AI published its “Assessment List for Trustworthy Artificial Intelligence,” designed to help companies identify the risks of AI systems they develop, deploy or procure, as well as appropriate mitigation measures (the subject of a Covington blog available here).

The team at Covington will continue to monitor developments in this space.