On July 25, 2019, the UK’s Information Commissioner’s Office (“ICO”) published a blog on the trade-offs between different data protection principles when using Artificial Intelligence (“AI”). The ICO recognizes that AI systems must comply with several data protection principles and requirements, which at times may pull organizations in different directions. The blog identifies notable trade-offs that may arise, provides some practical tips for resolving these trade-offs, and offers worked examples on visualizing and mathematically minimizing trade-offs.
The ICO invites organizations with experience in considering these complex issues to provide their views. The blog post on trade-offs forms part of the ICO's ongoing Call for Input on developing a new framework for auditing AI. See also our earlier blog on the ICO's call for input on bias and discrimination in AI systems here.
The ICO identifies the following trade-offs that may arise in AI projects:
- Accuracy vs. privacy. Large amounts of data are needed to improve the accuracy of AI systems, but this may impact the privacy rights of the individuals involved.
- Fairness vs. accuracy. Certain factors need to be removed from AI algorithms to ensure that AI systems are fair and do not discriminate against individuals on the basis of any protected characteristics (as well as known proxies, such as postcode as a proxy for race). However, this may impact the accuracy of the AI system (illustrated in the sketch following this list).
- Fairness vs. privacy. To test whether an AI system is discriminatory, organizations need data labelled with protected characteristics, but processing such data may be restricted under privacy law (i.e., under the rules on processing special category personal data).
- Explainability vs. accuracy. For complex AI systems, it may be difficult to explain the logic of the system in an easy-to-understand way that is also accurate. The ICO considers, however, that this trade-off between explainability and accuracy is often a false dichotomy. See our previous blog post on the ICO’s separate report on explaining AI for more on the topic.
- Explainability vs. security. Providing detailed explanations about the logic of an AI system may inadvertently disclose information that can be used to infer private information about the individuals whose personal data was used to build the system. The ICO recognizes that this area is under active research and that the full extent of the risks is not yet known.
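To make the fairness vs. accuracy trade-off concrete, the following Python sketch trains the same classifier with and without a hypothetical proxy feature (here called `postcode_band`, echoing the postcode example above) and compares test accuracy. This is purely illustrative and not drawn from the ICO's blog; the synthetic data, feature names, and model choice are all assumptions.

```python
# Illustrative sketch only; not from the ICO's blog.
# Shows the fairness vs. accuracy trade-off: removing a feature that
# acts as a proxy for a protected characteristic can reduce accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical data: "postcode_band" is correlated with a protected
# characteristic, which in turn is correlated with the outcome.
protected = rng.integers(0, 2, n)                  # protected characteristic
postcode_band = protected + rng.normal(0, 0.3, n)  # proxy feature
income = rng.normal(0, 1, n)                       # legitimate feature
y = (income + 0.8 * protected + rng.normal(0, 1, n) > 0).astype(int)

X_full = np.column_stack([income, postcode_band])  # includes the proxy
X_fair = income.reshape(-1, 1)                     # proxy removed

for name, X in [("with proxy", X_full), ("proxy removed", X_fair)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    acc = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: test accuracy = {acc:.3f}")
```

On this synthetic data the proxy carries genuine predictive signal, so removing it typically lowers measured accuracy, which is exactly the tension the ICO describes.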
The ICO recommends that organizations take the following steps to manage trade-offs that may arise:
- Identify and assess existing or potential trade-offs;
- Consider available technical means to minimize trade-offs;
- Have clear criteria and lines of accountability for making trade-off decisions, including a “robust, risk-based and independent approval process”;
- Explain trade-offs to data subjects or humans reviewing the AI outputs;
- Continue to regularly review trade-offs.
The ICO makes a number of additional recommendations. For example:
- Organizations should document decisions to an “auditable standard”, including, where required, by performing a Data Protection Impact Assessment. Such documentation should: (i) consider the risks to individuals’ personal data; (ii) use a methodology to identify and assess trade-offs; (iii) provide a rationale for final decisions; and (iv) explain how the decision aligns with the organization’s risk appetite.
- When outsourcing AI solutions, assessing trade-offs should form part of organizations’ due diligence of third parties. Organizations should ensure they can request that solutions be modified to strike the right balance between the trade-offs identified above.
In the final section of the blog, the ICO offers some worked examples demonstrating mathematical approaches that can help organizations visualize trade-offs and make balancing decisions. Although elements of trade-offs can be precisely quantified in some cases, the ICO recognizes that not all aspects of privacy and fairness can be fully quantified. The ICO therefore recommends that such methods should “always be supplemented with a more holistic approach”.
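The ICO's worked examples are not reproduced here, but one standard mathematical device for visualizing such trade-offs is a Pareto frontier: plot each candidate model against two scores and highlight the candidates that no other candidate beats on both axes. The sketch below is a hypothetical illustration; the candidate scores and the "privacy score" metric are invented for the example.

```python
# Illustrative sketch only; the scores below are invented.
# Plots candidate models by accuracy and privacy score, and flags
# the Pareto-optimal candidates (those no other candidate dominates).
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical candidates: columns are (accuracy, privacy score),
# where a higher privacy score means more privacy-protective.
candidates = np.array([
    [0.92, 0.20], [0.89, 0.45], [0.85, 0.60],
    [0.80, 0.75], [0.78, 0.55], [0.70, 0.90],
])

def is_pareto_optimal(points):
    """A point is Pareto-optimal if no other point is at least as good
    on both axes and strictly better on at least one."""
    optimal = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        others = np.delete(points, i, axis=0)
        dominated = np.any(
            np.all(others >= p, axis=1) & np.any(others > p, axis=1)
        )
        optimal[i] = not dominated
    return optimal

mask = is_pareto_optimal(candidates)
plt.scatter(candidates[~mask, 0], candidates[~mask, 1], label="dominated")
plt.scatter(candidates[mask, 0], candidates[mask, 1], label="Pareto-optimal")
plt.xlabel("accuracy")
plt.ylabel("privacy score")
plt.legend()
plt.show()
```

Decision-makers can then confine attention to the Pareto-optimal candidates and, as the ICO suggests, apply a more holistic assessment to choose among them.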
The ICO has published a separate blog post on the use of fully automated decision-making AI systems and the right to human intervention under the GDPR. The ICO provides practical advice for organizations on how to ensure compliance with the GDPR, such as: (i) consider the requirements necessary to support meaningful human review; (ii) provide training for human reviewers; and (iii) support and incentivize staff to escalate concerns raised by data subjects. For more information, read the ICO’s blog here.
The ICO intends to publish a formal consultation paper on the framework for auditing AI in January 2020, followed by the final AI Auditing Framework in the spring. In the meantime, the ICO welcomes feedback on its current thinking, and has provided a dedicated email address to obtain views (available at the bottom of the blog). We will continue to monitor the ICO’s developments in this area and will keep you apprised on this blog.