AI Auditing Framework

The UK’s Information Commissioner’s Office (“ICO”) has issued and is consulting on draft guidance about explaining decisions made by AI.  The ICO prepared the guidance with The Alan Turing Institute, which is the UK’s national institute for data science and artificial intelligence.  Among other things, the guidance sets out key principles to follow and steps to take when explaining AI-assisted decisions — including in relation to different types of AI algorithms — and the policies and procedures that organizations should consider putting in place.

The draft guidance builds upon the ICO’s previous work in this area, including its AI Auditing Framework, June 2019 Project ExplAIN interim report, and September 2017 paper ‘Big data, artificial intelligence, machine learning and data protection’.  (Previous blog posts that track this issue are available here.)  Elements of the new draft guidance touch on points that go beyond narrow GDPR requirements, such as AI ethics (see, in particular, the recommendation to provide explanations of the fairness or societal impacts of AI systems).  Other sections of the guidance are quite technical; for example, the ICO provides its own analysis of the possible uses and interpretability of eleven specific types of AI algorithms.

Organizations that develop, test or deploy AI decision-making systems should review the draft guidance and consider responding to the consultation. The consultation is open until January 24, 2020. A final version is expected to be published later next year.

On June 25, 2019, as part of its continuing work on the AI Auditing Framework, the UK Information Commissioner's Office (ICO) published a blog setting out its views on human bias and discrimination in AI systems. The ICO has also called for input on specific questions relating to bias and discrimination in these systems.