Artificial Intelligence (AI)

As we anticipated in a previous blog post, on April 22, 2020, the European Data Protection Board (“EDPB”) issued new guidelines on the use of location data and contact tracing apps in the context of the present COVID-19 pandemic.

The EDPB’s new guidelines complement and build on similar guidance previously issued by the Board itself (see here, here and here), and by the European Commission (see our blog post here).

The EDPB’s close scrutiny of the use of mobile data and apps during the ongoing public health crisis is unsurprising: many EU Member States have launched, or are in the process of launching, contact tracing apps to fight the spread of the virus, and these initiatives are attracting significant attention from data privacy authorities and the general public (see our blog post here).

The guidelines aim to clarify the data protection conditions and principles that should be followed when:

  • using location data to model the spread of the virus to assess the overall effectiveness of confinement measures; and
  • using contact tracing apps, which aim to notify individuals who may have been in close proximity to someone who is infected or confirmed as a carrier of the virus, in order to break the contamination chain as early as possible.

The EDPB stresses that EU data protection rules have been designed to be flexible and, as such, do not stand in the way of an efficient response to the pandemic.  However, it notes that governments and private actors should be mindful of a number of considerations when they use data-driven solutions in response to the COVID-19 outbreak.

Continue Reading EDPB Issues New Guidance on the Use of Location Data and Contact Tracing in the Context of the COVID-19 Outbreak

On April 8, 2020, the Federal Trade Commission (“FTC”) released a blog post about the use of artificial intelligence (“AI”) and algorithms in automated decisionmaking. The blog highlighted the potentially great benefits and risks presented by increasingly sophisticated technologies, particularly in the “Health AI” space. However, it also emphasized that automated decisionmaking is not a …

The rapid spread of COVID-19, along with the effectiveness of existing public health response plans and the impacts of social distancing on the economy, has raised the question of how new technology can be used to address and manage the pandemic. On April 1, 2020, the Stanford Institute for Human-Centered Artificial Intelligence hosted “COVID-19 and AI: A Virtual Conference” to explore the potential applications of artificial intelligence (“AI”) in diagnostics and treatment, epidemiological tracking and forecasting of the spread of COVID-19, and the pandemic’s impacts on the economy, culture, and human behavior.

Continue Reading AI Update: Using Artificial Intelligence to Combat COVID-19

The COVID-19 crisis is demonstrating the potential of digital health technology to manage some of our greatest public health challenges.  The White House Office of Science and Technology Policy has issued a call to action for technology companies to help the science community answer high-priority scientific questions related to COVID-19.  The Centers for Disease Control …

On February 10, 2020, the UK Government’s Committee on Standards in Public Life (the “Committee”) published its Report on Artificial Intelligence and Public Standards (the “Report”). The Report examines potential opportunities and hurdles in the deployment of AI in the public sector, including how such deployment may implicate the “Seven Principles of Public Life” applicable to holders of public office, also known as the “Nolan Principles” (available here). It also sets out practical recommendations for the use of AI in public services, which will be of interest to companies supplying AI technologies to the public sector (including the UK National Health Service (“NHS”)), or offering public services directly to UK citizens on behalf of the UK Government. The Report elaborates on the UK Government’s June 2019 Guide to using AI in the public sector (see our previous blog here).

Continue Reading UK Government’s Advisory Committee Publishes Report on Public Sector Use of AI

In this final instalment of our series of blogs on the European Commission’s plans for AI and data, announced on 19 February 2020, we discuss some potential effects on companies in the digital health sector. As discussed in our previous blog posts (here, here and here), the papers published by the European Commission cover broad concepts and apply generally — but, in places, they specifically mention healthcare and medical devices.

The Commission recognizes the important role that AI and big data analysis can play in improving healthcare, but also notes the specific risks that could arise given the effects that such new technologies may have on individuals’ health, safety, and fundamental rights. The Commission also notes that existing EU legislation already affords a high level of protection for individuals, including through medical devices laws and data protection laws. The Commission’s proposals therefore focus on addressing the gap between these existing rules and the residual risks that remain in respect of new technologies. Note that the Commission’s proposals in the White Paper on AI are open for public consultation until 19 May 2020.

Continue Reading European Commission’s Plans for AI and Data: Focus on Digital Health (Part 4 of 4)

On 19 February 2020, the new European Commission published two Communications relating to its five-year digital strategy: one on shaping Europe’s digital future, and one on its European strategy for data (the Commission also published a white paper proposing its strategy on AI; see our previous blogs here and here).  In both Communications, the Commission sets out a vision of the EU powered by digital solutions that are strongly rooted in European values and EU fundamental rights.  Both Communications also emphasize the intent to strengthen “European technological sovereignty”, which in the Commission’s view will enable the EU to define its own rules and values in the digital age.  The Communications set out the Commission’s plans to achieve this vision.

Continue Reading European Commission’s plans on data and Europe’s digital future (Part 3 of 4)

The European Commission, as part of the launch of its digital strategy for the next five years, published on 19 February 2020 a White Paper On Artificial Intelligence – A European approach to excellence and trust (the “White Paper”).  (See our previous blog here for a summary of all four of the main papers published by the Commission.)  The White Paper recognizes the opportunities AI presents to Europe’s digital economy, and presents the Commission’s vision for a coordinated approach to promoting the uptake of AI in the EU and addressing the risks associated with certain uses of AI.  The White Paper is open for public consultation until 19 May 2020.

Continue Reading European Commission’s White Paper on Artificial Intelligence (Part 2 of 4)

On 19 February 2020, the European Commission presented its long-awaited strategies for data and AI.  These follow Commission President Ursula von der Leyen’s commitment upon taking office to put forward legislative proposals for a “coordinated European approach to the human and ethical implications of AI” within the new Commission’s first 100 days.  Although the papers published this week do not set out a comprehensive EU legal framework for AI, they do give a clear indication of the Commission’s key priorities and anticipated next steps.

The Commission strategies are set out in four separate papers—two on AI, and one each on Europe’s digital future and the data economy.  Read together, it is clear that the Commission seeks to position the EU as a digital leader, both in terms of trustworthy AI and the wider data economy.

Continue Reading European Commission Presents Strategies for Data and AI (Part 1 of 4)

The UK’s Information Commissioner’s Office (“ICO”) has issued and is consulting on draft guidance about explaining decisions made by AI.  The ICO prepared the guidance with The Alan Turing Institute, which is the UK’s national institute for data science and artificial intelligence.  Among other things, the guidance sets out key principles to follow and steps to take when explaining AI-assisted decisions — including in relation to different types of AI algorithms — and the policies and procedures that organizations should consider putting in place.

The draft guidance builds upon the ICO’s previous work in this area, including its AI Auditing Framework, June 2019 Project ExplAIN interim report, and September 2017 paper ‘Big data, artificial intelligence, machine learning and data protection’.  (Previous blog posts that track this issue are available here.)  Elements of the new draft guidance touch on points that go beyond narrow GDPR requirements, such as AI ethics (see, in particular, the recommendation to provide explanations of the fairness or societal impacts of AI systems).  Other sections of the guidance are quite technical; for example, the ICO provides its own analysis of the possible uses and interpretability of eleven specific types of AI algorithms.

Organizations that develop, test or deploy AI decision-making systems should review the draft guidance and consider responding to the consultation.  The consultation is open until January 24, 2020, and a final version of the guidance is expected to be published later next year.

Continue Reading UK ICO and The Alan Turing Institute Issue Draft Guidance on Explaining Decisions Made by AI