The European Commission, as part of the launch of its digital strategy for the next five years, published on 19 February 2020 a White Paper On Artificial Intelligence – A European approach to excellence and trust (the “White Paper”).  (See our previous blog here for a summary of all four of the main papers published by the Commission.)  The White Paper recognizes the opportunities AI presents to Europe’s digital economy, and presents the Commission’s vision for a coordinated approach to promoting the uptake of AI in the EU and addressing the risks associated with certain uses of AI.  The White Paper is open for public consultation until 19 May 2020.

Promoting the uptake of AI in the EU: An Ecosystem of Excellence

The Commission notes that, in order for Europe to seize the opportunities presented by AI, Europe will need to foster “an ecosystem of excellence” that can support the development and uptake of AI across the EU economy and public administration.  To this end, the Commission plans to take the following actions (amongst others):

  • review and update its 2018 Coordinated Plan on AI;
  • facilitate the creation of AI excellence and testing centres;
  • set up a new public-private partnership in AI, data and robotics;
  • invest in educating and upskilling the workforce to develop AI skills; and
  • promote the adoption of AI by the public sector.

Addressing the risks associated with AI: An Ecosystem of Trust

Despite the transformative potential of AI, the Commission recognizes that certain uses of AI present challenges and risks that the existing EU legislative framework may not be well-suited to address.  Although the White Paper does not set out a concrete framework for new AI legislation, it does set out the Commission’s key priorities in that regard.

A core element of the Commission’s proposals for a potential regulatory framework for AI is the introduction of a mandatory pre-marketing conformity assessment requirement that would apply to “high-risk” AI applications.  The White Paper states that an AI application would be considered high risk only if it meets the following two cumulative criteria:

  1. whether the AI application is deployed in a high-risk sector; the White Paper states that any future legislation should “specifically and exhaustively” list such sectors, and mentions healthcare, transport, energy and parts of the public sector as examples of sectors that are likely to be “high-risk”; and
  2. whether the intended use — or the manner in which it is deployed — is likely to raise significant risks for any individual or company, in particular from the viewpoint of safety, consumer rights and fundamental rights.

The Commission adds that certain types or applications of AI could be deemed high risk regardless of the sector in which they are deployed, and specifically calls out the use of AI in the recruitment and broader employment context, and remote biometric identification systems (e.g., surveillance using facial recognition technology), as two examples.  Note that, on remote biometric identification systems, the Commission intends to “launch a broad European debate on the specific circumstances, if any, which might justify such use, and on common safeguards” (p.22).

The Commission envisages that, under the mandatory pre-marketing conformity assessment, high-risk AI systems could be assessed against certain requirements, including the following:

  • Training data. Requirements to ensure that AI systems are trained on data sets that are sufficiently broad and representative to cover all relevant scenarios and do not lead to dangerous situations or outcomes entailing prohibited discrimination.
  • Keeping records and data. Requirements to keep accurate records regarding the dataset used to train and test AI systems; documentation on the programming and training techniques used to build, test and validate the AI systems; and, in some cases, the training datasets themselves.
  • Information provision. Requirements to provide notice to users when they are interacting with AI systems; requirements to provide clear information about the AI system’s capabilities, limitations, the purposes for which it is intended, the conditions under which it should function, and the expected level of accuracy in achieving the specified purpose.
  • Robustness and accuracy. Requirements to ensure that AI systems are robust and accurate, and can deal with errors or inconsistencies during all phases of their life cycles.
  • Human oversight. Requirements to ensure that there is an appropriate level of human oversight over the AI system, including for affected individuals or organizations to seek human review.  The Commission notes that the type and degree of human oversight is likely to depend on the context in which the AI system is deployed.

The Commission intends that these requirements apply to the developers, deployers, and other economic operators who are “best placed to address risks” (p. 22).  Compliance assessments—which may include testing, inspection, and certification requirements—would be undertaken by approved Member State bodies, or by bodies in third countries subject to applicable mutual recognition agreements.  Such compliance assessments could involve testing (and potential disclosure) of both the AI algorithms and the data used to train them.  The Commission also states that it plans to make any such requirements applicable to “all relevant economic operators providing AI-enabled products or services in the EU, regardless of whether they are established in the EU or not” (p.22).

In addition to the mandatory pre-marketing conformity assessment requirement for high-risk AI, the Commission proposes a voluntary labelling scheme for non-high-risk AI applications.  This scheme would allow interested suppliers of non-high-risk AI to be awarded “a quality label” for their AI applications that users can easily recognize.  Although the scheme would be entirely voluntary, once a supplier opted to use the label, its requirements would become binding.

Finally, in addition to these new proposals, the Commission intends to review the current EU product safety and liability regimes to address risks associated with products and services involving AI.  A more in-depth discussion of these issues can be found in the Commission’s Report on the safety and liability implications of AI, IoT and robotics (the “Report”), accompanying the White Paper.

The Commission notes in both the White Paper and the Report that there is an extensive body of existing EU product safety rules, including sector-specific rules, that goes some of the way to protect users with regard to certain AI applications.  The Commission nonetheless intends to explore options for reforming the existing product safety rules, including the following (amongst others):

  • requiring new risk assessments if the product is subject to important changes during its lifetime (particularly relevant for self-learning AI);
  • requiring developers to address the risks to safety of faulty data at the design stage and to ensure that the quality of data is maintained throughout the use of AI applications; and
  • requiring transparency of algorithms to ensure product safety (particularly relevant for opaque or “black-box” AI systems).

With regard to product liability rules, the Commission also notes that, while the EU’s Product Liability Directive provides a layer of protection at the EU level with regard to certain products, the fault-based liability regimes in many Member States that apply in other scenarios (e.g., to stand-alone software and services) may not be sufficient to protect those harmed by AI applications in all scenarios — particularly given that many AI applications are built through a complex supply chain.  The Commission seeks to ensure that compensation is always available for damage caused by products that are defective because of software or other digital features.  To this end, the Commission is considering pursuing an EU-level initiative to adapt the burden of proof required by national liability rules for damage caused by the operation of AI applications, and is seeking views on whether and to what extent strict liability may need to be introduced.

As stated above, the White Paper and the accompanying Report are open for public consultation until 19 May 2020.  Please contact the Covington team for a more detailed analysis of these proposals or to input into the consultation.  Stay tuned for further updates.