The European Commission, as part of the launch of its digital strategy for the next five years, published on 19 February 2020 a White Paper On Artificial Intelligence – A European approach to excellence and trust (the “White Paper”).  (See our previous blog here for a summary of all four of the main papers published by the Commission.)  The White Paper recognizes the opportunities AI presents to Europe’s digital economy, and presents the Commission’s vision for a coordinated approach to promoting the uptake of AI in the EU and addressing the risks associated with certain uses of AI.  The White Paper is open for public consultation until 19 May 2020.

Promoting the uptake of AI in the EU: An Ecosystem of Excellence

The Commission notes that, in order for Europe to seize the opportunities presented by AI, Europe will need to foster “an ecosystem of excellence” that can support the development and uptake of AI across the EU economy and public administration.  To this end, the Commission plans to take the following actions (amongst others):

  • review and update its 2018 Coordinated Plan on AI;
  • facilitate the creation of AI excellence and testing centres;
  • set up a new public-private partnership in AI, data and robotics;
  • invest in educating and upskilling the workforce to develop AI skills; and
  • promote the adoption of AI by the public sector.

Addressing the risks associated with AI: An Ecosystem of Trust

Despite the transformative potential of AI, the Commission recognizes that certain uses of AI present challenges and risks that the existing EU legislative framework may not be well-suited to address.  Although the White Paper does not set out a concrete framework for new AI legislation, it does set out the Commission’s key priorities in that regard.

A core element of the Commission’s proposals for a potential regulatory framework for AI is the introduction of a mandatory pre-marketing conformity assessment requirement that would apply to “high-risk” AI applications.  The White Paper states that an AI application would be considered high risk only if it meets the following two cumulative criteria:

  1. whether the AI application is deployed in a high-risk sector; the White Paper states that any future legislation should “specifically and exhaustively” list such sectors, and mentions healthcare, transport, energy and parts of the public sector as examples of sectors likely to be “high-risk”; and
  2. whether the intended use, or the manner in which the AI application is deployed, is likely to raise significant risks for any individual or company, in particular from the viewpoint of safety, consumer rights and fundamental rights.

The Commission adds that certain types or applications of AI could be deemed high risk regardless of the sector in which they are deployed, and specifically calls out two examples: the use of AI in the recruitment and broader employment context, and remote biometric identification systems (e.g., surveillance using facial recognition technology).  Note that, on remote biometric identification systems, the Commission intends to “launch a broad European debate on the specific circumstances, if any, which might justify such use, and on common safeguards” (p.22).

The Commission envisages that, under the mandatory pre-marketing conformity assessment, high-risk AI systems could be assessed on certain requirements, including the following:

  • Training data. Requirements to ensure that AI systems are trained on data sets that are sufficiently broad and representative to cover all relevant scenarios and do not lead to dangerous situations or outcomes entailing prohibited discrimination.
  • Keeping records and data. Requirements to keep accurate records regarding the dataset used to train and test AI systems; documentation on the programming and training techniques used to build, test and validate the AI systems; and, in some cases, the training datasets themselves.
  • Information provision. Requirements to provide notice to users when they are interacting with AI systems; requirements to provide clear information about the AI system’s capabilities, limitations, the purposes for which it is intended, the conditions under which it should function, and the expected level of accuracy in achieving the specified purpose.
  • Robustness and accuracy. Requirements to ensure that AI systems are robust, accurate, and able to deal with errors or inconsistencies during all phases of their life cycle.
  • Human oversight. Requirements to ensure that there is an appropriate level of human oversight over the AI system, including for affected individuals or organizations to seek human review.  The Commission notes that the type and degree of human oversight is likely to depend on the context in which the AI system is deployed.

The Commission intends for these requirements to apply to the developers, deployers, and other economic operators who are “best placed to address risks” (p. 22).  Compliance assessments—which may include testing, inspection, and certification requirements—would be undertaken by approved Member State bodies, or by bodies in third countries subject to applicable mutual recognition agreements.  Such compliance assessments could involve testing (and potential disclosure) of both the AI algorithms and the data used to train them.  The Commission also states that it plans to make any such requirements applicable to “all relevant economic operators providing AI-enabled products or services in the EU, regardless of whether they are established in the EU or not” (p.22).

In addition to the mandatory pre-marketing conformity assessment requirement for high-risk AI, the Commission proposes a voluntary labelling scheme for non-high-risk AI applications. This scheme would allow interested suppliers of non-high-risk AI to be awarded “a quality label” for their AI applications that users can easily recognize.  Although the scheme is entirely voluntary, once a supplier opts to use the label, its requirements become binding.

Finally, in addition to these new proposals, the Commission intends to review the current EU product safety and liability regimes to address risks associated with products and services involving AI.  A more in-depth discussion of these issues can be found in the Commission’s Report on the safety and liability implications of AI, IoT and robotics (the “Report”), accompanying the White Paper.

The Commission notes in both the White Paper and the Report that there is an extensive body of existing EU product safety rules, including sector-specific rules, that goes some way toward protecting users of certain AI applications.  The Commission nonetheless intends to explore options for reforming the existing product safety rules, including the following (amongst others):

  • requiring new risk assessments if a product undergoes important changes during its lifetime (particularly relevant for self-learning AI);
  • requiring developers to address the safety risks posed by faulty data at the design stage, and to ensure that data quality is maintained throughout the use of AI applications; and
  • requiring transparency of algorithms to ensure product safety (particularly relevant for opaque or “black-box” AI systems).

With regard to product liability rules, the Commission also notes that, while the EU’s Product Liability Directive provides a layer of protection at the EU level with regard to certain products, the fault-based liability regimes in many Member States that apply in other scenarios (e.g., to stand-alone software and services) may not be sufficient to protect those harmed by AI applications in all scenarios — particularly given that many AI applications are built through a complex supply chain.  The Commission seeks to ensure that compensation is always available for damage caused by products that are defective because of software or other digital features.  To this end, the Commission is considering pursuing an EU-level initiative to adapt the burden of proof required by national liability rules for damage caused by the operation of AI applications, and is seeking views on whether and to what extent strict liability may need to be introduced.

As stated above, the White Paper and the accompanying Report are open for public consultation until 19 May 2020.  Please contact the Covington team for a more detailed analysis of these proposals or to input into the consultation.  Stay tuned for further updates.

Lisa Peets

Lisa Peets is co-chair of the firm’s Technology and Communications Regulation Practice Group and a member of the firm’s global Management Committee. Lisa divides her time between London and Brussels, and her practice encompasses regulatory compliance and investigations alongside legislative advocacy. For more than two decades, she has worked closely with many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU and UK legal frameworks affecting technology providers, including data protection, content moderation, artificial intelligence, platform regulation, copyright, e-commerce and consumer protection, and the rapidly expanding universe of additional rules applicable to technology, data and online services.

Lisa also supports Covington’s disputes team in litigation involving technology providers.

According to Chambers UK (2024 edition), “Lisa provides an excellent service and familiarity with client needs.”

Marty Hansen

Martin Hansen has over two decades of experience representing some of the world’s leading innovative companies in the internet, IT, e-commerce, and life sciences sectors on a broad range of regulatory, intellectual property, and competition issues, including related to artificial intelligence. Martin has extensive experience in advising clients on matters arising under EU and U.S. law, UK law, the World Trade Organization agreements, and other trade agreements.

Sam Jungyun Choi

Recognized by Law.com International as a Rising Star (2023), Sam Jungyun Choi is an associate in the technology regulatory group in Brussels. She advises leading multinationals on European and UK data protection law and new regulations and policy relating to innovative technologies, such as AI, digital health, and autonomous vehicles.

Sam is an expert on the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act, having advised on these laws since they started to apply. In recent years, her work has evolved to include advising companies on new data and digital laws in the EU, including the AI Act, Data Act and the Digital Services Act.

Sam’s practice includes advising on regulatory, compliance and policy issues that affect leading companies in the technology, life sciences and gaming sectors on laws relating to privacy and data protection, digital services and AI. She advises clients on the design of new products and services, preparing privacy documentation, and developing data and AI governance programs. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

Anna Oberschelp de Meneses

Anna Sophia Oberschelp de Meneses is an associate in the Data Privacy and Cybersecurity Practice Group.

Anna is a qualified Portuguese lawyer, but is both a native Portuguese and German speaker.

Anna advises companies on European data protection law and helps clients coordinate international data protection law projects.

She has obtained a certificate for “corporate data protection officer” by the German Association for Data Protection and Data Security (“Gesellschaft für Datenschutz und Datensicherheit e.V.”). She is also Certified Information Privacy Professional Europe (CIPPE/EU) by the International Association of Privacy Professionals (IAPP).

Anna also advises companies in the field of EU consumer law and has been closely tracking the developments in this area.

Her extensive language skills allow her to monitor developments and help clients tackle EU Data Privacy, Cybersecurity and Consumer Law issues in various EU and ROW jurisdictions.