On 19 September 2019, the European Parliamentary Research Service ("EPRS"), the European Parliament's in-house research service, released a briefing paper that summarizes the current status of the EU's approach to developing a regulatory framework for ethical AI.  Although the EPRS is not a policymaking body, its briefings can provide useful insights into the direction of EU policy on a given issue.  The paper summarizes recent calls in the EU for adopting legally binding instruments to regulate AI, in particular to set common rules on AI transparency, establish common requirements for fundamental rights impact assessments, and provide an adequate legal framework for facial recognition technology.

The briefing paper follows publication of the European Commission’s high-level expert group’s Ethics Guidelines for Trustworthy Artificial Intelligence (the “Guidelines”), and the announcement by incoming Commission President Ursula von der Leyen that she will put forward legislative proposals for a “coordinated European approach to the human and ethical implications of AI” within her first 100 days in office.

The paper starts by summarizing the Guidelines’ requirements for achieving “Trustworthy AI,” which we discuss in our previous blog post.  (See also our blog post on the pilot phase of the Guidelines and the related Policy and Investment Recommendations.)  The paper identifies several challenges that stakeholders have identified in implementing the Guidelines, including:

  • lack of clarity in the wording of the Guidelines;
  • lack of a hierarchy of principles in the Guidelines;
  • lack of regulatory oversight to monitor the implementation and enforcement of the Guidelines; and
  • potential fragmentation across Member States (noting in particular the separate national developments in France, Germany, Finland, and the UK).

The paper then discusses possible legislative actions the EU might take in the future, including: (1) further clarification of the Guidelines; (2) development of standards on ethical AI and certification frameworks for AI systems, based on industry initiatives; and (3) development of regulatory frameworks on AI.  On regulatory frameworks, the paper outlines a number of proposals for AI legislation that have been discussed, including legislation on transparency of decision-making systems (including the creation of a regulatory body for algorithmic decision-making), sector-specific legislation in the health sector, and legislation on facial recognition technology.  The paper notes that "[a] number of legally binding instruments could be adopted to translate ethical rules into hard law, and make them mandatory for the most influential AI industry players in the EU".  The paper concludes by reviewing regulatory developments on AI ethics in the U.S., China, Canada, Australia and other countries, focusing on government actions on the issue.

Marty Hansen

Martin Hansen has represented some of the world’s leading information technology, telecommunications, and pharmaceutical companies on a broad range of cutting-edge international trade, intellectual property, and competition issues. Martin has extensive experience in advising clients on matters arising under the World Trade Organization agreements, treaties administered by the World Intellectual Property Organization, bilateral and regional free trade agreements, and other trade agreements.

Drawing on ten years of experience in Covington’s London and DC offices, Martin focuses his practice on helping innovative companies resolve intellectual property and trade matters before U.S. courts, the U.S. government, and foreign governments and tribunals. Martin also represents software companies and a leading IT trade association on electronic commerce, Internet security, and online liability issues.

Sam Jungyun Choi

Sam Jungyun Choi is an associate in the technology regulatory group in the London office. Her practice focuses on European data protection law and new policies and legislation relating to innovative technologies such as artificial intelligence, online platforms, digital health products and autonomous vehicles. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

Sam advises leading technology, software and life sciences companies on a wide range of matters relating to data protection and cybersecurity issues. Her work in this area has involved advising global companies on compliance with European data protection legislation, such as the General Data Protection Regulation (GDPR), the UK Data Protection Act, the ePrivacy Directive, and related EU and global legislation. She also advises on a variety of policy developments in Europe, including providing strategic advice on EU and national initiatives relating to artificial intelligence, data sharing, digital health, and online platforms.