Mark Young advises clients on data protection, cybersecurity and other tech regulatory matters. He has particular expertise in product counselling, GDPR regulatory investigations, and legislative advocacy. Mr. Young leads on EU cybersecurity regulatory matters, and helps to oversee our internet enforcement team.

He has been recognized in Chambers UK as "a trusted adviser - practical, results-oriented and an expert in the field." Recent editions note that he is "deeply knowledgeable in the area of privacy and data protection," "fast, thorough and responsive," and has "great insight into the regulators."

Mr. Young has over 15 years of experience advising global companies, particularly in the technology, health and pharmaceutical sectors, on all aspects of data protection and security. This includes providing practical guidance on analyzing and using personal data, transferring personal data across borders, and potential liability exposure. He specializes in advising on new products and services, and in providing strategic advice and advocacy on a range of EU law reform issues and references to the EU Court of Justice.

For cybersecurity matters, he counsels clients on practices to protect business-critical information and comply with national and sector-specific regulation, and on preparing for and responding to cyber-based attacks and internal threats to their networks and information. He has helped a range of organizations respond to cyber and data security incidents – including external data breaches and insider theft of trade secrets – through the stages of initial detection, containment, notification, recovery and remediation.

In the IP enforcement space, Mr. Young represents rights owners in the sport, media, publishing, fashion and luxury goods industries, and helps coordinate a team of internet investigators that has nearly two decades of experience conducting global notice and takedown programs to combat internet piracy.

In addition to releasing the new EU Cybersecurity Strategy before the holidays (see our post here), the Commission published a revised Directive on measures for a high common level of cybersecurity across the Union (“NIS2”) and a Directive on the resilience of critical entities (“Critical Entities Resilience Directive”). In this blog post, we summarize key points relating to NIS2, including more onerous security and incident reporting requirements; an extension of those requirements to companies in the food, pharma, medical device, and chemical sectors, among others; and increased powers for regulators, including the ability to impose multi-million euro fines.

The Commission is seeking feedback on NIS2 and the Critical Entities Resilience Directive, and recently extended its original deadline of early February to March 11, 2021 (responses can be submitted here and here).


Continue Reading Proposed New EU Cyber Rules Introduce More Onerous Requirements and Extend to More Sectors

On 25 November 2020, the European Commission published a proposal for a Regulation on European Data Governance (“Data Governance Act”).  The proposed Act aims to facilitate data sharing across the EU and between sectors, and is one of the deliverables included in the European Strategy for Data, adopted in February 2020.  (See our previous blog here for a summary of the Commission’s European Strategy for Data.)  The press release accompanying the proposed Act states that more specific proposals on European data spaces are expected to follow in 2021, and will be complemented by a Data Act to foster business-to-business and business-to-government data sharing.

The proposed Data Governance Act sets out rules relating to the following:

  • Conditions for reuse of public sector data that is subject to existing protections, such as commercial confidentiality, intellectual property, or data protection;
  • Obligations on “providers of data sharing services,” defined as entities that provide various types of data intermediary services;
  • Introduction of the concept of “data altruism” and the possibility for organisations to register as a “Data Altruism Organisation recognised in the Union”; and
  • Establishment of a “European Data Innovation Board,” a new formal expert group chaired by the Commission.


Continue Reading The European Commission publishes a proposal for a Regulation on European Data Governance (the Data Governance Act)

The UK’s Information Commissioner’s Office (“ICO”) has issued and is consulting on draft guidance about explaining decisions made by AI.  The ICO prepared the guidance with The Alan Turing Institute, which is the UK’s national institute for data science and artificial intelligence.  Among other things, the guidance sets out key principles to follow and steps to take when explaining AI-assisted decisions — including in relation to different types of AI algorithms — and the policies and procedures that organizations should consider putting in place.

The draft guidance builds upon the ICO’s previous work in this area, including its AI Auditing Framework, June 2019 Project ExplAIN interim report, and September 2017 paper ‘Big data, artificial intelligence, machine learning and data protection’.  (Previous blog posts that track this issue are available here.)  Elements of the new draft guidance touch on points that go beyond narrow GDPR requirements, such as AI ethics (see, in particular, the recommendation to provide explanations of the fairness or societal impacts of AI systems).  Other sections of the guidance are quite technical; for example, the ICO provides its own analysis of the possible uses and interpretability of eleven specific types of AI algorithms.

Organizations that develop, test or deploy AI decision-making systems should review the draft guidance and consider responding to the consultation. The consultation is open until January 24, 2020.  A final version is expected to be published later next year.


Continue Reading UK ICO and The Alan Turing Institute Issue Draft Guidance on Explaining Decisions Made by AI

On July 25, 2019, the UK’s Information Commissioner’s Office (“ICO”) published a blog on the trade-offs between different data protection principles when using Artificial Intelligence (“AI”).  The ICO recognizes that AI systems must comply with several data protection principles and requirements, which at times may pull organizations in different directions.  The blog identifies notable trade-offs that may arise, provides some practical tips for resolving these trade-offs, and offers worked examples on visualizing and mathematically minimizing trade-offs.

The ICO invites organizations with experience of considering these complex issues to provide their views.  This recent blog post on trade-offs is part of its on-going Call for Input on developing a new framework for auditing AI.  See also our earlier blog on the ICO’s call for input on bias and discrimination in AI systems here.


Continue Reading ICO publishes blog post on AI and trade-offs between data protection principles

On June 3, 2019, the UK Information Commissioner’s Office (“ICO”) released an Interim Report on a collaboration project with The Alan Turing Institute (“Institute”) called “Project ExplAIn.” The purpose of this project, according to the ICO, is to develop “practical guidance” for organizations on complying with UK data protection law when using artificial intelligence (“AI”) decision-making systems, and in particular on explaining the impact AI decisions may have on individuals. This Interim Report may be of particular relevance to organizations considering how to meet transparency obligations when deploying AI systems that make automated decisions falling within the scope of Article 22 of the GDPR.
Continue Reading ICO’s Interim Report on Explaining AI