Go to the official publication on the EDPB website.

Adopted on 17 December 2024.

Executive summary

AI technologies create many opportunities and benefits across a wide range of sectors and social activities.

By protecting the fundamental right to data protection, the GDPR supports these opportunities and promotes other EU fundamental rights, including the right to freedom of thought, expression and information, the right to education or the freedom to conduct a business. In this way, the GDPR is a legal framework that encourages responsible innovation.

In this context, taking into account the data protection questions raised by these technologies, the Irish supervisory authority requested the EDPB to issue an opinion on matters of general application pursuant to Article 64(2) GDPR. The request relates to the processing of personal data in the context of the development and deployment phases of Artificial Intelligence (“AI”) models. In more detail, the request asked: (1) when and how an AI model can be considered ‘anonymous’; (2) how controllers can demonstrate the appropriateness of legitimate interest as a legal basis in the development phase and (3) in the deployment phase; and (4) what consequences the unlawful processing of personal data in the development phase of an AI model has on the subsequent processing or operation of that model.

With respect to the first question, the Opinion mentions that claims of an AI model’s anonymity should be assessed by competent SAs on a case-by-case basis, since the EDPB considers that AI models trained with personal data cannot, in all cases, be considered anonymous. For an AI model to be considered anonymous, both (1) the likelihood of direct (including probabilistic) extraction of personal data regarding individuals whose personal data were used to develop the model and (2) the likelihood of obtaining, intentionally or not, such personal data from queries, should be insignificant, taking into account ‘all the means reasonably likely to be used’ by the controller or another person.

To conduct their assessment, SAs should review the documentation provided by the controller to demonstrate the anonymity of the model. In that regard, the Opinion provides a non-prescriptive and non-exhaustive list of methods that may be used by controllers in their demonstration of anonymity, and thus be considered by SAs when assessing a controller’s claim of anonymity. This covers, for instance, the approaches taken by controllers, during the development phase, to prevent or limit the collection of personal data used for training, to reduce their identifiability, to prevent their extraction or to provide assurance regarding state-of-the-art resistance to attacks.
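By way of illustration only (the Opinion itself prescribes no tooling), one family of such methods can be sketched as a query-based extraction probe that estimates how often a model reproduces training records verbatim. In the minimal Python sketch below, the `query_model` interface and the canary record are hypothetical placeholders, not anything referenced by the EDPB:

```python
# Illustrative sketch only: a naive query-based extraction probe.
# `query_model` stands in for whatever inference API a controller's
# model exposes; the sample record below is fabricated.
from typing import Callable

def extraction_rate(
    query_model: Callable[[str], str],
    training_records: list[str],
    prefix_len: int = 20,
) -> float:
    """Fraction of training records whose suffix the model completes
    verbatim when prompted with the record's prefix."""
    hits = 0
    for record in training_records:
        prefix, suffix = record[:prefix_len], record[prefix_len:]
        if suffix and suffix in query_model(prefix):
            hits += 1
    return hits / len(training_records) if training_records else 0.0

if __name__ == "__main__":
    fake_records = ["Jane Doe, 12 Rue Exemple, 75001 Paris, +33 1 23 45 67 89"]
    echo_model = lambda prompt: prompt  # stand-in model that never regurgitates
    print(extraction_rate(echo_model, fake_records))  # 0.0
```

A real demonstration of anonymity would combine several such metrics (membership inference, attribute inference, regurgitation under adversarial prompting) and document the threat model behind ‘all the means reasonably likely to be used’.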

With respect to the second and third questions, the Opinion provides general considerations for SAs to take into account when assessing whether controllers can rely on legitimate interest as an appropriate legal basis for processing conducted in the context of the development and the deployment of AI models.

The Opinion recalls that there is no hierarchy between the legal bases provided by the GDPR, and that it is for controllers to identify the appropriate legal basis for their processing activities. The Opinion then recalls the three-step test that should be conducted when assessing the use of legitimate interest as a legal basis, i.e. (1) identifying the legitimate interest pursued by the controller or a third party; (2) analysing the necessity of the processing for the purposes of the legitimate interest(s) pursued (also referred to as “necessity test”); and (3) assessing that the legitimate interest(s) is (are) not overridden by the interests or fundamental rights and freedoms of the data subjects (also referred to as “balancing test”).

With respect to the first step, the Opinion recalls that an interest may be regarded as legitimate if the following three cumulative criteria are met: the interest (1) is lawful; (2) is clearly and precisely articulated; and (3) is real and present (i.e. not speculative). Such an interest may cover, for instance, developing the service of a conversational agent to assist users (in the development of an AI model) or improving threat detection in an information system (in its deployment).

With respect to the second step, the Opinion recalls that the assessment of necessity entails considering: (1) whether the processing activity will allow for the pursuit of the legitimate interest; and (2) whether there is no less intrusive way of pursuing this interest. When assessing whether the condition of necessity is met, SAs should pay particular attention to the amount of personal data processed and whether it is proportionate to pursue the legitimate interest at stake, also in light of the data minimisation principle.

With respect to the third step, the Opinion recalls that the balancing test should be conducted taking into account the specific circumstances of each case. It then provides an overview of the elements that SAs may take into account when evaluating whether the interest of a controller or a third party is overridden by the interests, fundamental rights and freedoms of data subjects.

As part of the third step, the Opinion highlights specific risks to fundamental rights that may emerge either in the development or the deployment phases of AI models. It also clarifies that the processing of personal data that takes place during the development and deployment phases of AI models may impact data subjects in different ways, which may be positive or negative. To assess such impact, SAs may consider the nature of the data processed by the models, the context of the processing and the possible further consequences of the processing.

The Opinion additionally highlights the role of data subjects’ reasonable expectations in the balancing test. This can be important due to the complexity of the technologies used in AI models and the fact that it may be difficult for data subjects to understand the variety of their potential uses, as well as the different processing activities involved. In this regard, both the information provided to data subjects and the context of the processing may be among the elements to be considered to assess whether data subjects can reasonably expect their personal data to be processed. With regard to the context, this may include: whether or not the personal data was publicly available, the nature of the relationship between the data subject and the controller (and whether a link exists between the two), the nature of the service, the context in which the personal data was collected, the source from which the data was collected (i.e., the website or service where the personal data was collected and the privacy settings they offer), the potential further uses of the model, and whether data subjects are actually aware that their personal data is online at all.

The Opinion also recalls that, when the data subjects’ interests, rights and freedoms seem to override the legitimate interest(s) being pursued by the controller or a third party, the controller may consider introducing mitigating measures to limit the impact of the processing on these data subjects. Mitigating measures should not be confused with the measures that the controller is legally required to adopt anyway to ensure compliance with the GDPR. In addition, the measures should be tailored to the circumstances of the case and the characteristics of the AI model, including its intended use. In this respect, the Opinion provides a non-exhaustive list of examples of mitigating measures in relation to the development phase (also with regard to web scraping) and the deployment phase. Mitigating measures may be subject to rapid evolution and should be tailored to the circumstances of the case. Therefore, it remains for the SAs to assess the appropriateness of the mitigating measures implemented on a case-by-case basis.
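Read together, the three steps form a cumulative test: failing any one of them rules out reliance on legitimate interest. Purely as an informal reading aid (not part of the Opinion, and no substitute for case-by-case legal analysis), the structure can be sketched as a checklist in which every field name is illustrative:

```python
# Informal checklist mirroring the three-step legitimate interest test.
# Boolean flags are a deliberate oversimplification of case-by-case
# legal assessment; this is a reading aid, not a compliance tool.
from dataclasses import dataclass

@dataclass
class LegitimateInterestAssessment:
    # Step 1: the interest is lawful, clearly and precisely articulated,
    # and real and present (not speculative).
    interest_is_lawful: bool
    interest_clearly_articulated: bool
    interest_real_and_present: bool
    # Step 2: necessity test.
    processing_achieves_interest: bool
    no_less_intrusive_alternative: bool
    # Step 3: balancing test, assessed after any mitigating measures.
    data_subject_interests_do_not_override: bool

    def may_rely_on_article_6_1_f(self) -> bool:
        step1 = (self.interest_is_lawful
                 and self.interest_clearly_articulated
                 and self.interest_real_and_present)
        step2 = (self.processing_achieves_interest
                 and self.no_less_intrusive_alternative)
        return step1 and step2 and self.data_subject_interests_do_not_override
```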

With respect to the fourth question, the Opinion generally recalls that SAs enjoy discretionary powers to assess the possible infringement(s) and choose appropriate, necessary, and proportionate measures, taking into account the circumstances of each individual case. The Opinion then considers three scenarios.

Under scenario 1, personal data is retained in the AI model (meaning that the model cannot be considered anonymous, as detailed in the first question) and is subsequently processed by the same controller (for instance in the context of the deployment of the model). The Opinion states that whether the development and deployment phases involve separate purposes (thus constituting separate processing activities) and the extent to which the lack of legal basis for the initial processing activity impacts the lawfulness of the subsequent processing, should be assessed on a case-by-case basis, depending on the context of the case.

Under scenario 2, personal data is retained in the model and is processed by another controller in the context of the deployment of the model. In this regard, the Opinion states that SAs should take into account whether the controller deploying the model conducted an appropriate assessment, as part of its accountability obligations to demonstrate compliance with Article 5(1)(a) and Article 6 GDPR, to ascertain that the AI model was not developed by unlawfully processing personal data. This assessment should take into account, for instance, the source of the personal data and whether the processing in the development phase was subject to the finding of an infringement, particularly if it was determined by a SA or a court, and should be more or less detailed depending on the risks raised by the processing in the deployment phase.

Under scenario 3, a controller unlawfully processes personal data to develop the AI model, then ensures that the model is anonymised, before the same or another controller initiates another processing of personal data in the context of the deployment. In this regard, the Opinion states that if it can be demonstrated that the subsequent operation of the AI model does not entail the processing of personal data, the EDPB considers that the GDPR would not apply. Hence, the unlawfulness of the initial processing should not impact the subsequent operation of the model. Further, the EDPB considers that, when controllers subsequently process personal data collected during the deployment phase, after the model has been anonymised, the GDPR would apply in relation to these processing operations. In these cases, the Opinion considers that, as regards the GDPR, the lawfulness of the processing carried out in the deployment phase should not be impacted by the unlawfulness of the initial processing.

The European Data Protection Board,

Having regard to Article 63 and Article 64(2) of Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (hereinafter “GDPR”),

Having regard to the EEA Agreement and in particular to Annex XI and Protocol 37 thereof, as amended by the Decision of the EEA Joint Committee No 154/2018 of 6 July 2018,

Having regard to Article 10 and Article 22 of its Rules of Procedure,

Whereas:

(1) The main role of the European Data Protection Board (hereafter the “Board” or the “EDPB”) is to ensure the consistent application of the GDPR throughout the European Economic Area (“EEA”). Article 64(2) GDPR provides that any supervisory authority (“SA”), the Chair of the Board or the Commission may request that any matter of general application or producing effects in more than one EEA Member State be examined by the Board with a view to obtaining an opinion. The aim of this opinion is to examine a matter of general application or one which produces effects in more than one EEA Member State.

(2) The opinion of the Board shall be adopted pursuant to Article 64(3) GDPR in conjunction with Article 10(2) of the EDPB Rules of Procedure within eight weeks from when the Chair and the competent supervisory authority have decided that the file is complete. Upon decision of the Chair, this period may be extended by a further six weeks taking into account the complexity of the subject matter.


In alignment with the ongoing concerns of several European data protection authorities that have published guidelines on data scraping (i.e., the Dutch DPA, the Italian DPA and the UK Information Commissioner’s Office), the Global Privacy Assembly (GPA)’s International Enforcement Cooperation Working Group (IEWG) recently published a Joint statement on data scraping and the protection of privacy (signed by the Canadian, British, Australian, Swiss, Norwegian, Moroccan, Mexican, and Jersey data protection authorities) to provide further input for businesses when considering data scraping.

The statement emphasizes that:

Even publicly accessible data is subject to privacy laws across most jurisdictions, meaning that scraping activities must comply with data protection regulations requiring (i) a lawful basis for data collection and (ii) transparency with individuals, including obtaining consent where necessary.

The mass collection of data can constitute a reportable data breach where it involves unauthorized access to personal data.

Relying on platform terms (e.g., Instagram) for data scraping does not automatically ensure compliance, as (i) contractually authorized use of scraped personal data is not automatically compliant with data protection and artificial intelligence (AI) laws, and (ii) it is difficult to determine whether scraped data is used solely for purposes allowed by the contract terms.

When training AI models, it is critical to adhere not only to privacy regulations but also to emerging AI laws, as privacy regulators increasingly expect AI model transparency and limitations on data processing.

The sensitivity of this topic underscores the close relationship between data protection and the ever-data-hungry artificial intelligence industry.

First Publication on K&L Gates Cyber Law Watch blog, in collaboration with Anna Gaentzhirt

  1. My company is not established in the EU. Should I really worry about the EU Data Act applying to my company?
  2. What are the operational impacts of the EU Data Act on my products’ interface?
  3. My products are already on the market. Can I still provide them as they are today?
  4. What data is in the EU Data Act scope?
  5. Does the EU Data Act provide for a harmonized framework for blockchain-based smart contracts?
  6. Who can request the sharing of data?
  7. How should data be made available?
  8. Are there any limitations on how the data can be shared?
  9. Can I invoke intellectual property rights to forgo the data sharing?
  10. Should the data be made available to public entities as well?
  11. Will I need to update my contracts as well?
  12. Will the data be required to stay in the European Union?
  13. When will all this become an operational reality for me?
  14. What are the EU Data Act penalties?

Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)

(Text with EEA relevance)


A Practice Note highlighting issues to consider when counseling a prospective buyer of an AI company. This Note discusses the primary due diligence issues relating to AI and machine learning (ML) and strategies to mitigate or allocate risks in the context of an M&A transaction. This Note is also helpful for AI company targets that seek to anticipate potential issues. In this Note, the term AI company refers to a company involved in the research, development, or monetization of a product or service that is primarily powered by an ML algorithm or model that creates functionality or utility through the use of AI.

Read the full article on Practical Law, written in collaboration with Annette Becker, Alex V. Imas, Jake Bernstein, Mark H. Wittow, Melanie Bruneau, Marion Baumann, Kenneth S. Knox, Julie F. Rizzo, Cameron Abbott, Thomas Nietsch, and Nicole H. Buckley.

K&L Gates LLP covers a myriad of IT and internet issues, from GDPR compliance to contract negotiation. The firm is notable for its expertise in IP and data protection matters, as well as, increasingly, AI, NFT and blockchain issues. The practice is led by Claude-Etienne Armingaud, who is dual-qualified in France and the US and is consequently well placed to handle multi-jurisdictional transactions.

Practice head(s): Claude-Etienne Armingaud


Part IV of our series “Regulating AI: The Potential Impact of Global Regulation of Artificial Intelligence” will focus on recent developments in the general availability of AI and how generative AI solutions are leading regulators, at a global level, to consider legal frameworks to protect both individuals affected by AI and digital sovereignty.

The program will feature a panel addressing the EU AI Act, on which a preliminary political agreement was reached last December and unanimously approved by the ambassadors of the 27 countries of the European Union on 2 February 2024, prior to its upcoming final votes.

Like the GDPR before it, the EU AI Act will be a trailblazing piece of legislation which will impact companies at a global level.

Our panelists will discuss the consequences of the EU AI Act on companies contemplating the provision of AI solutions in the EU market or leveraging AI in the EU, with a special focus on non-EU companies.

Additional topics in our Regulating AI — The Potential Impact of Global Regulation of Artificial Intelligence series include:  

  • Part I – 13 September 2023 (EU / U.K.) – View Recording
  • Part II – 7 December 2023 (Asia-Pacific Region: China, Hong Kong, Singapore, Japan) – View Recording
  • Part III – 12 December 2023 (United States)

Register or watch the replay here.

Access the full text of the EU AI Act here.

The Information Commissioner’s Office (ICO) recently launched a consultation series on how data protection laws should apply to the development and use of generative AI models (“Gen AI”). In the coming months, the ICO will publish further views on how to interpret specific requirements of UK GDPR and Part 2 of the DPA 2018 in relation to Gen AI. This first part of the consultation focusses on whether it is lawful to train Gen AI on personal data scraped from the web. The consultation seeks feedback from stakeholders with an interest in Gen AI.

As outlined by the ICO, web scraping will involve the collection and processing of personal data, which may not have been placed online directly by the data subjects themselves. To comply with the UK GDPR, Gen AI developers would need to ensure there is a valid lawful basis for their processing under UK GDPR, as well as comply with the relevant information requirements pertaining to indirect personal data collection.
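The ICO does not prescribe any particular tooling, but developers commonly pair this legal analysis with technical signals of a website operator’s wishes. As a purely illustrative sketch using Python’s standard library, the hypothetical helper below (with a made-up crawler name) checks a site’s robots.txt before collection; honouring robots.txt is of course not itself a lawful basis under the UK GDPR, merely one signal relevant to data subjects’ reasonable expectations:

```python
# Illustrative pre-collection check: does robots.txt permit fetching?
from urllib import robotparser
from urllib.parse import urlsplit

def may_fetch(url: str, user_agent: str = "example-gen-ai-crawler") -> bool:
    """Return True if the site's robots.txt allows `user_agent` to fetch `url`."""
    parts = urlsplit(url)
    rp = robotparser.RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # fetches robots.txt; network errors propagate to the caller
    return rp.can_fetch(user_agent, url)

if __name__ == "__main__":
    print(may_fetch("https://example.com/profiles/jane-doe"))
```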

For the first part of the consultation series, the ICO published a policy position on the lawful basis for training Gen AI models on web-scraped data, which can be found here. More specifically, this consultation focusses on the ‘legitimate interest’ lawful basis under art. 6(1)(f) UK GDPR and the ‘three-part’ test that a data controller must pass to meet the legitimate interest basis (a so-called Legitimate Interest Assessment). The ICO has considered various actions that Gen AI developers could take to meet this three-part legitimate interest test to guarantee that the collection of training data through web scraping, i.e. the processing of data, is compliant with the principles of UK GDPR. The ICO would now like to hear from relevant stakeholders on their views of the proposed regulatory approach and the impact it would have on their organisation. A link to the survey can be found here.

The deadline to submit a response is 1 March 2024.

First publication: K&L Gates Cyber Law Watch blog with Sophie Verstraeten

Join our webinar as we explore the implications of the EU AI Act.

Featured speakers

Yücel Hamzaoğlu​

Partner
HHK Legal

Melike Hamzaoğlu

Partner
HHK Legal

Claude-Étienne Armingaud​

Partner
K&L Gates

Noshin Khan​

Ethics & Compliance, Associate Director
OneTrust​

Harry Chambers

Senior Privacy Analyst
OneTrust

Register here.

Quoted in the Agenda article “New EU AI Rules Will Have Global Impact”:

The scope of the EU AI Act will apply to all companies whose AI systems are used or affect EU-based individuals, according to Claude-Etienne Armingaud, a partner in K&L Gates’ Paris office and a member of the law firm’s technology transactions and sourcing practice group.

Due to its breadth, global companies developing AI systems, most of which are headquartered either in the U.S. or in China, will face two options: “Get in line with the EU AI Act or abstain from the EU market,” Armingaud said.

Some companies threatened to exit the European market after the EU’s General Data Protection Regulation, or GDPR, became effective in 2018, but many didn’t actually follow through, according to Armingaud.

“So, without a doubt, all companies dabbling in AI will need to comply if they truly want to remain global,” he said.

Agenda – New EU AI Rules Will Have Global Impact