Go to official publication on EDPB website.

Adopted on 17 December 2024.

Executive summary

AI technologies create many opportunities and benefits across a wide range of sectors and social activities.

By protecting the fundamental right to data protection, GDPR supports these opportunities and promotes other EU fundamental rights, including the right to freedom of thought, expression and information, the right to education or the freedom to conduct a business. In this way, GDPR is a legal framework that encourages responsible innovation.

In this context, taking into account the data protection questions raised by these technologies, the Irish supervisory authority requested the EDPB to issue an opinion on matters of general application pursuant to Article 64(2) GDPR. The request relates to the processing of personal data in the context of the development and deployment phases of Artificial Intelligence (“AI”) models. In more details, the request asked: (1) when and how an AI model can be considered as ‘anonymous’; (2) how controllers can demonstrate the appropriateness of legitimate interest as a legal basis in the development and (3) deployment phases; and (4) what are the consequences of the unlawful processing of personal data in the development phase of an AI model on the subsequent processing or operation of the AI model.

With respect to the first question, the Opinion mentionsthat claims of an AI model’s anonymity should be assessed by competent SAs on a case-by-case basis, since the EDPB considers that AI models trained with personal data cannot, in all cases, be considered anonymous. For an AI model to be considered anonymous, both (1) the likelihood of direct (including probabilistic) extraction of personal data regarding individuals whose personal data were used to develop the model and (2) the likelihood of obtaining, intentionally or not, such personal data from queries, should be insignificant, taking into account ‘all the means reasonably likely to be used’ by the controller or another person.

To conduct their assessment, SAs should review the documentation provided by the controller to demonstrate the anonymity of the model. In that regard, the Opinion provides a non-prescriptive and non-exhaustive list of methods that may be used by controllers in their demonstration of anonymity, and thus be considered by SAs when assessing a controller’s claim of anonymity. This covers, for instance, the approaches taken by controllers, during the development phase, to prevent or limit the collection of personal data used for training, to reduce their identifiability, to prevent their extraction or to provide assurance regarding state of the art resistance to attacks.

With respect to the second and third questions, the Opinion provides general considerations for SAs to take into account when assessing whether controllers can rely on legitimate interest as an appropriate legal basis for processing conducted in the context of the development and the deployment of AI models.

The Opinion recalls that there is no hierarchy between the legal bases provided by the GDPR, and that it is for controllers to identify the appropriate legal basis for their processing activities. The Opinion then recalls the three-step test that should be conducted when assessing the use of legitimate interest as a legal basis, i.e. (1) identifying the legitimate interest pursued by the controller or a third party; (2) analysing the necessity of the processing for the purposes of the legitimate interest(s) pursued (also referred to as “necessity test”); and (3) assessing that the legitimate interest(s) is (are) not overridden by the interests or fundamental rights and freedoms of the data subjects (also referred to as “balancing test”).

With respect to the first step, the Opinion recalls that an interest may be regarded as legitimate if the following three cumulative criteria are met: the interest (1) is lawful; (2) is clearly and precisely articulated; and (3) is real and present (i.e. not speculative). Such interest may cover, for instance, in the development of an AI model – developing the service of a conversational agent to assist users, or in its deployment – improving threat detection in an information system.

With respect to the second step, the Opinion recalls that the assessment of necessity entails considering: (1) whether the processing activity will allow for the pursuit of the legitimate interest; and (2) whether there is no less intrusive way of pursuing this interest. When assessing whether the condition of necessity is met, SAs should pay particular attention to the amount of personal data processed and whether it is proportionate to pursue the legitimate interest at stake, also in light of the data minimisation principle.

With respect to the third step, the Opinion recalls that the balancing test should be conducted taking into account the specific circumstances of each case. It then provides an overview of the elements that SAs may take into account when evaluating whether the interest of a controller or a third party is overridden by the interests, fundamental rights and freedoms of data subjects.

As part of the third step, the Opinion highlights specific risks to fundamental rights that may emerge either in the development or the deployment phases of AI models. It also clarifies that the processing of personal data that takes place during the development and deployment phases of AI models may impact data subjects in different ways, which may be positive or negative. To assess such impact, SAs may consider the nature of the data processed by the models, the context of the processing and the possible further consequences of the processing.

The Opinion additionally highlights the role of data subjects’ reasonable expectations in the balancing test. This can be important due to the complexity of the technologies used in AI models and the fact that it may be difficult for data subjects to understand the variety of their potential uses, as well as the different processing activities involved. In this regard, both the information provided to data subjects and the context of the processing may be among the elements to be considered to assess whether data subjects can reasonably expect their personal data to be processed. With regard to the context, this may include: whether or not the personal data was publicly available, the nature of the relationship between the data subject and the controller (and whether a link exists between the two), the nature of the service, the context in which the personal data was collected, the source from which the data was collected (i.e., the website or service where the personal data was collected and the privacy settings they offer), the potential further uses of the model, and whether data subjects are actually aware that their personal data is online at all.

The Opinion also recalls that, when the data subjects’ interests, rights and freedoms seem to override the legitimate interest(s) being pursued by the controller or a third party, the controller may consider introducing mitigating measures to limit the impact of the processing on these data subjects. Mitigating measures should not be confused with the measures that the controller is legally required to adopt anyway to ensure compliance with the GDPR. In addition, the measures should be tailored to the circumstances of the case and the characteristics of the AI model, including its intended use. In this respect, the Opinion provides a non-exhaustive list of examples of mitigating measures in relation to the development phase (also with regard to web scraping) and the deployment phase. Mitigating measures may be subject to rapid evolution and should be tailored to the circumstances of the case. Therefore, it remains for the SAs to assess the appropriateness of the mitigating measures implemented on a case-by-case basis.

With respect to the fourth question, the Opinion generally recalls that SAs enjoy discretionary powers to assess the possible infringement(s) and choose appropriate, necessary, and proportionate measures, taking into account the circumstances of each individual case. The Opinion then considers three scenarios.

Under scenario 1, personal data is retained in the AI model (meaning that the model cannot be considered anonymous, as detailed in the first question) and is subsequently processed by the same controller (for instance in the context of the deployment of the model). The Opinion states that whether the development and deployment phases involve separate purposes (thus constituting separate processing activities) and the extent to which the lack of legal basis for the initial processing activity impacts the lawfulness of the subsequent processing, should be assessed on a case-by-case basis, depending on the context of the case.

Under scenario 2, personal data is retained in the model and is processed by another controller in the context of the deployment of the model. In this regard, the Opinion states that SAs should take into account whether the controller deploying the model conducted an appropriate assessment, as part of its accountability obligations to demonstrate compliance with Article 5(1)(a) and Article 6 GDPR, to ascertain that the AI model was not developed by unlawfully processing personal data. This assessment should take into account, for instance, the source of the personal data and whether the processing in the development phase was subject to the finding of an infringement, particularly if it was determined by a SA or a court, and should be less or more detailed depending on the risks raised by the processing in the deployment phase.

Under scenario 3, a controller unlawfully processes personal data to develop the AI model, then ensures that it is anonymised, before the same or another controller initiates another processing of personal data in the context of the deployment. In this regard, the Opinion states that if it can be demonstrated that the subsequent operation of the AI model does not entail the processing of personal data, the EDPB considers that the GDPR would not apply. Hence, the unlawfulness of the initial processing should not impact the subsequent operation of the model. Further, the EDPB considers that, when controllers subsequently process personal data collected during the deployment phase, after the model has been anonymised, the GDPR would apply in relation to these processing operations. In these cases, the Opinion considers that, as regards the GDPR, the lawfulness of the processing carried out in the deployment phase should not be impacted by the unlawfulness of the initial processing.

The European Data Protection Board Having regard to Article 63 and Article 64(2) of the Regulation 2016/679/EU of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (hereinafter “GDPR”), Having regard to the EEA Agreement and in particular to Annex XI and Protocol 37 thereof, as amended by the Decision of the EEA joint Committee No 154/2018 of 6 July 2018, Having regard to Article 10 and Article 22 of its Rules of Procedure, Whereas:

(1) The main role of the European Data Protection Board (hereafter the “Board” or the “EDPB”) is to ensure the consistent application of the GDPR throughout the European Economic Area (“EEA”). Article 64(2) GDPR provides that any supervisory authority (“SA”), the Chair of the Board or the Commission may request that any matter of general application or producing effects in more than one EEA Member State be examined by the Board with a view to obtaining an opinion. The aim of this opinion is to examine a matter of general application or which produces effects in more than one EEA Member State.

(2) The opinion of the Board shall be adopted pursuant to Article 64(3) GDPR in conjunction with Article 10(2) of the EDPB Rules of Procedure within eight weeks from when the Chair and the competent supervisory authority have decided that the file is complete. Upon decision of the Chair, this period may be extended by a further six weeks taking into account the complexity of the subject matter.

(more…)

In alignment with the ongoing concerns from several European data protection authorities publishing guidelines on data scrapping (i.e., the Dutch DPA, the Italian DPA and the UK Information Commissioner’s Office), the Global Privacy Assembly (GPA)’s International Enforcement Cooperation Working Group (IEWG) recently published a Joint statement on data scraping and the protection of privacy (signed by the Canadian, British, Australian, Swiss, Norwegian, Moroccan, Mexican, and Jersey data protection authorities) to provide further input for businesses when considering data.

The statement emphasizes that:

Even publicly accessible data is subject to privacy laws across most jurisdictions – meaning that scraping activities must comply with data protection regulations requiring a (i) lawful basis for data collection and, (ii) transparency with individuals, including obtaining consent where necessary.

Collecting mass data can constitute a reportable data breach if it includes unauthorized access to personal data.

Relying on platform terms (e.g., Instagram) for data scraping does not automatically ensure compliance as (i) this contractually authorized use of scraped personal data is not automatically compliant with data protection and artificial intelligence (AI) laws, and (ii) it is difficult to determine whether scraped data is used solely for purposes allowed by the contract terms.

When training AI models, it is critical to adhere not only to privacy regulations but also to emerging AI laws as ensuring AI model transparency and data processing limitations is now increasingly expected by privacy regulators.

The sensitivity of this topic underscores the close relationship between data protection and the ever-data-hungry artificial intelligence industry.

First Publication on K&L Gates Cyber Law Watch blog, in collaboration with Anna Gaentzhirt

Launched in 2015, the EU’s Digital Single Market Strategy aimed to foster the digital harmonization between the EU member states and contribute to economic growth, boosting jobs, competition, investment and innovation in the EU.

The EU AI Act characterizes a fundamental element of this strategy. By adopting the first general-purpose regulation of artificial intelligence in the world, Brussels sent a global message to all stakeholders, in the EU and abroad, that they need to pay attention to the AI discussion happening in Europe.

The EU AI Act achieves a delicate balancing act between the specifics, including generative AI, systemic models and computing power threshold, and its general risk-based approach. To do so, the act includes a tiered implementation over a three-year period and a flexible possibility to revise some of the more factual elements that would be prone to rapid obsolescence, such as updating the threshold of the floating point operations per second — a measurement of the performance of a computer for general-purpose AI models presumed to have high impact capabilities. At the same time, the plurality of stakeholders involved in the interpretation of the act and its interplay with other adopted, currently in discussion or yet-to-come regulations will require careful monitoring by the impacted players in the AI ecosystems.

(more…)

Dans le cadre de notre nouveau cycle de conférences autour du numérique et des problématiques « cyber », nous avons le plaisir de vous convier à un petit déjeuner organisé dans nos locaux parisiens, à l’occasion duquel Claude-Etienne Armingaud, CIPP/E (Associé, Protection des données & Technologies) se penchera sur la préparation des entreprises dans le cadre de leur mise en conformité au regard du Règlement sur les Données (EU Data Act). Une belle occasion d’échanger, de s’inspirer et d’entrer en relation avec des professionnels du domaine !

Les places étant limitées, nous vous invitons à vous inscrire dès à présent via le lien suivant : https://ow.ly/183L50TAWbP.

We kindly invite you to the K&L Gates Legal & Compliance Breakfast on 8 October 2024 in Frankfurt.

Please join us for coffee, tea and croissants and take away impulses and new momentum for the work on your data strategy.

We will discuss how the Data Act and the AI Act impact a company’s data strategy. How does one reconcile them with each other and with other elements of the legal framework, like GDPR and antitrust laws?

Our key note speaker will be Claude-Étienne Armingaud, a partner at K&L Gates‘ Paris office. He coordinates our European technology and privacy practices and has been building pragmatic legal solutions on both sides of the Atlantic for many years.

We look forward to welcoming you at our Frankfurt office on level 28 of the „Opernturm“ tower.

Please register by clicking here.

Don’t miss the plenary session “AI, the future of law?” on Thursday, October 17 from 2 p.m. to 4 p.m. at the Palais du Grand Large in Saint-Malo. This event, organized by the ACE – Young Lawyers commission, will be introduced by its president Ludovic Blanc (Lawyer at the Paris Bar, President of ACE-JA national).

Our partner Claude-Etienne Armingaud, CIPP/E (Partner, Data Protection & Technologies), François GIRAULT (Lawyer at the Montpellier Bar, President of the CNB Prospective and Innovation Commission, Vice-President ACE Ouest Méditerranée, Vice-President Liberal Professions CPME 34), Philippe BARON (Lawyer at 2BMP Avocats, President of the CNB Digital Commission) and Christiane Féral-Schuhl (Lawyer at the Paris Bar in digital law, former President of the National Council of Bars, former President of the Paris Bar Association) will participate in this essential discussion on the impact of AI on the legal profession.

This meeting will be hosted by Anne-Cécile Sarfati, journalist and columnist, with a Live Show presented by Tiphaine MARY (Maître et Talons), Lawyer at the Paris Bar.

Do not hesitate to reserve your place by registering via the following link: https://lnkd.in/gJQ7qqfV.

  1. My company is not established in the EU. Should I really worry about the EU Data Act applying to my company?
  2. What are the operational impacts of the EU Data Act on my products‘ interface?
  3. My products are already on the market, can I still provide them as I am today?
  4. What data is in the EU Data Act scope?
  5. Does the EU Data Act provide for a harmonized framework for blockchain-based smart contracts?
  6. Who can request the sharing of data?
  7. How should data be made available?
  8. Are there any limitations on how the data can be shared?
  9. Can I invoke intellectual property right to forego the data sharing?
  10. Should the data be made available to public entities as well?
  11. Will I need to update my contracts as well?
  12. Will the data be required to stay in the European Union?
  13. When will all this become an operational reality for me?
  14. What are the EU Data Act penalties?
(more…)

Six years after the European Regulation 2016/679 on the protection of personal data (“GDPR”) came into force, the European Union has just adopted a new regulation targeting a better distribution of the value generated by the use of data between players in the digital economy.

Entered into force on 11 January 2024, in only 22 months, Regulation 2023/2854 regarding the harmonized rules on fair access to and use of data (Regulation on Data or “EU Data Act”) aims at broadening the scope of Europe’s digital sovereignty, beyond the boundaries of personal data alone.

(more…)

Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)

(Text with EEA relevance)

(more…)

A Practice Note highlighting issues to consider when counseling a prospective buyer of an AI company. This Note discusses the primary due diligence issues relating to AI and machine learning (ML) and strategies to mitigate or allocate risks in the context of an M&A transaction. This Note is also helpful for AI company targets that seek to anticipate potential issues. In this Note, the term AI company refers to a company involved in the research, development, or monetization of a product or service that is primarily powered by an ML algorithm or model that creates functionality or utility through the use of AI.

Read the full article on Practical Law, written in collaboration with by Annette Becker, Alex V. Imas, Jake Bernstein, Mark H. Wittow, Melanie Bruneau, Marion Baumann, Kenneth S. Knox, Julie F. Rizzo, Cameron Abbott, Thomas Nietsch, and Nicole H. Buckley.