This series of webinars will address the potential impacts of artificial intelligence (AI) regulations on businesses across the globe. Recent developments in the general availability of AI and generative AI solutions are leading regulators, at a global level, to consider legal frameworks to protect both individuals affected by AI and digital sovereignty. Our panelists will address these potential regulatory developments, as well as the expected timeline for these changes, region by region.

Our first panel will feature a discussion focused on current and future regulatory requirements for the AI industry throughout the EU and the UK. With the language of the EU’s AI Act heading into its trilogue, it is even more important for stakeholders to understand the EU’s approach and prepare for the potential impact of this regulation in Europe, the UK, and beyond. The panelists will address key questions, such as:

  • What new undertakings will bear on the stakeholders in this industry?
  • Will government regulation be “technology neutral”?
  • Could the various frameworks lead to conflicts for local compliance efforts?
  • Will a requirement for an AI system to explain its thinking or provide substantive sources for all results have a deleterious impact on its ability to “think” independently?  
  • Is it too late for stakeholders to have a say in these expected frameworks?

Speakers:

Claude-Étienne Armingaud | PARTNER | PARIS

Giovanni Campi | POLICY DIRECTOR | BRUSSELS

Jennifer Marsh | PARTNER | LONDON

Register here: K&L Gates Website

Watch the recording here.

Access the full text of the EU AI Act here.

August may be perceived as the month when France shuts down for the summer. Yet, just before the summer ’23 holiday, the French Data Protection Authority (“CNIL”) published several calls to action for the various players of the data ecosystem in general, and in artificial intelligence (AI) in particular, following its 16 May 2023 announcement of an AI action plan:

  • Opening and re-use of publicly accessible data – The CNIL published draft guidance on such data usage, and all stakeholders are invited to weigh in until 15 October 2023 before its finalization. While non-binding, this guidance is expected to lead the way on how the EU’s Supervisory Authorities will apprehend and enforce the General Data Protection Regulation (“GDPR”) when personal data is scraped from online sources and subsequently used for other purposes. It notably focuses on Art. 14 GDPR, the indirect collection of personal data, and the specific prior information requirements. Artificial intelligence is explicitly mentioned by the CNIL in the draft, as such data, which feeds large language models, “undeniably contributes to the development of the digital economy and is at the core of artificial intelligence.” Stakeholders are invited to submit their observations online through the dedicated portal.
  • Artificial Intelligence Sandbox – Following in the footsteps of its connected cameras, EdTech & eHealth initiatives, the CNIL is launching an AI sandbox call for projects, where stakeholders involved in AI in connection with public services may apply to receive dedicated assistance by the regulator to co-construct AI systems complying with data protection and privacy rules.
  • Creation of databases for Artificial Intelligence uses – Open to the broadest possible array of stakeholders (including individuals), this call for contributions notably addresses the specific issues relating to the use of publicly accessible data and aims at informing the CNIL of the various positions at play and how to balance the GDPR’s requirements (information, legitimate interests, exercise of rights) with data subjects’ expectations. Stakeholders are invited to submit their observations online through the dedicated form (in French – our free translation in English is available below); no deadline for submission has been set.

Thrilled to share that I’ve been shortlisted for Privacy Leader: Legal for this year’s PICCASO (Privacy, InfoSec, Culture Change & Awareness Societal Organisation) Awards.

Grateful to the award committee for the recognition and to our K&L Gates #DataProtection team as a whole, which is a constant source of motivation and fun even in complex moments (especially in Europe cc Ulrike Elteste (Mahlmann) Noirin McFadden Andreas Müller Veronica Muratori Thomas Nietsch Camille Scarparo). Also psyched to be among such a roster of nominees, whether in this category or the others as a whole. Whoever gets awarded, it’ll always be a win for #privacy!

Looking forward to celebrating with you all in person in London!

K&L Gates ranked “Recommended” with Claude-Etienne Armingaud.

Source: Leaders League


Once again included in the Best Lawyers in France ranking for Privacy and Data Security Law

Source: Best Lawyers

In this webinar, our lawyers discuss generative artificial intelligence (AI). Fast-paced growth in generative AI is changing the way we work and live. With such changes come complex issues and uncertainty. We will address the legal, policy, and ethical risks, mitigation, and best practices to consider as you develop generative AI products and services, or use generative AI in the operation of your business.

With Annette Becker, Guillermo Christensen, Whitney McCollum, Julie Rizzo, and Mark Wittow

If you were not able to join last Tuesday, you can watch the replay below:

Source: K&L Gates Hub

Access the full text of the EU AI Act here.

On 14 June 2023, the European Parliament (Parliament) plenary voted on its position on the Artificial Intelligence Act (AI Act), which was adopted by a large majority, with 499 votes in favor, 28 against, and 93 abstentions. The newly adopted text (Parliament position) will serve as the Parliament’s negotiating position during the forthcoming interinstitutional negotiations (trilogues) with the Council of the European Union (Council) and the European Commission (Commission).

The members of Parliament (MEPs) proposed several changes to the Commission’s proposal, published on 21 April 2021, including expanding the list of high-risk uses and prohibited AI practices. Specific transparency and safety provisions were also added on foundation models and generative AI systems. MEPs also introduced a definition of AI that is aligned with the definition provided by the Organisation for Economic Co-operation and Development. In addition, the text reinforces natural persons’ (or their groups’) right to file a complaint about AI systems and receive explanations of decisions based on high-risk AI systems that significantly impact their fundamental rights.

Definition

The Parliament position provides that AI, or an AI System, should refer to “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments.” This amends the Commission’s proposal, under which an AI System was limited to software acting for human-defined objectives, and now encompasses the metaverse through the explicit inclusion of “virtual environments.”

Agreement on the final version of the definition of AI is expected to be reached at the technical level during trilogue negotiations, as it appears to be a noncontentious item.

Another notable inclusion relates to foundation models (Foundation Models), which were not yet in the public eye when the Commission’s proposal was published and which are defined as a subset of AI Systems trained on broad data at scale, designed for generality of output, and adaptable to a wide range of distinctive tasks.


Speakers:

  • Zelda Olentia, Senior Product Manager, RadarFirst
  • Claude-Étienne Armingaud, CIPP/E, Partner, Data Protection, Privacy, and Security Practice Group Coordinator, K&L Gates LLP

Air Date: Wednesday 14 June at 1 pm ET / 10 am PT. Replay on demand available here!

Description

Gartner predicts that by the end of 2024, 75% of the world’s population will have its personal data covered under modern privacy regulations. This exponential increase from only 10% global coverage in 2020 raises the stakes for global organizations. The challenge will be to ensure compliance, while safeguarding trust for an unprecedented volume of regulated data.

Join the upcoming live Q&A to learn what’s driving this expansion and how to prepare. You’ll hear from Zelda Olentia, Senior Product Manager at RadarFirst, and special guest Claude-Etienne Armingaud, who is a partner at K&L Gates LLP and coordinator of the Firm’s Data Protection, Privacy, and Security practice group.

In this session we will cover:

  • What is driving the expansion of privacy regulation?
  • Where are we on this path towards 65% global coverage?
  • How do you scale privacy operations for international privacy laws quickly and effectively before year-end 2024?

Register Now >>

Version 2.1 dated 24 May 2023 – Go to the official PDF version.

Executive Summary

The European Data Protection Board (EDPB) has adopted these guidelines to harmonise the methodology supervisory authorities use when calculating the amount of the fine. These Guidelines complement the previously adopted Guidelines on the application and setting of administrative fines for the purpose of Regulation 2016/679 (WP253), which focus on the circumstances in which to impose a fine.

The calculation of the amount of the fine is at the discretion of the supervisory authority, subject to the rules provided for in the GDPR. In that context, the GDPR requires that the amount of the fine shall in each individual case be effective, proportionate and dissuasive (Article 83(1) GDPR). Moreover, when setting the amount of the fine, supervisory authorities shall give due regard to a list of circumstances that refer to features of the infringement (its seriousness) or of the character of the perpetrator (Article 83(2) GDPR). Lastly, the amount of the fine shall not exceed the maximum amounts provided for in Articles 83(4), (5) and (6) GDPR. The quantification of the amount of the fine is therefore based on a specific evaluation carried out in each case, within the parameters provided for by the GDPR.

Taking the abovementioned into account, the EDPB has devised the following methodology, consisting of five steps, for calculating administrative fines for infringements of the GDPR.

Firstly, the processing operations in the case must be identified and the application of Article 83(3) GDPR needs to be evaluated (Chapter 3). Secondly, the starting point for further calculation of the amount of the fine needs to be identified (Chapter 4). This is done by evaluating the classification of the infringement in the GDPR, evaluating the seriousness of the infringement in light of the circumstances of the case, and evaluating the turnover of the undertaking. The third step is the evaluation of aggravating and mitigating circumstances related to past or present behaviour of the controller/processor and increasing or decreasing the fine accordingly (Chapter 5). The fourth step is identifying the relevant legal maximums for the different infringements. Increases applied in previous or next steps cannot exceed this maximum amount (Chapter 6). Lastly, it needs to be analysed whether the calculated final amount meets the requirements of effectiveness, dissuasiveness and proportionality. The fine can still be adjusted accordingly (Chapter 7), however without exceeding the relevant legal maximum.

Throughout all abovementioned steps, it must be borne in mind that the calculation of a fine is no mere mathematical exercise. Rather, the circumstances of the specific case are the determining factors leading to the final amount, which can – in all cases – be any amount up to and including the legal maximum.
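For readers who find it easier to see the sequence of steps laid out schematically, below is a minimal sketch in Python of how steps 2 through 5 fit together. It is purely illustrative: the function, its parameters, and all figures are hypothetical placeholders, and, as the Guidelines stress, the actual calculation is a case-by-case legal assessment rather than a mathematical exercise.

```python
# Illustrative sketch only: the EDPB methodology is a legal assessment, not a
# formula. All names and figures below are hypothetical placeholders.

def calculate_fine(starting_amount: float,
                   aggravating_mitigating_factor: float,
                   legal_maximum: float) -> float:
    """Schematic view of steps 2-5 of the EDPB methodology.

    starting_amount: step 2 - an amount reflecting the classification of the
        infringement, its seriousness, and the undertaking's turnover.
    aggravating_mitigating_factor: step 3 - increases (>1) or decreases (<1)
        the amount for past or present behaviour of the controller/processor.
    legal_maximum: step 4 - the ceiling under Articles 83(4)-(6) GDPR.
    """
    adjusted = starting_amount * aggravating_mitigating_factor  # step 3
    capped = min(adjusted, legal_maximum)                       # step 4
    # Step 5: effectiveness, dissuasiveness and proportionality are assessed
    # case by case; the amount may still be adjusted, but never above the cap.
    return capped

# Hypothetical example: EUR 2M starting amount, +50% for aggravating
# circumstances, Article 83(5) static maximum of EUR 20M.
print(calculate_fine(2_000_000, 1.5, 20_000_000))  # 3000000.0
```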

These Guidelines and their methodology will remain under constant review by the EDPB.


Access the full list of the EDPB and WP29 Guidelines here, including consultation versions, now-current versions and redlines between versions.