Quoted in Agenda article “New EU AI Rules Will Have Global Impact”:

The EU AI Act will apply to all companies whose AI systems are used by or affect EU-based individuals, according to Claude-Etienne Armingaud, a partner in K&L Gates’ Paris office and a member of the law firm’s technology transactions and sourcing practice group.

Given the Act’s breadth, global companies developing AI systems, most of which are headquartered in either the U.S. or China, will face two options: “Get in line with the EU AI Act or abstain from the EU market,” Armingaud said.

Some companies threatened to exit the European market after the EU’s General Data Protection Regulation, or GDPR, became effective in 2018, but many didn’t actually follow through, according to Armingaud.

“So, without a doubt, all companies dabbling in AI will need to comply if they truly want to remain global,” he said.

Agenda – New EU AI Rules Will Have Global Impact

Some time has passed since the EU Digital Services Act (Regulation 2022/2065, DSA) was published, and since then the discussions about Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) have dominated media coverage (see the European Commission’s initial press release here and coverage of platforms’ petitions against their categorization as VLOPs/VLOSEs here and here).

Smaller online service providers tend to forget that they, too, may face new obligations under the DSA from 17 February 2024 onwards; they would be well advised to comply in order to avoid significant sanctions (e.g., fines of up to 6% of global annual turnover or periodic penalty payments of up to 5% of average daily global turnover).

The following paragraphs provide a brief summary of the most relevant content of the DSA and will help online service providers to understand:

  • If and to what extent the DSA applies to them;
  • What specific obligations exist; and
  • What sanctions may be applied in case of breach.
(more…)

The UK’s Information Commissioner’s Office (the “ICO”) has recently sent warnings to the UK’s most visited websites to inform them that they may face enforcement action if they do not make changes to their cookie banners to ensure compliance with UK data protection law. For example, some websites warned by the ICO do not provide their users with a fair choice over tracking for personalised advertising. This position aligns with the stance taken in the EU, notably in France (see prior Alert here).

The ICO’s actions are part of a larger commitment to ensure individuals’ privacy rights are upheld by companies active in the online advertising industry. Publishers receiving a warning have only 30 days to bring their websites in line with the UK GDPR. As a further incentive for publishers to become compliant, the ICO has also warned that, in January, it will publish the details of those websites that have not made the requested changes. Such publicity may be even less welcome than the potentially large fines associated with breaches of the data protection framework.

The ICO’s statement highlights once again how important it is for companies to review how cookies are used on their websites and how their cookie banners and cookie consent management solutions are displayed. To be compliant, websites must make it as easy as possible for users to reject all advertising cookies. Personalised advertising can be compliant as long as it is based on the user’s consent; if users reject all advertising cookies, websites can only show general adverts that are not tailored to the users’ browsing history. Consequently, websites should display a cookie banner that makes it as easy for users to reject cookies as it is to accept them.

The ICO’s guidance on cookie banners can be found here; it may need to be further updated in light of the newly presented Data Protection and Digital Information Bill.

First publication: Cyber Law Watch Blog with Sophie Verstraeten

In the post-Brexit landscape, businesses have needed to rethink how they approach demonstrating compliance with a host of regulations, managing international data transfers, and building trust with data subjects. Having to comply with the GDPR and prepare for other data protection bills, all while continuing to comply with the EU GDPR as well as a host of global regulations, means businesses might look to certification as a common system for adequacy, a one-stop shop for addressing the overlaps and, more crucially, closing the gaps in their privacy compliance programs.

Featured speakers:

  • Noshin Khan, Senior Compliance Counsel, Ethics Center of Excellence, OneTrust 
  • Claude-Étienne Armingaud, Partner, K&L Gates

Register here.

The UK Government has laid adequacy regulations before Parliament that, once in force from 12 October 2023, will permit use of the UK – US “Data Bridge” as a safeguard for personal data transfers from the UK to the US under Article 44 UK GDPR.

The UK – US “Data Bridge,” also known as the UK Extension to the EU – US Data Privacy Framework (Framework), allows UK organisations to transfer personal data to organisations located in the United States that have self-certified their compliance with certain data protection principles and appear on the Data Privacy Framework List. The scheme, administered by the US Department of Commerce, provides a redress mechanism for data subjects in the European Union to enforce their rights under the EU General Data Protection Regulation, both in relation to a participating US organisation’s compliance with the Framework and in relation to US national security agencies’ access to personal data. This new redress mechanism attempts to prevent a challenge to the Framework similar to the Schrems II case, which invalidated the Framework’s predecessor, the EU – US Privacy Shield. Despite this, the Framework has already been the subject of a short-lived case before the Court of Justice of the EU, and there may be more legal challenges.

Alongside the adequacy regulations, the UK government published an analysis of the US laws governing US national security agencies’ access to the personal data of European data subjects. This analysis effectively completes the international data transfer risk assessment (TRA) that UK organisations have been required to carry out before transferring personal data to the US. UK organisations relying on the other Article 44 UK GDPR safeguards, such as the International Data Transfer Agreement, will likely also be able to rely on this analysis in place of completing a TRA.

First publication: K&L Gates Cyber Law Watch Blog in collaboration with Noirin McFadden

In this webinar, our lawyers discuss generative artificial intelligence (AI). Fast-paced growth in generative AI is changing the way we work and live, and with such changes come complex issues and uncertainty. We will address the legal, policy, and ethical risks, mitigation, and best practices to consider as you develop generative AI products and services or use generative AI in the operation of your business.

With Annette Becker, Guillermo Christensen, Whitney McCollum, Julie Rizzo, and Mark Wittow

If you were not able to join last Tuesday, you can watch the replay below:

Source: K&L Gates Hub

Access the full text of the EU AI Act here.

On 14 June 2023, the European Parliament (Parliament) plenary voted on its position on the Artificial Intelligence Act (AI Act), which was adopted by a large majority, with 499 votes in favor, 28 against, and 93 abstentions. The newly adopted text (Parliament position) will serve as the Parliament’s negotiating position during the forthcoming interinstitutional negotiations (trilogues) with the Council of the European Union (Council) and the European Commission (Commission).

The members of Parliament (MEPs) proposed several changes to the Commission’s proposal, published on 21 April 2021, including expanding the list of high-risk uses and prohibited AI practices. Specific transparency and safety provisions were also added for foundation models and generative AI systems. MEPs also introduced a definition of AI that is aligned with the definition provided by the Organisation for Economic Co-operation and Development. In addition, the text reinforces the right of natural persons (or groups of natural persons) to file a complaint about AI systems and to receive explanations of decisions based on high-risk AI systems that significantly impact their fundamental rights.

Definition

The Parliament position provides that AI, or an AI System, should refer to “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments.” This amends the Commission’s proposal, under which an AI System was limited solely to software acting for human-defined objectives; the definition now encompasses the metaverse through the explicit inclusion of “virtual environments.”

Agreement on the final version of the AI definition is expected to be reached at the technical level during trilogue negotiations, as it appears to be a noncontentious item.

Another notable inclusion relates to foundation models (Foundation Models), which were not yet in the public eye when the Commission’s proposal was published and which are defined as a subset of AI Systems trained on broad data at scale, designed for generality of output, and adaptable to a wide range of distinctive tasks.

(more…)

Speakers:

  • Zelda Olentia, Senior Product Manager, RadarFirst
  • Claude-Étienne Armingaud, CIPP/E, Partner, Data Protection, Privacy, and Security Practice Group Coordinator, K&L Gates LLP

Air Date: Wednesday 14 June at 1 pm ET / 10 am PT. Replay on demand available here!

Description

Gartner predicts that by the end of 2024, 75% of the world’s population will have its personal data covered under modern privacy regulations. This exponential increase from only 10% global coverage in 2020 raises the stakes for global organizations. The challenge will be to ensure compliance, while safeguarding trust for an unprecedented volume of regulated data.

Join the upcoming live Q&A to learn what’s driving this expansion and how to prepare. You’ll hear from Zelda Olentia, Senior Product Manager at RadarFirst, and special guest Claude-Etienne Armingaud, a partner at K&L Gates LLP and coordinator of the firm’s Data Protection, Privacy, and Security practice group.

In this session we will cover:

  • What is driving the expansion of privacy regulation?
  • Where are we on this path towards 65% global coverage?
  • How do you scale privacy operations for international privacy laws quickly and effectively before year-end 2024?

Register Now >>

Version 2.1 dated 24 May 2023 – Go to the official PDF version.

Executive Summary

The European Data Protection Board (EDPB) has adopted these guidelines to harmonise the methodology supervisory authorities use when calculating the amount of a fine. These Guidelines complement the previously adopted Guidelines on the application and setting of administrative fines for the purposes of Regulation 2016/679 (WP253), which focus on the circumstances in which to impose a fine.

The calculation of the amount of the fine is at the discretion of the supervisory authority, subject to the rules provided for in the GDPR. In that context, the GDPR requires that the amount of the fine shall in each individual case be effective, proportionate and dissuasive (Article 83(1) GDPR). Moreover, when setting the amount of the fine, supervisory authorities shall give due regard to a list of circumstances that refer to features of the infringement (its seriousness) or of the character of the perpetrator (Article 83(2) GDPR). Lastly, the amount of the fine shall not exceed the maximum amounts provided for in Articles 83(4), (5) and (6) GDPR. The quantification of the amount of the fine is therefore based on a specific evaluation carried out in each case, within the parameters provided for by the GDPR.

Taking the abovementioned into account, the EDPB has devised the following methodology, consisting of five steps, for calculating administrative fines for infringements of the GDPR.

First, the processing operations in the case must be identified and the application of Article 83(3) GDPR needs to be evaluated (Chapter 3). Second, the starting point for further calculation of the amount of the fine needs to be identified (Chapter 4). This is done by evaluating the classification of the infringement in the GDPR, evaluating the seriousness of the infringement in light of the circumstances of the case, and evaluating the turnover of the undertaking. The third step is the evaluation of aggravating and mitigating circumstances related to past or present behaviour of the controller/processor, increasing or decreasing the fine accordingly (Chapter 5). The fourth step is identifying the relevant legal maximums for the different infringements; increases applied in previous or subsequent steps cannot exceed this maximum amount (Chapter 6). Lastly, it needs to be analysed whether the calculated final amount meets the requirements of effectiveness, dissuasiveness and proportionality, and the fine can still be adjusted accordingly (Chapter 7), without exceeding the relevant legal maximum.

Throughout all abovementioned steps, it must be borne in mind that the calculation of a fine is no mere mathematical exercise. Rather, the circumstances of the specific case are the determining factors leading to the final amount, which can – in all cases – be any amount up to and including the legal maximum.
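To make the structure of those five steps easier to follow, the short Python sketch below lays them out procedurally. It is purely illustrative and not part of the Guidelines: the seriousness shares and the adjustment factor are hypothetical placeholders, and only the legal maximums of Articles 83(4) and 83(5) GDPR (the higher of EUR 10 million or 2% of worldwide annual turnover, and of EUR 20 million or 4%, respectively) are taken from the Regulation itself.

```python
# Illustrative sketch of the EDPB's five-step fine methodology.
# All numeric shares and factors are hypothetical placeholders; only the
# Article 83(4)/(5) GDPR legal maximums come from the Regulation.

EUR_M = 1_000_000

def legal_maximum(article: str, worldwide_turnover: float) -> float:
    """Statutory cap used in Step 4: the higher of a fixed amount or a share of turnover."""
    if article == "83(4)":
        return max(10 * EUR_M, 0.02 * worldwide_turnover)
    return max(20 * EUR_M, 0.04 * worldwide_turnover)  # Articles 83(5)/(6)

def starting_amount(seriousness: str, cap: float) -> float:
    """Step 2: starting point expressed as a share of the cap (placeholder shares)."""
    share = {"low": 0.05, "medium": 0.15, "high": 0.50}[seriousness]
    return share * cap

def calculate_fine(article: str, worldwide_turnover: float,
                   seriousness: str, adjustment_factor: float) -> float:
    """Steps 2-5 chained together; Step 1 (identifying the processing operations
    and applying Article 83(3)) happens before this function is called."""
    cap = legal_maximum(article, worldwide_turnover)
    amount = starting_amount(seriousness, cap)       # Step 2
    amount *= adjustment_factor                      # Step 3: aggravating/mitigating circumstances
    amount = min(amount, cap)                        # Step 4: never exceed the legal maximum
    # Step 5: effectiveness, proportionality and dissuasiveness are a legal
    # judgement, not arithmetic; the authority may still adjust within the cap.
    return amount

# Example: Article 83(5) infringement, EUR 500M turnover, medium seriousness,
# with a 1.2x uplift for aggravating circumstances.
print(calculate_fine("83(5)", 500 * EUR_M, "medium", 1.2))
```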

These Guidelines and their methodology will remain under constant review by the EDPB.

(more…)

Closing in on the fifth anniversary of the EU General Data Protection Regulation (GDPR) becoming applicable, the Irish Data Protection Commission (DPC) announced on 22 May 2023 that it had fined Meta EUR 1.2 billion (USD 1.3 billion), the highest GDPR fine levied since 2018.

Further to the DPC decision (Decision), and in addition to the record fine, Meta will need to:

  • suspend any future transfers of personal data to the United States within five months from the date of notification of the decision to Meta Ireland;
  • bring its data processing operations into compliance by ceasing the unlawful processing, including storage, in the United States of personal data of its European Economic Area users transferred without sufficient safeguards, within six months from the date of notification of the DPC’s decision to Meta Ireland.

The core of the grievances relates to a decade-long (and ongoing) crusade initiated by data activist Maximilian Schrems and his data protection association, None of Your Business (noyb). The crusade started in 2013, with a first step resulting in the resounding invalidation of the Safe Harbor framework, which had allowed personal data to be freely transferred from the European Union to the United States, in the 2015 Schrems I case (see our Alert). It was subsequently followed by a similar action against Safe Harbor’s successor, the Privacy Shield framework, leading to the same result in the Schrems II case (see our Alerts here, here and here).

(more…)