The Information Commissioner’s Office (ICO) recently launched a consultation series on how data protection laws should apply to the development and use of generative AI models (“Gen AI”). In the coming months, the ICO will publish further views on how to interpret specific requirements of UK GDPR and Part 2 of the DPA 2018 in relation to Gen AI. This first part of the consultation focusses on whether it is lawful to train Gen AI on personal data scraped from the web. The consultation seeks feedback from stakeholders with an interest in Gen AI.

As outlined by the ICO, web scraping will involve the collection and processing of personal data, which may not have been placed online directly by the data subjects themselves. To comply with the UK GDPR, Gen AI developers would need to ensure there is a valid lawful basis for their processing, as well as comply with the relevant information requirements that apply to the indirect collection of personal data.

For the first part of the consultation series, the ICO published a policy position on the lawful basis for training Gen AI models on web-scraped data, which can be found here. More specifically, this consultation focusses on the ‘legitimate interest’ lawful basis under art. 6(1)(f) UK GDPR and the ‘three-part’ test that a data controller must pass to rely on the legitimate interest basis (a so-called Legitimate Interest Assessment). The ICO has considered various actions that Gen AI developers could take to meet this three-part test and ensure that the collection of training data through web scraping, i.e. the processing of personal data, is compliant with the principles of the UK GDPR. The ICO would now like to hear from relevant stakeholders on their views of the proposed regulatory approach and the impact it would have on their organisations. A link to the survey can be found here.

The deadline to submit a response is 1 March 2024.

First publication: K&L Gates Cyber Law Watch blog with Sophie Verstraeten

Join our webinar as we explore the implications of the EU AI Act.

Featured speakers

Yücel Hamzaoğlu​

Partner
HHK Legal

Melike Hamzaoğlu

Partner
HHK Legal

Claude-Étienne Armingaud​

Partner
K&L Gates

Noshin Khan​

Ethics & Compliance, Associate Director
OneTrust​

Harry Chambers

Senior Privacy Analyst
OneTrust

Register here.

Quoted in Agenda article “New EU AI Rules Will Have Global Impact”:

The EU AI Act will apply to all companies whose AI systems are used by, or affect, EU-based individuals, according to Claude-Etienne Armingaud, a partner in K&L Gates’ Paris office and a member of the law firm’s technology transactions and sourcing practice group.

Due to its breadth, global companies developing AI systems, most of which are headquartered either in the U.S. or in China, will face two options: “Get in line with the EU AI Act or abstain from the EU market,” Armingaud said.

Some companies threatened to exit the European market after the EU’s General Data Protection Regulation, or GDPR, became effective in 2018, but many didn’t actually follow through, according to Armingaud.

“So, without a doubt, all companies dabbling in AI will need to comply if they truly want to remain global,” he said.

Agenda – New EU AI Rules Will Have Global Impact

It has been some time since the EU Digital Services Act (Regulation 2022/2065, DSA) was published, and since then, discussions about Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) have dominated media coverage (see the European Commission’s initial press release here and coverage of petitions against categorization as VLOPs/VLOSEs here and here).

Smaller online service providers tend to forget that they may also face new obligations under the DSA from 17 February 2024 onwards, and would be well advised to comply to avoid significant sanctions (e.g., fines of up to 6% of global annual turnover or periodic penalty payments of up to 5% of average global daily turnover).

The following paragraphs provide a brief summary of the most relevant content of the DSA and will help online service providers to understand:

  • If and to what extent the DSA applies to them;
  • What specific obligations exist; and
  • What sanctions may be applied in case of breach.

The UK’s Information Commissioner’s Office (the “ICO”) has recently sent warnings to the UK’s most visited websites to inform them that they may face enforcement action if they do not make changes to their cookie banners to ensure compliance with UK data protection law. For example, some websites warned by the ICO do not provide their users with a fair choice on tracking for personalised advertising. This position aligns with the EU’s stance, notably in France (see prior Alert here).

The ICO’s actions are part of a larger commitment to ensure individuals’ privacy rights are upheld by companies active in the online advertising industry. Publishers receiving a warning have only 30 days to bring their websites in line with the UK GDPR. As a further incentive for publishers to become compliant, the ICO has also warned that it will publish in January the details of those websites that have not made the requested changes. Such publicity may be even less welcome than the potentially large fines associated with breach of the data protection framework.

The statement made by the ICO highlights once again the importance for companies to review how cookies are used on their websites and how their cookie banners, along with their cookie consent management solution, are displayed. To be compliant, websites must make it as easy as possible for users to reject all advertising cookies. Personalised advertising can be compliant as long as it is based on the user’s consent. If users reject all advertising cookies, websites can only show general adverts that are not tailored to the users’ browsing history. Consequently, websites should display a cookie banner that makes it as easy for users to reject cookies as it is to accept them.
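By way of illustration only, the sketch below shows one way a banner could give “Reject all” the same prominence and the same one-click effect as “Accept all.” The element names, storage key, and saveConsent helper are hypothetical and are not drawn from the ICO guidance or any particular consent management solution.

```typescript
// Minimal sketch: a consent banner where "Reject all" is as easy to use as
// "Accept all" (one click, identical styling). Names are hypothetical.

type ConsentChoice = "accepted" | "rejected";

function saveConsent(choice: ConsentChoice): void {
  // Persist the user's choice; advertising cookies would only be set
  // elsewhere in the site if the stored choice is "accepted".
  localStorage.setItem("adConsent", choice);
}

function renderConsentBanner(): void {
  const banner = document.createElement("div");
  banner.setAttribute("role", "dialog");
  banner.textContent = "We would like to use cookies for personalised advertising. ";

  const makeButton = (label: string, choice: ConsentChoice) => {
    const button = document.createElement("button");
    button.textContent = label;
    // Same styling for both buttons, so rejecting is as easy as accepting.
    button.style.cssText = "margin:4px;padding:8px 16px;font-size:1rem;";
    button.addEventListener("click", () => {
      saveConsent(choice);
      banner.remove();
    });
    return button;
  };

  banner.appendChild(makeButton("Accept all", "accepted"));
  banner.appendChild(makeButton("Reject all", "rejected"));
  document.body.appendChild(banner);
}

// Only show the banner if no choice has been recorded yet.
if (!localStorage.getItem("adConsent")) {
  renderConsentBanner();
}
```

The point of the sketch is the symmetry: both options are a single click with identical presentation, and tailored advertising is only served where the recorded choice is “accepted.”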

The ICO’s guidance on cookie banners can be found here; it may need to be further updated in light of the newly presented Data Protection and Digital Information Bill.

First publication: Cyber Law Watch Blog with Sophie Verstraeten

Post-Brexit, businesses have needed to rethink how they approach demonstrating compliance with a host of regulations, managing international data transfers, and building trust with data subjects. Having to comply with the UK GDPR and prepare for other data protection bills, all while continuing to comply with the EU GDPR as well as a host of global regulations, means businesses might look to certification as a common system for adequacy, a one-stop shop for addressing the overlaps and, more crucially, closing the gaps in their privacy compliance programs.

Featured speakers:

  • Noshin Khan, Senior Compliance Counsel, Ethics Center of Excellence, OneTrust 
  • Claude-Étienne Armingaud, Partner, K&L Gates

Register here.

The UK Government has laid adequacy regulations before Parliament that, once in force from 12 October 2023, will permit use of the UK – US “Data Bridge” as a safeguard for personal data transfers from the UK to the US under Article 44 UK GDPR.

The UK – US “Data Bridge,” AKA the UK Extension to the EU – US Data Privacy Framework (Framework), allows UK organisations to transfer personal data to organisations located in the United States that have self-certified their compliance with certain data protection principles and appear on the Data Privacy Framework List. This scheme, administered by the US Department of Commerce, provides a redress mechanism for data subjects in the European Union to enforce their rights under the EU General Data Protection Regulation, both in relation to a participating US organisation’s compliance with the Framework and in relation to US national security agencies’ access to personal data. This new redress mechanism attempts to prevent a challenge to the Framework similar to the Schrems II case, which invalidated the Framework’s predecessor, the EU – US Privacy Shield. Despite this, the Framework has already been the subject of a short-lived case at the Court of Justice of the EU, and there may be more legal challenges.

Alongside the adequacy regulations, the UK government published an analysis of the US laws relating to US national security agencies’ access to the personal data of European data subjects. This analysis effectively completes the international data transfer risk assessment (TRA), which UK organisations have been required to carry out before transferring personal data to the US. It is likely that UK organisations relying on the other Article 44 UK GDPR safeguards, such as the International Data Transfer Agreement, may also rely on this analysis in place of completing a TRA.

First publication: K&L Gates Cyber Law Watch Blog in collaboration with Noirin McFadden

In this webinar, our lawyers discuss generative artificial intelligence (AI). Fast-paced growth in generative AI is changing the way we work and live. With such changes come complex issues and uncertainty. We will address the legal, policy, and ethical risks, mitigation strategies, and best practices to consider as you develop generative AI products and services, or use generative AI in the operation of your business.

With Annette Becker, Guillermo Christensen, Whitney McCollum, Julie Rizzo, and Mark Wittow

If you were not able to join last Tuesday, you can watch the replay below:

Source: K&L Gates Hub

Access the full text of the EU AI Act here.

On 14 June 2023, the European Parliament (Parliament) plenary voted on its position on the Artificial Intelligence Act (AI Act), which was adopted by a large majority, with 499 votes in favor, 28 against, and 93 abstentions. The newly adopted text (Parliament position) will serve as the Parliament’s negotiating position during the forthcoming interinstitutional negotiations (trilogues) with the Council of the European Union (Council) and the European Commission (Commission).

The members of Parliament (MEPs) proposed several changes to the Commission’s proposal, published on 21 April 2021, including expanding the list of high-risk uses and prohibited AI practices. Specific transparency and safety provisions were also added on foundation models and generative AI systems. MEPs also introduced a definition of AI that is aligned with the definition provided by the Organisation for Economic Co-operation and Development. In addition, the text reinforces natural persons’ (or their groups’) right to file a complaint about AI systems and receive explanations of decisions based on high-risk AI systems that significantly impact their fundamental rights.

Definition

The Parliament position provides that AI, or an AI System, should refer to “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments.” This amends the Commission’s proposal, in which an AI System was limited to software acting for human-defined objectives, and now encompasses the metaverse through the explicit inclusion of “virtual environments.”

Agreement on the final version of the definition of AI is expected to be reached at the technical level during trilogue negotiations, as it appears to be a noncontentious item.

Another notable inclusion relates to foundation models (Foundation Models), which were not yet in the public eye when the Commission’s proposal was published and which are defined as a subset of AI Systems that are trained on broad data at scale, are designed for generality of output, and can be adapted to a wide range of distinctive tasks.


Speakers:

  • Zelda Olentia, Senior Product Manager, RadarFirst
  • Claude-Étienne Armingaud, CIPP/E, Partner, Data Protection, Privacy, and Security Practice Group Coordinator, K&L Gates LLP

Air Date: Wednesday 14 June at 1 pm ET / 10 am PT. Replay on demand available here!

Description

Gartner predicts that by the end of 2024, 75% of the world’s population will have its personal data covered under modern privacy regulations. This exponential increase from only 10% global coverage in 2020 raises the stakes for global organizations. The challenge will be to ensure compliance, while safeguarding trust for an unprecedented volume of regulated data.

Join the upcoming live Q&A to learn what’s driving this expansion and how to prepare. You’ll hear from Zelda Olentia, Senior Product Manager at RadarFirst, and special guest Claude-Etienne Armingaud, a partner at K&L Gates LLP and the coordinator of the firm’s Data Protection, Privacy, and Security practice group.

In this session we will cover:

  • What is driving the expansion of privacy regulation?
  • Where are we on this path towards 75% global coverage?
  • How do you scale privacy operations for international privacy laws quickly and effectively before year-end 2024?

Register Now >>