K&L Gates LLP covers a myriad of IT and internet issues, from GDPR compliance to contract negotiation. The firm is notable for its expertise in IP and data protection matters, as well as, increasingly, AI, NFT, and blockchain issues. The practice is led by Claude-Etienne Armingaud, who is dual-qualified in France and the US and is consequently well placed to handle multi-jurisdictional transactions.

Practice head(s): Claude-Etienne Armingaud


Part IV of our series “Regulating AI: The Potential Impact of Global Regulation of Artificial Intelligence” will focus on recent developments in the general availability of AI and on how generative AI solutions are leading regulators, at a global level, to consider legal frameworks to protect both individuals affected by AI and digital sovereignty.

The program will feature a panel addressing the EU AI Act, on which a preliminary political agreement was reached last December and which was unanimously approved by the ambassadors of the 27 EU Member States on 2 February 2024, ahead of its upcoming final votes.

Like the GDPR before it, the EU AI Act will be a trailblazing piece of legislation that will impact companies at a global level.

Our panelists will discuss the consequences of the EU AI Act on companies contemplating the provision of AI solutions in the EU market or leveraging AI in the EU, with a special focus on non-EU companies.

Additional topics in our Regulating AI — The Potential Impact of Global Regulation of Artificial Intelligence series include:  

  • Part I – 13 September 2023 (EU / U.K.) – View Recording
  • Part II – 7 December 2023 (Asia-Pacific Region: China, Hong Kong, Singapore, Japan) – View Recording
  • Part III – 12 December 2023 (United States)

Register or watch the replay here.

The Information Commissioner’s Office (ICO) recently launched a consultation series on how data protection laws should apply to the development and use of generative AI models (“Gen AI”). In the coming months, the ICO will publish further views on how to interpret specific requirements of UK GDPR and Part 2 of the DPA 2018 in relation to Gen AI. This first part of the consultation focusses on whether it is lawful to train Gen AI on personal data scraped from the web. The consultation seeks feedback from stakeholders with an interest in Gen AI.

As outlined by the ICO, web scraping will involve the collection and processing of personal data, which may not have been placed online directly by the data subjects themselves. To comply with the UK GDPR, Gen AI developers would need to ensure there is a valid lawful basis for their processing and comply with the relevant information requirements pertaining to the indirect collection of personal data.

For the first part of the consultation series, the ICO published a policy position on the lawful basis for training Gen AI models on web-scraped data, which can be found here. More specifically, this consultation focusses on the ‘legitimate interests’ lawful basis under art. 6(1)(f) UK GDPR and the ‘three-part’ test that a data controller must pass to rely on that basis (a so-called Legitimate Interests Assessment). The ICO has considered various actions that Gen AI developers could take to meet this three-part test and ensure that the collection of training data through web scraping, i.e. the processing of personal data, is compliant with the principles of the UK GDPR. The ICO would now like to hear from relevant stakeholders for their views on the proposed regulatory approach and the impact it would have on their organisations. A link to the survey can be found here.

The deadline to submit a response is 1 March 2024.

First publication: K&L Gates Cyber Law Watch blog with Sophie Verstraeten

Join us for this webinar as we explore the implications of the EU AI Act.

Featured speakers

Yücel Hamzaoğlu | Partner | HHK Legal

Melike Hamzaoğlu | Partner | HHK Legal

Claude-Étienne Armingaud | Partner | K&L Gates

Noshin Khan | Ethics & Compliance, Associate Director | OneTrust

Harry Chambers | Senior Privacy Analyst | OneTrust

Register here.

Quoted in Agenda article “New EU AI Rules Will Have Global Impact”:

The EU AI Act will apply to all companies whose AI systems are used by, or affect, EU-based individuals, according to Claude-Etienne Armingaud, a partner in K&L Gates’ Paris office and a member of the law firm’s technology transactions and sourcing practice group.

Due to its breadth, global companies developing AI systems, most of which are headquartered either in the U.S. or in China, will face two options: “Get in line with the EU AI Act or abstain from the EU market,” Armingaud said.

Some companies threatened to exit the European market after the EU’s General Data Protection Regulation, or GDPR, became effective in 2018, but many didn’t actually follow through, according to Armingaud.

“So, without a doubt, all companies dabbling in AI will need to comply if they truly want to remain global,” he said.

Agenda – New EU AI Rules Will Have Global Impact

This panel session will focus on the growing concern over the ethical use of Artificial Intelligence (AI) and its impact on privacy. The panelists will discuss the role of accountability in developing responsible AI practices and the potential risks of AI systems when not properly regulated. They will also explore the importance of transparency and the need for data privacy regulations in the development and deployment of AI technologies. The session will provide insights into best practices for AI governance and how organizations can ensure the ethical use of AI while still benefiting from its potential.

Co-Panelists:

#AI #ArtificialIntelligence #gdpr #ethics #dataprotection #regulation #insights23 #pecb #Privacy #Accountability

This series of webinars will address the potential impacts of artificial intelligence (AI) regulations on businesses across the globe. Recent developments in the general availability of AI and generative AI solutions are leading regulators, at a global level, to consider legal frameworks to protect both individuals affected by AI and digital sovereignty. Our panelists will address these potential regulatory developments, as well as the expected timeline for these changes, region by region.

Our first panel will feature a discussion focused on current and future regulatory requirements on the AI industry throughout the EU and the UK. With the language of the EU’s AI Act heading into its trilogue, it is even more important for stakeholders to understand the EU’s approach and prepare for the potential impact of this regulation in Europe, the United Kingdom, and beyond. The panelists will address key questions, such as:

  • What new obligations will bear on stakeholders in this industry?
  • Will government regulation be “technology neutral”?
  • Could the various frameworks lead to conflicts for local compliance efforts?
  • Will a requirement for an AI system to explain its thinking or provide substantive sources for all results have a deleterious impact on its ability to “think” independently?  
  • Is it too late for stakeholders to have a say in these expected frameworks?

Speakers:

Claude-Étienne Armingaud | PARTNER | PARIS

Giovanni Campi | POLICY DIRECTOR | BRUSSELS

Jennifer Marsh | PARTNER | LONDON

Register here: K&L Gates Website

Watch the recording here.

August may be perceived as the month when France shuts down for the summer. Yet, just before the summer ’23 holiday, the French Data Protection Authority (“CNIL”) published several calls to action for the various players of the data ecosystem in general and in artificial intelligence (AI) in particular, following its 16 May 2023 announcement of an AI action plan:

  • Opening and re-use of publicly accessible data – The CNIL published draft guidance on the use of such data, and all stakeholders are invited to weigh in until 15 October 2023, before its finalization. While non-binding, this guidance is expected to lead the way on how EU Supervisory Authorities will apprehend and enforce the General Data Protection Regulation (“GDPR”) when personal data is scraped from online sources and subsequently re-used for other purposes. The draft notably focuses on Art. 14 GDPR, the indirect collection of personal data, and the specific prior information requirements. Artificial intelligence is explicitly mentioned by the CNIL in the draft, as such data, which feeds large-language models, “undeniably contributes to the development of the digital economy and is at the core of artificial intelligence.” Stakeholders are invited to submit their observations online through the dedicated portal.
  • Artificial Intelligence Sandbox – Following in the footsteps of its connected cameras, EdTech & eHealth initiatives, the CNIL is launching an AI sandbox call for projects, where stakeholders involved in AI in connection with public services may apply to receive dedicated assistance by the regulator to co-construct AI systems complying with data protection and privacy rules.
  • Creation of databases for Artificial Intelligence uses – Open to the broadest possible array of stakeholders (including individuals), this call for contributions notably addresses the specific issue relating to the use of publicly accessible data and aims to inform the CNIL of the various positions at play and of how to balance the GDPR’s requirements (information, legitimate interests, exercise of rights) with data subjects’ expectations. Stakeholders are invited to submit their observations online through the dedicated form (in French – our free translation into English is available below); no deadline for submission has been set.

In this webinar, our lawyers discuss generative artificial intelligence (AI). Fast-paced growth in generative AI is changing the way we work and live. With such changes come complex issues and uncertainty. We will address the legal, policy, and ethical risks, mitigation strategies, and best practices to consider as you develop generative AI products and services, or use generative AI in the operation of your business.

With Annette Becker, Guillermo Christensen, Whitney McCollum, Julie Rizzo, and Mark Wittow

If you were not able to join last Tuesday, you can watch the replay below:

Source: K&L Gates Hub

On 14 June 2023, the European Parliament (Parliament) plenary voted on its position on the Artificial Intelligence Act (AI Act), which was adopted by a large majority, with 499 votes in favor, 28 against, and 93 abstentions. The newly adopted text (Parliament position) will serve as the Parliament’s negotiating position during the forthcoming interinstitutional negotiations (trilogues) with the Council of the European Union (Council) and the European Commission (Commission).

The members of Parliament (MEPs) proposed several changes to the Commission’s proposal, published on 21 April 2021, including expanding the list of high-risk uses and prohibited AI practices. Specific transparency and safety provisions were also added on foundation models and generative AI systems. MEPs also introduced a definition of AI that is aligned with the definition provided by the Organisation for Economic Co-operation and Development. In addition, the text reinforces the right of natural persons (or groups of natural persons) to file a complaint about AI systems and to receive explanations of decisions based on high-risk AI systems that significantly impact their fundamental rights.

Definition

The Parliament position provides that AI, or an AI System, should refer to “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments.” This amends the Commission’s proposal, in which an AI System was limited solely to software acting for human-defined objectives, and now encompasses the metaverse through the explicit inclusion of “virtual environments.”

Agreement on the final version of the definition of AI is expected to be reached at the technical level during trilogue negotiations, as it appears to be a noncontentious item.

Another notable inclusion relates to foundation models (Foundation Models), which were not yet in the public eye when the Commission’s proposal was published and are now defined as a subset of AI Systems that is “trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks.”
