We are thrilled to share our newest guide – and honor the 500 global cyber lawyers recognized here.

There is little in life or law that isn’t entangled in the digital world these days. And these are the lawyers who connect it all – data and security, innovation and inspiration, litigation and exploration.

Defining what exactly a leading cyber lawyer is was part of the mission. The core of this guide is the privacy and data security specialists who began forming this practice well over a decade ago, as companies experienced data breaches and attorneys scurried to become designated as privacy specialists.

But a deep dive into what lawyers and firms globally consider to be cybersecurity these days uncovered a world robust with former intelligence officers, hackers, government officials learning to parse competition claims regarding data sets, tech dealmakers turned to for their way around cyber protocols as deal points, and litigators who defend Big Tech from claims of biometric privacy invasion. Also robustly represented are former prosecutors and other government leaders whose portfolios detail vast experience in cybercrime security and prosecution.

Take TikTok as an example of the vast layers of legal regulation, national security, technology and consumer protection embedded in its affairs. We’re fascinated to watch the legal teams assemble to parse demands it be sold.

Specialists in regulation of drones and autonomous vehicles are represented here, alongside the legal world’s leading minds in national security, who guide on often old battles fought with new weapons.

To create this list – our inaugural edition – we weighed nominations, independent research and the views of peers. This guide is 39 percent female and 19 percent inclusive. Those noted by an asterisk are esteemed members of our Hall of Fame.

Source: Lawdragon

K&L Gates’s expertise in data and tech work has recently seen it advise on matters as diverse as the impact of AI and machine learning projects on personal data retention and transparency, and the implications of augmented reality make-up applications and smart fragrances. While the firm has some significant tech companies on board, its client base skews more heavily towards guiding traditional industries through digital transformation.
The data protection, privacy and security practice has multiple leaders, reflecting its wide geographic spread.
Claude-Étienne Armingaud in Paris, who is a dual-qualified French and New York lawyer, is a stand-out name: besides GDPR and privacy compliance, he also has extensive experience advising on tech transactions, for example relating to software, blockchain, connected cars and more. Other partners leading the practice are Cameron Abbott in Melbourne; Shannan Frisbie, Whitney McCollum, David Bateman and Carley Andrews in the firm’s Seattle headquarters; Bruce Heiman in Washington, DC; Limo Cherian in Chicago; Gina Bertolini and Leah Richardson in the Research Triangle Park office in North Carolina; and Sarah Turpin in London.

The K&L Gates practice’s senior ranks grew with the addition of San Francisco partner Michael Stortz, who was formerly at Akin Gump. Thomas Nietsch was promoted to the partnership in Berlin. The firm also hired counsel Veronica Muratori in Milan from Withersworldwide; Avril Love in Los Angeles from Tucker Ellis; and Ulrike Elteste in Frankfurt from Covington & Burling.

Client references


“K&L Gates has deep expertise and knowledge in this area and is always responsive. Advice is always timely and well-considered.”


“Collaboration with K&L Gates is always seamless. The team have deep knowledge of privacy laws and regulations, but they also understand the business impact of their advice. This sets them apart from other firms in the market.”

First publication: Lexology GDR100

A Practice Note highlighting issues to consider when counseling a prospective buyer of an AI company. This Note discusses the primary due diligence issues relating to AI and machine learning (ML) and strategies to mitigate or allocate risks in the context of an M&A transaction. This Note is also helpful for AI company targets that seek to anticipate potential issues. In this Note, the term AI company refers to a company involved in the research, development, or monetization of a product or service that is primarily powered by an ML algorithm or model that creates functionality or utility through the use of AI.

Read the full article on Practical Law, written in collaboration with Annette Becker, Alex V. Imas, Jake Bernstein, Mark H. Wittow, Melanie Bruneau, Marion Baumann, Kenneth S. Knox, Julie F. Rizzo, Cameron Abbott, Thomas Nietsch, and Nicole H. Buckley.

The ‘young and innovative team’ at K&L Gates LLP has a strong reputation in the market for its ability to handle data protection liability issues in M&A transactions and global data protection compliance mandates. Areas of activity for the practice include machine learning, autonomous driving and blockchain-based services. Claude-Étienne Armingaud heads up the team and is described as a ‘fount of knowledge on the subject of privacy and data protection’. He is frequently sought out by clients from the software industry for assistance with cross-border technology transactions.

Leading individuals: Claude-Etienne Armingaud – K&L Gates LLP

Practice head(s): Claude-Etienne Armingaud

Other Key Lawyer(s): Camille Scarparo


The Information Commissioner’s Office (ICO) has recently published guidance for employers on monitoring workers lawfully, transparently and fairly. The guidance aims to protect workers’ data protection rights and help employers build trust with workers, customers and service users. With artificial intelligence (AI) on the rise, employers may be tempted to leverage these emerging technologies for workplace monitoring. This alert summarises some specific steps employers should prioritise in light of the ICO guidance.


Part IV of our series “Regulating AI: The Potential Impact of Global Regulation of Artificial Intelligence” will focus on recent developments in the general availability of AI and how generative AI solutions are leading regulators, at a global level, to consider legal frameworks to protect both individuals affected by AI and digital sovereignty.

The program will feature a panel addressing the EU AI Act, on which a preliminary political agreement was reached last December and unanimously approved by the ambassadors of the 27 countries of the European Union on 2 February 2024, prior to its upcoming final votes.

Like the GDPR before it, the EU AI Act will be a trailblazing piece of legislation that will impact companies at a global level.

Our panelists will discuss the consequences of the EU AI Act on companies contemplating the provision of AI solutions in the EU market or leveraging AI in the EU, with a special focus on non-EU companies.

Additional topics in our “Regulating AI: The Potential Impact of Global Regulation of Artificial Intelligence” series include:

  • Part I – 13 September 2023 (EU / U.K.) – View Recording
  • Part II – 7 December 2023 (Asia-Pacific Region: China, Hong Kong, Singapore, Japan) – View Recording
  • Part III – 12 December 2023 (United States)

Register or watch the replay here.

Access the full text of the EU AI Act here.

The Information Commissioner’s Office (ICO) recently launched a consultation series on how data protection laws should apply to the development and use of generative AI models (“Gen AI”). In the coming months, the ICO will publish further views on how to interpret specific requirements of UK GDPR and Part 2 of the DPA 2018 in relation to Gen AI. This first part of the consultation focusses on whether it is lawful to train Gen AI on personal data scraped from the web. The consultation seeks feedback from stakeholders with an interest in Gen AI.

As outlined by the ICO, web scraping will involve the collection and processing of personal data, which may not have been placed online directly by the data subjects themselves. To comply with the UK GDPR, Gen AI developers would need to ensure there is a valid lawful basis for their processing, as well as comply with the relevant information requirements pertaining to indirect personal data collection.

For the first part of the consultation series, the ICO published a policy position on the lawful basis for training Gen AI models on web-scraped data, which can be found here. More specifically, this consultation focusses on the ‘legitimate interest’ lawful basis under art. 6(1)(f) UK GDPR and the ‘three-part’ test that a data controller must pass to meet the legitimate interest basis (a so-called Legitimate Interest Assessment). The ICO has considered various actions that Gen AI developers could take to meet this three-part test to guarantee that the collection of training data through web scraping, i.e. the processing of personal data, is compliant with the principles of the UK GDPR. The ICO would now like to hear from relevant stakeholders on their views of the proposed regulatory approach and the impact it would have on their organisation. A link to the survey can be found here.

The deadline to submit a response is 1 March 2024.

First publication: K&L Gates Cyber Law Watch blog, co-authored with Sophie Verstraeten

Join our session as we explore the implications of the EU AI Act with the featured speakers below.

Featured speakers

  • Yücel Hamzaoğlu, Partner, HHK Legal
  • Melike Hamzaoğlu, Partner, HHK Legal
  • Claude-Étienne Armingaud, Partner, K&L Gates
  • Noshin Khan, Associate Director, Ethics & Compliance, OneTrust
  • Harry Chambers, Senior Privacy Analyst, OneTrust

Register here.

Quoted in Agenda article “New EU AI Rules Will Have Global Impact”:

The EU AI Act will apply to all companies whose AI systems are used by, or affect, EU-based individuals, according to Claude-Étienne Armingaud, a partner in K&L Gates’ Paris office and a member of the law firm’s technology transactions and sourcing practice group.

Due to its breadth, global companies developing AI systems, most of which are headquartered either in the U.S. or in China, will face two options: “Get in line with the EU AI Act or abstain from the EU market,” Armingaud said.

Some companies threatened to exit the European market after the EU’s General Data Protection Regulation, or GDPR, became effective in 2018, but many didn’t actually follow through, according to Armingaud.

“So, without a doubt, all companies dabbling in AI will need to comply if they truly want to remain global,” he said.

Agenda – New EU AI Rules Will Have Global Impact

It has been some time since the EU Digital Services Act (Regulation 2022/2065, DSA) was published, and since then, discussions about Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) have dominated the media coverage (see the European Commission’s initial press release here and coverage of VLOPs’ and VLOSEs’ petitions against their categorization here and here).

Smaller online service providers tend to forget that they may also face new obligations under the DSA from 17 February 2024 onwards, but they would be well advised to comply in order to avoid significant sanctions (e.g., fines of up to 6% of global annual turnover or periodic penalty payments of up to 5% of global average daily turnover).

The following paragraphs provide a brief summary of the most relevant content of the DSA and will help online service providers to understand:

  • If and to what extent the DSA applies to them;
  • What specific obligations exist; and
  • What sanctions may be applied in case of breach.