In the consumer-privacy arena, cases with similar legal theories tend to start as a ripple and then, after some successes testing the theory, emerge as a full-blown wave. Under the current playbook, plaintiffs’ firms identify certain technologies with which consumers regularly interact and then search for statutes (often enacted for different purposes long before the technologies even existed) and common law theories to support class action suits or mass arbitration campaigns.
The past five years contain multiple examples of this exact strategy playing out. Back in 2020, plaintiffs’ firms began weaponizing archaic state and federal wiretap statutes to attack corporations using “session replay” technology, which tracks users’ clicks and mouse movements in an effort to better understand website interactions and customer conversion metrics. A wave of hundreds of class actions swept across the country, clustering mostly in Florida and California. Not long after, similar wiretap theories were used to target website chatbots. Plaintiffs’ counsel then shifted their focus to the use of website pixels, which are pieces of code embedded on a website that track certain activity and, in some cases, connect with a user’s existing social media accounts to facilitate targeted advertising and increase customer conversion. The resulting volume of litigation related to website pixels – several hundred class actions and even more mass arbitration campaigns – flooded state and federal courthouses. Over time, that litigation crystallized around certain industries, including healthcare, media, and sports and entertainment (via wiretap laws and another fairly archaic statute, the federal Video Privacy Protection Act). Based on these consumer-privacy litigation trends, it is important to consider what the next wave will be and when it will crash ashore.
Enter artificial intelligence (AI). A tremendous amount of ink has already been spilled about how AI – and generative AI (GenAI) in particular – will reshape corporate operational functions. New AI-based technologies and tools are coming to market with incredible speed and gaining widespread corporate adoption. As was the case with the adoption of third-party analytics software, the privacy risks attendant to these new tools may be starting to materialize.
In the past month, a new trend seems to have emerged: class action suits targeting GenAI tools that provide analytics on customer service calls for call centers. Generally, these tools offer “conversation intelligence” that, according to plaintiffs, uses GenAI to transcribe, summarize and otherwise assist with customer service calls in real time. The origins of this trend actually date back to late 2023, when a case was filed against Google based on a similar product it developed. See Ambriz, et al. v. Google LLC, No. 3:23-cv-05437 (N.D. Cal.). In Ambriz, plaintiffs alleged that Google’s tool (i.e., the Google Cloud Contact Center AI) – which provides a virtual agent that interacts with customers, transcribes the conversations and provides a human agent with suggestions and “smart replies” – violated the California Invasion of Privacy Act (CIPA) by eavesdropping on their conversations. After a couple of iterations of the complaint, plaintiffs survived a motion to dismiss in February 2025, allowing the case to proceed.
Plaintiffs’ firms now appear to be capitalizing on the early success in Ambriz, with several similar cases filed in the last month. As plaintiffs would tell it, these GenAI call center tools “eavesdrop[] on a customer’s call, transcribe[] it using natural language processing, and feed[] the information into its artificial intelligence to read the text, identify patterns, and classify the data.” Plaintiffs claim that, unbeknownst to them and the putative class, these GenAI tools “eavesdrop” on their conversations without their consent (despite being informed that the call may be recorded) in violation of CIPA, the same statute that has long plagued businesses over their use of the website tools at issue in each of the prior trends mentioned above. Although plaintiffs’ lawyers are seeking relief under CIPA, they have filed suit in multiple district courts around the country, and more than a dozen states have similar wiretap laws. So far, these suits have targeted the developers of the at-issue GenAI tools but, given the number of investigations underway targeting the users of those tools, the scope of risk seems set to expand quickly. In fact, there are already daily social media advertisements from plaintiffs’ firms searching for California consumers who have interacted with restaurant call centers. This recent flurry of activity raises the question of whether these claims will become the next wave of wiretap litigation.
The use of AI tools to support operational functions is likely here to stay, but organizations adopting them should be mindful of the attendant risks. AI-driven employee screening tools present new litigation risks based on claims of discrimination and hiring bias. AI agents and email summary tools can create new security vulnerabilities that could ultimately lead to data breach class actions. False or exaggerated representations about the effectiveness of AI tools, or the collection and use of data in ways inconsistent with a company’s privacy policy, could lead to regulatory enforcement actions, as evidenced by the recent Operation AI Comply initiative by the Federal Trade Commission (FTC). And, as is clear from this new burst of litigation, even AI call recording and transcription tools can lead to class action litigation.
Organizations are not defenseless in facing these risks, particularly when it comes to privacy. For example, when onboarding GenAI tools, organizations are wise to assess the indemnification and liability-limiting provisions in their vendor agreements. Further, organizations should consult with outside defense counsel to consider whether similar tools have already been targeted in privacy litigation or are likely to be targeted based on current and historical trends. In addition, good privacy hygiene can go a long way. This includes ensuring privacy policies accurately reflect the collection and use of information, as well as obtaining and documenting express consent from users and customers.
Holland & Knight Can Help
Holland & Knight’s Data Strategy, Security & Privacy Team has decades of experience defending lawsuits involving the loss, theft or misuse of personal information. If you have any questions regarding best practices for handling customer information or defending data privacy litigation, contact the authors or Partner Mark Melodia, chair of Holland & Knight’s Data Strategy, Security & Privacy Team.