Up Next in Privacy Litigation: Class Actions Begin to Target Consumer-Facing Companies Using Generative AI Tools | Insights

By Advanced AI Editor | July 15, 2025 | 6 Mins Read


In the consumer-privacy arena, cases with similar legal theories tend to start as a ripple and then, after some successes testing the theory, emerge as a full-blown wave. Under the current playbook, plaintiffs’ firms identify certain technologies with which consumers regularly interact and then search for statutes and common law theories (often enacted for different purposes long before the technologies even existed) to support class action suits or mass arbitration campaigns.

The past five years contain multiple examples of this exact strategy playing out. Back in 2020, plaintiffs’ firms began weaponizing archaic state and federal wiretap statutes to attack corporations using “session replay” technology, which tracks users’ clicks and mouse movements in an effort to better understand website interactions and customer conversion metrics. A wave of hundreds of class actions swept across the country, clustering mostly in Florida and California. Not long after, similar wiretap theories were used to target website chatbots. Plaintiffs’ counsel then shifted their focus to the use of website pixels, which are pieces of code embedded on a website that track certain activity and, in some cases, connect with a user’s existing social media accounts to facilitate targeted advertising and increase customer conversion. The resulting volume of litigation related to website pixels – several hundred class actions and even more mass arbitration campaigns – flooded state and federal courthouses. Over time, that litigation crystallized around certain industries, including healthcare, media, and sports and entertainment (via wiretap laws and another fairly archaic statute, the federal Video Privacy Protection Act). Based on these consumer-privacy litigation trends, it is important to consider what the next wave will be and when it will crash ashore.
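For readers unfamiliar with the underlying mechanics, a website pixel of the kind described above is typically just a small piece of client-side code that requests an invisible image from a third-party server, passing visitor details along in the request URL. The sketch below is a minimal, hypothetical illustration (the endpoint, parameters, and browser environment are assumptions for this example), not any particular vendor's implementation.

```typescript
// Minimal sketch of a website tracking pixel (hypothetical endpoint and parameters).
// Setting the src on a 1x1 image sends page and session details to a third-party
// analytics server; nothing visible is rendered for the visitor. Assumes a browser context.
function fireTrackingPixel(sessionId: string): void {
  const params = new URLSearchParams({
    sid: sessionId,                 // session identifier assigned by the site
    page: window.location.pathname, // which page the visitor is viewing
    ref: document.referrer,         // where the visitor came from
    ts: Date.now().toString(),      // timestamp of the page view
  });
  const pixel = new Image(1, 1);
  // The request fires as soon as src is set; no DOM insertion is required.
  pixel.src = `https://analytics.example.com/pixel.gif?${params.toString()}`;
}

fireTrackingPixel(crypto.randomUUID());
```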

Enter artificial intelligence (AI). A tremendous amount of ink has already been spilled about how AI – and generative AI (GenAI) in particular – will reshape corporate operational functions. New AI-based technologies and tools are coming to market with incredible speed and gaining widespread corporate adoption. As was the case with the adoption of third-party analytics software, privacy risks may be starting to materialize.

In the past month, a new trend seems to have emerged in which class action suits are targeting GenAI tools used to provide analytics on customer service calls for call centers. Generally, these tools offer “conversation intelligence” that, according to plaintiffs, uses GenAI to transcribe, summarize and otherwise assist with customer service calls in real time. The origins of this trend actually date back to late 2023, when a case was filed against Google based on a similar product that it developed. See Ambriz, et al. v. Google LLC, 3:23-cv-05437 (N.D. Cal.). In Ambriz, plaintiffs alleged that Google’s tool (i.e., the Google Cloud Contact Center AI) – which provides a virtual agent that interacts with customers, transcribes the conversations and provides a human agent with suggestions and “smart replies” – violated the California Invasion of Privacy Act (CIPA) by eavesdropping on their conversations. After a couple of iterations of the complaint, in February 2025, plaintiffs survived a motion to dismiss, allowing the case to proceed.

Plaintiffs’ firms now appear to be capitalizing on the early success in Ambriz with several similar cases filed in the last month. As plaintiffs would tell it, these GenAI call center tools “eavesdrop[] on a customer’s call, transcribe[] it using natural language processing, and feed[] the information into its artificial intelligence to read the text, identify patterns, and classify the data.” Plaintiffs claim that, unbeknownst to them and the putative class, these GenAI tools “eavesdrop” on their conversations without their consent (despite being informed that the call may be recorded) in violation of the CIPA, a statute that has long plagued businesses over their use of the website tools at issue in each of the prior trends mentioned above. Although plaintiffs’ lawyers are seeking relief under CIPA, they have filed suit in multiple district courts around the country, and more than a dozen states have similar wiretap laws. So far, these suits have targeted the developers of the at-issue GenAI tools, but, given the number of current investigations underway targeting the users of those tools, the scope of risk seems set to expand quickly. In fact, there are already daily social media advertisements from plaintiffs’ firms searching for California consumers who have interacted with restaurant call centers. This recent flurry of activity raises the question of whether these claims will become the next wave of wiretap litigation.
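To make plaintiffs' description concrete, the sketch below traces the data flow they object to: live call audio is transcribed, the text is fed to a generative model that identifies patterns and classifies the data, and the output is surfaced to the agent while the call is in progress. Every name and type below is a hypothetical placeholder for illustration, not any vendor's actual API.

```typescript
// Illustrative sketch of a "conversation intelligence" pipeline as plaintiffs describe it.
// All functions are stubs; a real product would call external speech-to-text and GenAI services.

interface CallAnalysis {
  transcript: string;          // text produced from the customer's audio
  summary: string;             // running recap surfaced to the human agent
  topics: string[];            // patterns/classifications detected in the conversation
  suggestedReplies: string[];  // "smart replies" offered to the agent in real time
}

// Placeholder speech-to-text step applied to the customer's audio.
async function transcribe(audioChunk: Uint8Array): Promise<string> {
  return `transcript of ${audioChunk.length} bytes of audio`;
}

// Placeholder GenAI step that reads the text, identifies patterns, and classifies the data.
async function analyze(transcript: string): Promise<CallAnalysis> {
  return {
    transcript,
    summary: "stub summary",
    topics: ["billing question"],
    suggestedReplies: ["Offer to escalate to a supervisor."],
  };
}

// The privacy theory turns on this flow: customer audio is captured and routed to a
// third party's AI service while the call is still in progress.
async function handleLiveCall(audio: AsyncIterable<Uint8Array>): Promise<void> {
  for await (const chunk of audio) {
    const analysis = await analyze(await transcribe(chunk));
    console.log(analysis.summary, analysis.suggestedReplies);
  }
}

// Demo driver with a single stand-in audio frame.
async function* demoAudio(): AsyncIterable<Uint8Array> {
  yield new Uint8Array(1600);
}
void handleLiveCall(demoAudio());
```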

The use of AI tools to support operational functions is likely here to stay, but organizations adopting them should be mindful of the attendant risks. AI-driven employee screening tools present new litigation risks based on claims of discrimination and hiring bias. AI agents and email summary tools can create new security vulnerabilities that could ultimately lead to data breach class actions. False or exaggerated representations about the effectiveness of AI tools, or collecting and using data in ways that are inconsistent with a company’s privacy policy, could lead to regulatory enforcement actions, as evidenced by the recent Operation AI Comply initiative by the Federal Trade Commission (FTC). And, as is clear from this new burst of litigation, even AI call recording and transcription tools can lead to class action litigation.

Organizations are not defenseless in facing these risks, particularly when it comes to privacy. For example, when onboarding GenAI tools, organizations are wise to assess potential indemnification and liability-limiting provisions. Further, organizations should consult with outside defense counsel to consider whether similar tools have already been targeted in privacy litigation or are likely to be targeted based on current and historical trends. In addition, good privacy hygiene can go a long way. This includes ensuring privacy policies accurately reflect the collection and use of information, as well as obtaining and documenting express consent from users and customers.
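As one concrete illustration of documenting express consent, the sketch below shows a minimal consent record capturing which disclosure a customer received, through what channel, and how they responded. The fields and in-memory store are assumptions for this example, not a prescription for any particular compliance program.

```typescript
// Minimal sketch of documenting express consent. A real deployment would persist
// these records in a durable, auditable store rather than an in-memory array.

interface ConsentRecord {
  userId: string;            // customer or caller identifier
  disclosureVersion: string; // exact version of the disclosure the user received
  channel: "web" | "phone";  // where the consent was obtained
  consented: boolean;        // the user's response
  timestamp: string;         // ISO 8601 time of the response
}

const consentLog: ConsentRecord[] = []; // stand-in for a durable audit store

function recordConsent(
  userId: string,
  disclosureVersion: string,
  channel: "web" | "phone",
  consented: boolean,
): ConsentRecord {
  const record: ConsentRecord = {
    userId,
    disclosureVersion,
    channel,
    consented,
    timestamp: new Date().toISOString(),
  };
  consentLog.push(record);
  return record;
}

// Example: a caller agrees after hearing hypothetical disclosure version "ivr-2025-07"
// stating that the call may be recorded and analyzed by an AI tool.
recordConsent("caller-1234", "ivr-2025-07", "phone", true);
```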

Holland & Knight Can Help

Holland & Knight’s Data Strategy, Security & Privacy Team has decades of experience defending lawsuits involving the loss, theft or misuse of personal information. If you have any questions regarding best practices for handling customer information or defending data privacy litigation, contact the authors or Partner Mark Melodia, chair of Holland & Knight’s Data Strategy, Security & Privacy Team.


