Advanced AI News

Legal gaps in AI are a business risk, not just a compliance issue

By Advanced AI Editor | July 14, 2025 | 4 Mins Read


A new report from Zendesk outlines a growing problem for companies rolling out AI tools: many aren’t ready to manage the risks. The AI Trust Report 2025 finds that while AI is moving into customer service and support, only 23% of companies feel highly prepared to govern it.


The report highlights concerns ranging from data privacy to model bias. But the core challenge is trust: when customers don’t understand or feel comfortable with how AI is used, they’re less likely to engage. And when companies don’t have frameworks in place, they expose themselves to legal, reputational, and operational fallout.

Compliance isn’t keeping up

One of the biggest concerns for legal teams is the fragmented nature of AI regulation. While the EU’s AI Act has taken center stage globally, many countries and U.S. states are rolling out their own frameworks. That means businesses need to comply with multiple, sometimes conflicting, sets of rules.

According to the report, only 20% of companies have a mature governance strategy for generative AI. That leaves most firms scrambling to build processes for consent, data handling, model oversight, and explainability, often after the tools are already in use.

For CISOs and CLOs, this late-stage involvement can be a problem. Legal reviews may come too late to shape system design or vendor choices, increasing the chances of a regulatory misstep.

Shana Simmons, Chief Legal Officer, Zendesk, told Help Net Security: “Our AI governance is built around core principles that apply across legal jurisdictions—like privacy and security by design, transparency and explainability, and customer control. We embed AI-specific governance steps directly into our product development process to ensure that risks are identified and mitigated, while minimizing bottlenecks for the majority of our AI features, which present limited risk.”

AI introduces new types of risk

Researchers outline several AI-specific threats that legal teams and CISOs must understand. These include:

Jailbreaking, where users try to get AI tools to say or do something they shouldn’t
Prompt injection, where attackers manipulate AI behavior through input
Hallucinations, where the AI generates incorrect or fabricated information
Data leakage, where sensitive information ends up in AI outputs

These risks go beyond typical IT threats. For example, if an AI model gives customers wrong answers or leaks personal information, the business could face both legal claims and reputational harm. And if that AI behavior cannot be explained or audited, defending those decisions becomes much harder.
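Some of these risks can be partially mitigated at the application layer. As a purely illustrative sketch (the function name and patterns below are hypothetical assumptions, not taken from the Zendesk report, and real deployments would rely on a vetted PII-detection service rather than regex alone), a support bot might screen model output for obvious personal data before it reaches a customer:

```python
import re

# Hypothetical data-leakage guard: redact obvious PII patterns
# (email addresses, phone-like numbers) from model output.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[REDACTED EMAIL]"),
    (re.compile(r"\b(?:\+?\d[\d\s().-]{7,}\d)\b"), "[REDACTED PHONE]"),
]

def redact_pii(model_output: str) -> str:
    """Return the model output with matched PII patterns redacted."""
    for pattern, replacement in PII_PATTERNS:
        model_output = pattern.sub(replacement, model_output)
    return model_output

print(redact_pii("Your account email is jane.doe@example.com"))
# -> Your account email is [REDACTED EMAIL]
```

A filter like this is only a last line of defense; it does not address jailbreaking or hallucinations, which require controls at the prompt, training, and review layers.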

Customers expect oversight

Customers are paying attention. Zendesk cites research showing that customers want to feel “respected, protected, and understood” when they interact with AI. That means companies must go beyond simple disclaimers or checkboxes.

Customers now expect to know when AI is involved, how it works, and what control they have over their data. If those expectations are not met, companies could see increased churn, customer complaints, or even class-action lawsuits—especially in regulated industries like healthcare or finance.

For legal teams, that raises new questions about product design, vendor contracts, and internal accountability. Who owns the risk when AI goes wrong? What happens if an agent relies on a flawed AI recommendation? These are business questions that CLOs and CISOs need to answer together.

What legal leaders can do now

Companies that treat AI governance as an afterthought are putting themselves at risk. For legal teams, the response needs to be proactive, not reactive. That means working closely with CISOs to:

Audit current AI deployments for gaps in transparency, fairness, or consent
Build flexible compliance frameworks that can adapt as laws evolve
Ensure vendors are contractually bound to governance standards
Participate early in AI product planning, not just final reviews

Most importantly, it means helping the business set guardrails. If a customer sues over an AI decision, the company should be able to show how that decision was made, who reviewed it, and what safeguards were in place.
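One way to make that demonstrable is to keep a structured audit record for each AI-assisted decision. The sketch below is illustrative only: the field names are assumptions for this example, not a regulatory standard or anything prescribed by the report.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """Illustrative audit record for one AI-assisted decision.
    Field names are assumptions for this sketch, not a standard."""
    decision_id: str
    model_name: str       # which model produced the recommendation
    prompt_summary: str   # what was asked (redacted as needed)
    output_summary: str   # what the model recommended
    human_reviewer: str   # who reviewed or approved the decision
    safeguards: list = field(default_factory=list)  # e.g. PII filter, policy check
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def to_audit_log_entry(record: AIDecisionRecord) -> dict:
    """Serialize the record for an append-only audit log."""
    return asdict(record)

entry = to_audit_log_entry(AIDecisionRecord(
    decision_id="dec-001",
    model_name="support-llm-v2",
    prompt_summary="Customer refund eligibility question",
    output_summary="Recommended refund per policy 4.2",
    human_reviewer="agent-jsmith",
    safeguards=["pii_filter", "policy_check"],
))
```

Even a simple record like this answers the three questions a court or regulator is likely to ask: what the system did, who reviewed it, and which safeguards applied.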

