Advanced AI News
Customer Service AI

Legal gaps in AI are a business risk, not just a compliance issue

By Advanced AI Editor | July 14, 2025 | 4 Mins Read


A new report from Zendesk outlines a growing problem for companies rolling out AI tools: many aren’t ready to manage the risks. The AI Trust Report 2025 finds that while AI is moving into customer service and support, only 23% of companies feel highly prepared to govern it.

The report highlights concerns ranging from data privacy to model bias. But the core challenge is trust: when customers don’t understand or feel comfortable with how AI is used, they’re less likely to engage. And when companies don’t have frameworks in place, they expose themselves to legal, reputational, and operational fallout.

Compliance isn’t keeping up

One of the biggest concerns for legal teams is the fragmented nature of AI regulation. While the EU’s AI Act has taken center stage globally, many countries and U.S. states are rolling out their own frameworks. That means businesses need to comply with multiple, sometimes conflicting, sets of rules.

According to the report, only 20% of companies have a mature governance strategy for generative AI. That leaves most firms scrambling to build processes for consent, data handling, model oversight, and explainability, often after the tools are already in use.

For CISOs and CLOs, this late-stage involvement can be a problem. Legal reviews may come too late to shape system design or vendor choices, increasing the chances of a regulatory misstep.

Shana Simmons, Chief Legal Officer, Zendesk, told Help Net Security: “Our AI governance is built around core principles that apply across legal jurisdictions—like privacy and security by design, transparency and explainability, and customer control. We embed AI-specific governance steps directly into our product development process to ensure that risks are identified and mitigated, while minimizing bottlenecks for the majority of our AI features, which present limited risk.”

AI introduces new types of risk

Researchers outline several AI-specific threats that legal teams and CISOs must understand. These include:

  • Jailbreaking, where users try to get AI tools to say or do something they shouldn’t
  • Prompt injection, where attackers manipulate AI behavior through crafted input
  • Hallucinations, where the AI generates incorrect or fabricated information
  • Data leakage, where sensitive information ends up in AI outputs

These risks go beyond typical IT threats. For example, if an AI model gives customers wrong answers or leaks personal information, the business could face both legal claims and reputational harm. And if that AI behavior cannot be explained or audited, defending those decisions becomes much harder.
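The last of these threat classes, data leakage, is also the most amenable to simple technical guardrails. As an illustrative sketch only (this is not Zendesk's implementation; the patterns, names, and thresholds below are assumptions for demonstration), an output filter might scan a model response for obvious PII before it reaches a customer:

```python
import re

# Illustrative patterns only; a production system would use a dedicated
# PII-detection service with locale-aware and context-aware rules.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_for_leakage(ai_output: str) -> list[str]:
    """Return the names of PII patterns found in a model response."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(ai_output)]

def safe_response(ai_output: str) -> str:
    """Withhold responses that appear to leak sensitive data."""
    hits = scan_for_leakage(ai_output)
    if hits:
        return f"[response withheld: possible {', '.join(hits)} leakage]"
    return ai_output
```

A filter like this is a last line of defense, not a substitute for governance: it catches the obvious leak, while consent, data handling, and model oversight have to be addressed upstream.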

Customers expect oversight

Customers are paying attention. Zendesk cites research showing that customers want to feel “respected, protected, and understood” when they interact with AI. That means companies must go beyond simple disclaimers or checkboxes.

Customers now expect to know when AI is involved, how it works, and what control they have over their data. If those expectations are not met, companies could see increased churn, customer complaints, or even class-action lawsuits—especially in regulated industries like healthcare or finance.

For legal teams, that raises new questions about product design, vendor contracts, and internal accountability. Who owns the risk when AI goes wrong? What happens if an agent relies on a flawed AI recommendation? These are business questions that CLOs and CISOs need to answer together.

What legal leaders can do now

Companies that treat AI governance as an afterthought are putting themselves at risk. For legal teams, the response needs to be proactive, not reactive. That means working closely with CISOs to:

  • Audit current AI deployments for gaps in transparency, fairness, or consent
  • Build flexible compliance frameworks that can adapt as laws evolve
  • Ensure vendors are contractually bound to governance standards
  • Participate early in AI product planning, not just final reviews

Most importantly, it means helping the business set guardrails. If a customer sues over an AI decision, the company should be able to show how that decision was made, who reviewed it, and what safeguards were in place.
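One practical step toward that kind of auditability is a structured decision log. The sketch below is hypothetical (the schema, class names, and fields are the author's illustration, not a product API); it shows the minimum a legal team would want recorded: what the model produced, who reviewed it, and which safeguards applied.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    """Minimal audit-trail entry for an AI-assisted decision (illustrative schema)."""
    decision_id: str
    model: str
    input_summary: str
    output_summary: str
    reviewer: str  # who signed off, or "automated" for low-risk flows
    safeguards: list = field(default_factory=list)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AuditLog:
    """Append-only trail of AI decisions, exportable for legal review."""
    def __init__(self):
        self._records = []

    def record(self, entry: AIDecisionRecord) -> None:
        self._records.append(entry)

    def export(self) -> str:
        """Serialize the full trail for a regulator or court."""
        return json.dumps([asdict(r) for r in self._records], indent=2)
```

With a log like this in place, the answer to "how was that decision made, and who reviewed it?" is a query rather than a reconstruction.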


