Advanced AI News
Customer Service AI

Gen AI has risks. How much would you pay for digital trust?

By Advanced AI Editor · September 18, 2025 · 10 Mins Read


AI chats being publicly searchable. People falling in love with customer service bots. The risks of generative AI threaten the rewards — but trust might be a competitive edge.

As AI adoption escalates and enterprises weigh where to put their investment dollars, some of the risks and complications of generative AI are becoming clearer. From anecdotes about people falling into delusions based on chatbot conversations, to struggles with balancing the sycophancy of models like ChatGPT against how people react when those models change, what Microsoft has called the "age of AI agents" demands that businesses that want to use AI also navigate uncharted waters of risk and trust.

That isn't stopping them: Nvidia research published earlier this year surveyed around 450 telecom professionals around the world, and nearly all of them said their companies were adopting AI. But some recognition of the risk is emerging as well, with analyses like PwC's first annual Trust and Safety Outlook, which focuses specifically on the forces shaping the integrity and safety of digital landscapes, and what that means for enterprises as they interact with their customers, and vice versa.

While the PwC report covers a number of areas, one of the major themes is the increasing impact and complexity of generative AI and agentic AI. “Despite their immense potential, early deployments of agentic AI have surfaced concerns. From misinformation — like generative systems falsely linking a professor to a bribery scandal — to biased outcomes in recruiting or content moderation, agentic AI has demonstrated how easily outcomes can go off track. These events make clear that AI agents are not plug-and-play solutions,” the report concluded. “They need human-led collaboration and oversight.”


There are some major themes emerging when it comes to the security and reliability of AI models, and how easily models can be convinced to abandon the guardrails they are trained on. People are deliberately crafting prompts that override those instructions, using AI agents to commit fraud, extract sensitive data, and develop phishing or malware attacks. That doesn't even begin to get into headline-grabbing developments like people's individual chats turning up in searches.

“As AI becomes more pervasive and kind of invades various dimensions of our lives and our work, how we interact with it and how safe and trustworthy it is, has become paramount,” said Dan Hays, a principal with PwC’s Strategy&, who focuses on enterprise strategy and value within its telecom practice. In the telecom space, Hays said, this is manifesting in two primary ways.

“First of all, telcos themselves are already becoming tremendous adopters of AI when it comes to operational things,” Hays said. “It could be using AI to do dynamic network optimization and grooming, or it could be having AI-based customer care to reduce the level of human support that’s required when somebody calls in to pay their bill or ask a question. … And of course, the last thing you want to do is damage your brand with a bad or unsafe AI interaction.”

“The other side of it is, I think many in the telecom industry view AI as a significant business opportunity, whether it’s hosting edge data centers at cell sites, or whether it’s repurposing old central offices to serve as AI hubs in local communities. The industry clearly sees AI as a substantial opportunity that fits with their traditional business model — but many in the industry haven’t yet figured out what the risk is, if many of these AI models become deemed unsafe or not trustworthy,” he added.

“The industry is really trying to get its head around the whole issue of agentic AI and the risks that it poses, and how you can put the proper governance, the proper controls around it in order to really ensure that it’s trustworthy.”

AI agents, customers and relationships

Customer service chatbots are one of the most popular use cases for generative AI, and the approach is being widely adopted. The aforementioned Nvidia report found that the number one AI investment in telecommunications was customer experience optimization, with 44% of survey respondents indicating that their companies were putting money into that area of AI use.

What do trust and safety issues look like when it comes to AI agents in customer interactions? Hays gave several examples: Should AI agents remember everything that a particular customer says to them, or should they "forget" interactions, particularly as years or decades pass? The memory capabilities of bots also raise the question of what parameters should be placed on how AI agents are allowed to interact with customers.

“One of our clients was particularly interested in whether their AI agents — bots, effectively — should be allowed to have deeply personal or romantic relationships with their customers,” Hays said.

After all, he pointed out, if people contact customer service and get a friendly AI agent that “remembers” them because they have spoken over multiple years, that AI agent likely knows many things about them: Where they live, possibly their credit history, details of past conversations and frustrations — all of which could lead to conversations that are deeply personal to the human involved.

“Overwhelmingly, elderly people view the AI agents and AI bots as a potentially meaningful part of someone’s life, in a space where they may not have family members nearby or close friends that are alive. It may become an important part of the social fabric that we haven’t really thought about,” Hays said. “And this starts to put businesses at large, and the telcos, in particular, in a bit of an awkward position. What happens if I change my AI bot and it was having a relationship with someone? These are not problems that we’ve really encountered in the past.”

The PwC report concludes that AI models need bespoke testing and tuning for the roles they are meant to fill; for instance, bots that offer customer service or financial advice need different safety approaches. Safety starts by clearly defining the AI's role and what guardrails will help make sure it doesn't step out of line, even if a human is pushing it to do so.
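In practice, defining an AI's role often comes down to scoping what it will and won't respond to. A minimal sketch of that idea, in Python, is below; the topic allowlist, refusal message, and function names are illustrative assumptions, not anything taken from the PwC report:

```python
# Minimal sketch of role-scoped guardrails for a customer service bot.
# ALLOWED_TOPICS and REFUSAL are hypothetical; a real deployment would
# layer this kind of scoping with model-level safety tuning.

ALLOWED_TOPICS = {"billing", "network", "account", "device"}

REFUSAL = "I can only help with billing, network, account, or device questions."

def guard(user_topic: str) -> str:
    """Refuse any request that falls outside the bot's defined role."""
    topic = user_topic.lower()
    if topic not in ALLOWED_TOPICS:
        return REFUSAL
    return f"OK: routing to the {topic} workflow."

print(guard("billing"))          # in scope
print(guard("romantic advice"))  # out of scope, refused
```

The point of the sketch is that the scope check happens outside the model, so a persuasive user prompt can't talk the system out of it.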

Online trust issues are worsening. Could businesses step in and better safeguard their corners of the internet?

In the PwC Trust and Safety survey, nearly one in three respondents reported low or very low trust in online platforms in the tech, media and telecom sector. Media and entertainment platforms have to navigate user-generated content, including an onslaught of AI-generated content that often purports to be real but isn't. The sense that online spaces are increasingly dominated by bots talking to each other even has a name: the Dead Internet Theory, which posits that bot content has been drowning out humans online for years, that LLMs have accelerated the process, and that the human-based internet will effectively rot by the end of the decade. (Sam Altman himself recently weighed in on this once-fringe theory.)

Telecom, media and entertainment have a particularly acute problem with AI guardrails and rules because of their focus on content, but other sectors are not immune because they, too, are adopting AI rapidly. End users generally see finance as having robust security measures, but it’s unclear how transparent the use of AI is when it comes to, say, credit scoring, loan approvals or financial advice.

In healthcare, a newly released survey of U.S. residents from Pew Research has shown that while people are interested in and optimistic about what AI could do in medical research, they are worried about AI deciding who gets care and who doesn't. Researchers have already documented AI bias, and California has put forward state legislation that would require companies to identify when AI makes decisions about whether a human is granted or denied things like healthcare treatments, apartment leases, school admissions, or jobs; that legislation, however, has stalled.

“As organizations across nearly all industries dive head-first into AI and digital transformations, they’re running into new risks that could undermine the trust they’ve built with consumers. Right now, many don’t have the guardrails or experience to handle these evolving threats — and the ripple effects are being felt across entire companies and industries,” the PwC report said.

However, it seems that people who can are willing to pay for digital environments and services they can trust, much as subscribers to paywalled content sites can generally trust what they are getting, while those looking for free news might end up reading information that is garbled or deliberately twisted with the help of AI.

The PwC survey asked how people would respond if a platform introduced a new trust and safety feature, and the response was overwhelmingly positive: 72% of those surveyed said they would be more likely to engage with the platform; 68% would consider purchasing a new product or service from that provider; 61% would explore add-ons, and 59% would be open to related merchandise.

The report called this a "clear signal" that providing users with a trusted environment "also fuels outcomes. … Prioritizing trust can lead to measurable business benefits." It cited Reddit's successful IPO as being partly driven by updates to its content policy that explicitly prohibited hate speech and harassment, backed by action to ban subreddits that consistently violated those policies.

“We’re in an era where people are increasingly questioning whether what they’re seeing online is real, or is fake or is fraudulent,” Hays said. “There’s real value in creating safer, more trustworthy environments. That’s something we’re seeing a lot of interest in.”

The PwC report declared that “Building trust and safety isn’t just a safeguard — it’s a competitive edge. As technology evolves rapidly, regulations tighten and consumers grow more safety-conscious, strong trust and safety practices have become essential. They can unlock real value — faster product launches, less downtime and fewer regulatory fines. … By taking proactive steps now, businesses can build lasting consumer trust, stay ahead of emerging risks and regulatory shifts, and accelerate responsible innovation.”

Four recommendations to reduce risk

The PwC report also laid out a number of recommendations on how organizations can make trust and safety a more fundamental part of their AI strategy. Those were:

Use representative data, by training or fine-tuning AI systems on data that mirrors real-world use cases. This is pretty basic, but a healthcare chatbot should grasp medical terminology, while a telecom service assistant should understand network products and customer scenarios.

Pilot before scaling, because small-scale deployments can reveal context-specific strengths, gaps, and blind spots. Testing an AI assistant with a limited user group can surface recurring misinterpretations before a broader launch.

Monitor from day one by establishing continuous tracking of user interactions, anomalies, and feedback. Metrics such as satisfaction scores, escalation rates, and failure types can provide early signals on how to iterate and improve.
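The monitoring step above amounts to simple aggregation over interaction logs. A hedged sketch, assuming a hypothetical log format with `escalated` and `failure_type` fields:

```python
# Illustrative day-one monitoring: compute escalation rate and failure-type
# counts from logged interactions. The log schema here is an assumption.
from collections import Counter

def summarize(interactions: list[dict]) -> dict:
    """Aggregate basic trust-and-safety signals from interaction logs."""
    total = len(interactions)
    escalated = sum(1 for i in interactions if i.get("escalated"))
    failures = Counter(
        i["failure_type"] for i in interactions if i.get("failure_type")
    )
    return {
        "escalation_rate": escalated / total if total else 0.0,
        "failure_types": failures,
    }

logs = [
    {"escalated": False},
    {"escalated": True, "failure_type": "misinterpretation"},
    {"escalated": True, "failure_type": "misinterpretation"},
    {"escalated": False},
]
print(summarize(logs))
```

A spike in a single failure type, surfaced this early, is exactly the "recurring misinterpretation" signal the pilot-before-scaling recommendation is after.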

Set clear escalation and governance, defining when and how AI hands off to humans. Agents should escalate unanswerable or sensitive queries to human oversight. Assign accountability, implement regular audits, and schedule model updates to ensure responsible performance.
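An escalation policy like the one described can be expressed as a plain routing rule. The confidence threshold and sensitive-topic list below are invented for illustration; a real policy would be set by governance review, not hardcoded:

```python
# Hedged sketch of an escalation rule: hand off to a human whenever the
# model is unsure or the topic is sensitive. Threshold and categories
# are assumptions, not values from the PwC report.

SENSITIVE_TOPICS = {"medical", "legal", "fraud"}
CONFIDENCE_THRESHOLD = 0.7  # below this, the agent must not answer alone

def route(confidence: float, topic: str) -> str:
    """Return 'human' or 'agent' for a given query."""
    if topic in SENSITIVE_TOPICS or confidence < CONFIDENCE_THRESHOLD:
        return "human"
    return "agent"

print(route(0.9, "billing"))  # routine, high confidence
print(route(0.9, "medical"))  # sensitive, always escalated
print(route(0.4, "billing"))  # low confidence, escalated
```

Keeping the rule this explicit also makes it auditable, which supports the report's call for assigned accountability and regular audits.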

Read more in PwC’s first annual Trust and Safety Outlook.


