Advanced AI News
Customer Service AI

How AI-powered chatbots can make or break consumer trust

By Advanced AI Bot | April 5, 2025 | 7 Mins Read


[Image: online shop. Credit: Unsplash/CC0 Public Domain]

Chatbots—those little text bubbles that pop up in the corner of so many consumer sites—have long been a fixture in the digital world. Now, the growing popularity of generative AI programs has only supercharged their presence, and their abilities.

Conversations with ChatGPT and similar apps are getting more realistic by the day. Artificial intelligence-powered chatbots are now woven into many businesses’ customer service, outreach and sales approaches.

But how is this widespread AI adoption affecting consumer behavior? That’s a question for Scott Schanke, an assistant professor in UWM’s Lubar College of Business. His work, which is one of several AI-focused research projects at UWM, centers on the design of AI agents for public-facing business interactions, and how different interfaces can make or break consumer trust.

“AI agents (can) fill this sort of human-facing job role,” Schanke said. “Maybe it’s collecting information or facilitating a sale.”

A lot goes into making sure consumers actually finish filling out a form or complete a purchase. Different traits can sway a person’s interaction with a chatbot, and ultimately an organization’s ability to earn that person’s trust.

Exploring how chatbots shape consumer interaction will give businesses valuable insight into the best ways to deploy new AI technologies. This includes other formats as well, such as voice clones. Schanke’s work will also help researchers pinpoint future uses for the technology—both constructive and nefarious.

“The whole idea here is that we need to try and be forward-looking,” Schanke says. “This is sort of an inflection point that we’re starting to see with a lot of these generative AI technologies, where … we don’t really know what the potential downsides are.”

Chatbots in context

For a 2021 study in the journal Information Systems Research, Schanke and colleagues explored how chatbot humanization impacted a customer’s likelihood of accepting an offer. They partnered with a secondhand clothing retailer to automate their clothing buyback process.

Schanke designed a chatbot for the company with varying degrees of human-like qualities. Some versions told jokes, paused longer between replies or introduced themselves by name. Ultimately, anthropomorphism helped the bots secure more sales—but it came with a cost.

Consumers didn’t tend to push back on offers that came from computer-like bots. “Meaning, if you seem more like a bot, I am more willing to take a lower offer because I’m not thinking about any sort of intent behind the offer,” Schanke says. On the other hand, when bots seemed more human, customers focused more on negotiating to get the best price.

In other contexts, such as charitable giving, anthropomorphism also comes with drawbacks. For a report that is currently under review, Schanke partnered with a social justice organization in Minneapolis to deploy a chatbot that interacts with potential donors. Using AI-powered chatbots could help charitable organizations, which typically have fewer resources than corporations, Schanke said.

“A lot of these organizations… it’s hard for them to stay afloat. And I think chatbots could be a way to help them automate certain processes,” he said. But when chatbots appear too human, potential donors are less likely to open their wallets.

The big reason is that charities tend to operate in more emotional contexts. Asking for donations for flood victims or people facing food insecurity, for example, feels much more high-stakes than selling used clothes. “Having high degrees of anthropomorphism as well as high degrees of emotional appeals are counterproductive because it’s already an emotional context and it’s almost too abrasive to people,” Schanke said.

Logical, bot-like approaches, on the other hand, resulted in more conversions in the outreach process. Ultimately, context matters when deploying chatbots in different settings, and it’s important for organizations to know which traits will push or pull consumers away.

[Image: Scott Schanke, an assistant professor in UWM’s Lubar College of Business, studies the design of artificial intelligence agents for public-facing business interactions, and how different interfaces can make or break consumer trust. Credit: UWM Photo/Elora Hennessey]


Familiar voices elicit consumer trust

While not as common as chatbots in consumer-facing settings, voice clones are the next frontier in AI-driven interaction. These bots, also known as audio deepfakes, mimic the voices of real human beings. AI voice programs only need a few seconds of audio from a real person talking to generate a hyper-realistic clone that can say just about anything.

“Folks have been using these mostly for parody,” Schanke says, pointing to the many examples of TikTok videos where a celebrity appears to sing a song or deliver a speech they never actually gave. But organizations are interested in how this technology could enhance customer support and outreach, much like chatbots.

The question is how much consumers will trust it—and how easily voice cloning can manipulate perception. For a report to be published in Management Science, Schanke and colleagues invited participants to talk with AI voice clones over the phone. They found that bots seemed more trustworthy when they cloned and spoke in the participant’s own voice. And even in scenarios where the researchers told participants that the “person” on the other end was not to be trusted, they still believed what the bot told them when it was using their own voice.

“Even in that situation, when we give them this information, they’re more willing to trust that other party, even when they know that this person is not trustworthy,” Schanke says.

Additionally, in cases where the bots disclosed that they were, in fact, bots, participant trust remained high. Findings like these could help shape legislation to protect consumers against nefarious or misleading uses of AI.

Thinking five years ahead

Voice clones have already been used to carry out complex scams, create fake news reports and even rob a bank. Because of how easily they can generate a believable persona with just a few seconds of audio, the technology wields the power to manipulate unsuspecting people and supercharge malicious lies.

But it all depends on how voice clones are used. “My belief is that technology is neither good nor bad,” Schanke said. There is potential for both positive and negative outcomes with generative AI tools—it just matters who’s using them and for what purpose.

As a researcher, Schanke’s goal is to explore the wide variety of possibilities that AI technologies present. Shedding light on how chatbots and voice clones can be used raises awareness for people who work with AI systems. It can also alert the public to the potentially manipulative applications and help make the case for consumer legislation.

“I think it’s really important for us as researchers to think five years ahead,” Schanke said. “How could we potentially protect people, or at least drive transparency that this is a potential risk?”

More information:
Report: Enhancing Nonprofit Operations with AI Chatbots: The Role of Humanization and Emotion (under review)

Scott Schanke et al., Digital Lyrebirds: Experimental Evidence That Voice-Based Deep Fakes Influence Trust, Management Science (2024). DOI: 10.1287/mnsc.2022.03316. On SSRN: papers.ssrn.com/sol3/papers.cf … ?abstract_id=4839606

Provided by
University of Wisconsin – Milwaukee

Citation:
How AI-powered chatbots can make or break consumer trust (2025, April 5)
retrieved 7 April 2025
from https://phys.org/news/2025-04-ai-powered-chatbots-consumer.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.


