After a service failure, customers expect empathy. When a human employee understands a customer’s frustration and shows they share that feeling, it can calm tensions and rebuild trust. But new research suggests that when a chatbot tries the same tactic, it can backfire.
A study co-authored by Dezhi Yin, associate professor of information systems at the University of South Florida, finds that empathetic responses from AI-powered service chatbots can unintentionally worsen customer reactions. The research is published in MIS Quarterly.
Across three experiments, including interactions with a live chatbot powered by a large language model, the researchers examined how customers respond when chatbots acknowledge and mirror users' negative emotions. Instead of soothing customers, these empathetic chatbot messages often triggered psychological reactance: a negative emotional response that occurs when people feel their sense of control is threatened or their boundaries are crossed.
Customers reacted negatively to the idea that a nonhuman system could recognize and respond to their emotions. That discomfort made the chatbot seem less competent and reduced overall perceptions of service quality and customer satisfaction.
The contrast with human agents was striking: when the same empathetic messages came from a person, they remained effective and beneficial.
The findings suggest that customers hold different expectations for humans and artificial intelligence, particularly around emotional awareness. Making chatbots more human-like is not always the right strategy, especially in sensitive service recovery situations.
Authors: Elizabeth Han, McGill University; Dezhi Yin, University of South Florida; Han Zhang, Hong Kong Baptist University and Georgia Institute of Technology.