
The Digital Data Design Institute at Harvard is now the Harvard Business School AI Institute.

The AI Penalty: What We Really Prize in Empathy


Have you ever received a response from ChatGPT that seems to get you almost too well? A recent preprint review of research, “AI-Generated Empathy: Opportunities, limits, and future directions,” written by a team including the faculty principal investigator in the Digital Emotions Lab at the Digital Data Design Institute at Harvard (D^3), suggests that short interactions with AI might actually be better at making us feel understood and cared for than interactions with our fellow humans, that is, at least until we discover that we’re talking to a machine. These findings challenge our fundamental assumptions about empathy, emotional support, and what it means to truly connect in an increasingly digital world.

Key Insight: A New Perspective on Understanding Empathy

“Empathy is in the mind of the beholder.” [1]

Earlier psychological research has focused primarily on the empathizer, studying what makes them more or less empathic, their biases, and their capacity for emotional connection. But the rise of synthetic AI-powered conversation partners flips the perspective, pivoting from the writer (the empathizer) to the receiver (the empathized). While we can’t meaningfully ask whether an AI truly ‘cares’ or ‘shares feelings,’ the focus shifts to the recipient’s perception of empathy, whether that person experiences feeling heard, cared for, and understood.

Key Insight: The Surprising AI Advantage

“Generally speaking, people find text generated by modern LLMs to be more empathic than text written by humans.” [2]

Across diverse groups of writers, including crowdsourced workers, crisis-line supporters, and even medical doctors, AI-generated messages often outperform human-written ones on perceived empathy. Why might that be? AI can consistently produce structured, attentive, and validating language; it doesn’t get tired or stop trying, and its phrasing can be optimized for clarity and warmth. In short, the authors identify an “AI Advantage”: the ability to generate more consistently empathic responses than humans can.

Key Insight: Belief Beats Content

“However, as soon as people believe (accurately or not) they are interacting with an AI, they downgrade the value of the text—something that we call the ‘AI Penalty’.” [3]

The flip side is stark: label the very same message as AI-generated, and ratings drop. Termed the “AI Penalty” by the authors, the effect is strongest on the dimensions of “feeling with” and “caring,” precisely where people expect a human’s emotional labor and intention. The penalty also emerges when people merely suspect AI involvement in a message they otherwise believed was human. Taken together, the AI Advantage and the AI Penalty suggest that people’s cognitive understanding of AI capabilities conflicts with their emotional preference for human connection.

Why This Matters

For business leaders and executives, understanding these insights is critical for informed decision-making about customer experience, employee well-being, and technology implementation. Companies might consider hybrid approaches where AI augments human empathy rather than replacing it, such as providing real-time coaching to customer service representatives or helping employees craft more supportive communications. Perhaps most importantly, this research highlights the need for leaders to understand the psychological complexity of human-AI interactions. As AI becomes more sophisticated at mimicking human emotional intelligence, success might not just depend on technical capabilities and deployment, but on navigating the complicated ways that people perceive, value, and respond to digital communications.

References

[1] Desmond C. Ong et al., “AI-Generated Empathy: Opportunities, limits, and future directions.” PsyArXiv preprint (September 23, 2025): 4. Preprint DOI: .

[2] Ong et al., “AI-Generated Empathy”: 5.

[3] Ong et al., “AI-Generated Empathy”: 7.

Meet the Authors

is an Assistant Professor of Psychology at the University of Texas at Austin.

is Assistant Professor of Business Administration at Harvard Business School and faculty principal investigator in the Digital Emotions Lab at the Digital Data Design Institute at Harvard (D^3).

is a Professor in the Department of Psychology at the University of Toronto, with a cross-appointment as Professor in the Department of Marketing at the Rotman School of Management.

is an Associate Professor of Psychology at the Hebrew University of Jerusalem.

Engage With Us

Join Our Community

Ready to dive deeper with the HBS AI Institute? Subscribe to our newsletter, contribute to the conversation, and begin to invent the future for yourself, your business, and society as a whole.