Universidad Tecnológica Indoamérica
Risks and Ethical Challenges of Emotional Intelligence in Conversational Agents

Journal
EAI/Springer Innovations in Communication and Computing
Emerging Technologies in Applied Engineering and Education
ISSN
2522-8595
2522-8609
Date Issued
2026
Author(s)
Villacís Guerrero, Jacqueline del Pilar
Facultad de Ingenierías
Type
Resource Types::text::conference output::conference proceedings::conference paper
DOI
10.1007/978-3-032-10310-9_10
URL
https://cris.indoamerica.edu.ec/handle/123456789/9987
Abstract
The development of emotionally intelligent conversational agents has attracted growing interest due to their potential to enhance human–machine interaction. These systems aim to simulate empathy by recognizing and responding to human emotions, enabling more fluid, personalized, and engaging communication. Yet, this simulated empathy raises significant technical, ethical, and social concerns, particularly in domains such as healthcare, education, and commerce, where emotional influence can shape decision-making and user well-being. This article presents a narrative review that critically examines the integration of emotional intelligence into conversational AI. It draws on interdisciplinary literature in artificial intelligence, affective computing, and ethics, reviewing peer-reviewed sources published between 2015 and 2024. The analysis applied a thematic approach to identify recurrent patterns, conceptual tensions, and sector-specific risks. Findings show that while advances in voice analysis, natural language processing, and deep learning have improved emotion detection, important limitations persist in multicultural and linguistically diverse contexts. These gaps risk misinterpretations, inappropriate responses, or discriminatory interactions. Moreover, users may mistakenly interpret emotionally tailored responses as genuine empathy, fostering emotional confusion or dependence. The potential for manipulative or persuasive uses further complicates their ethical deployment. This study provides a novel contribution by explicitly linking simulated empathy with risks of anthropomorphization, autonomy loss, and regulatory gaps, thereby bridging technical advances with their socio-ethical implications. It highlights practical challenges for applied contexts such as clinical support, education, and digital services, offering insights for both researchers and practitioners.
Ethical principles, inclusive design, and regulatory frameworks are needed to ensure that emotionally intelligent AI supports rather than exploits human emotional experience. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2026.
Subjects
  • Artificial intelligen...
  • Conversational assist...
  • Emotional intelligenc...
  • Ethics