Investigating the Dark Side of Human-like Conversational Agents: Technology-related Dehumanization

dc.contributor.author: Manseau, Jasmin
dc.contributor.department: Business
dc.contributor.supervisor: Jenkin, Tracy
dc.date.accessioned: 2025-11-05T15:03:25Z
dc.date.available: 2025-11-05T15:03:25Z
dc.date.issued: 2025-11-05
dc.degree.grantor: Queen's University at Kingston
dc.description.abstract: Human-like conversational agents (CAs) are designed to engage with people in ways that mimic natural human interaction. These technologies, including virtual assistants, chatbots and generative artificial intelligence applications, leverage artificial intelligence techniques such as natural language processing and machine learning to facilitate bi-directional interactions. Their adoption is growing rapidly across organizations, with examples like Microsoft Copilot assisting programmers with coding, ChatGPT supporting tasks such as brainstorming, and Google Assistant managing scheduling and calendars via voice interactions (Hern, 2022; Jansen, 2023; Peng et al., 2023). However, as these systems become more human-like, significant concerns arise regarding their impact on individuals, groups and society (Al-Amoudi, 2022). This dissertation explores the darker side of human-like conversational agents, focusing on their implications for human interactions. Chapter 2 is a literature review investigating dehumanization in the context of human-like conversational agents, offering insights into how these technologies influence and shape individuals and groups. Chapter 3 provides an empirical investigation into self-dehumanization, studying how individuals' perceptions of their own humanity are affected when engaging with these systems. Chapter 4 uses an agent-based simulation to examine how human-like CAs shape task performance, collaboration patterns and workplace interactions across prediction and judgment tasks within organizations. By integrating conceptual development, empirical findings and a simulation, this dissertation contributes to understanding the unintended human consequences of human-like CA use, emphasizing how these systems may reshape human interactions and impact organizations.
dc.description.degree: PhD
dc.identifier.uri: https://hdl.handle.net/1974/35987
dc.language.iso: eng
dc.relation.ispartofseries: Canadian theses
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: conversational agents
dc.subject: artificial intelligence
dc.subject: dehumanization
dc.subject: agent-based model
dc.subject: reinforcement learning
dc.title: Investigating the Dark Side of Human-like Conversational Agents: Technology-related Dehumanization
dc.type: thesis

Files

Original bundle

Name: Manseau_Jasmin_202511_PhD.pdf
Size: 7.05 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.67 KB
Description: Item-specific license agreed upon to submission