Bailey, Morgan Elizabeth (2025) Human or machine? Exploring how anthropomorphism, performance and social intelligence impact trust in human-AI teams. PhD thesis, University of Glasgow.
Full text available as: PDF (7MB)
Abstract
In an era where Artificial Intelligence is becoming integral to human teams, understanding the role of trust in Human-AI Teams is essential for effective collaboration. This thesis investigates how anthropomorphism, AI system performance, and social intelligence influence trust calibration, team performance, and human perceptions of AI teammates. The research addresses significant gaps in Human-AI Team literature by drawing on interdisciplinary insights from psychology, computing science, and human-computer interaction. The work is structured into six chapters, each contributing to a comprehensive understanding of trust in Human-AI Teams.
Chapter 1 provides a literature review on the dynamics of human-agent teams, trust, and social intelligence. It explores how anthropomorphic design, AI reliability, and social intelligence contribute to trust development, highlighting the limitations of existing theories and the need for a multidisciplinary approach.
Chapter 2 presents a bibliometric analysis of trust research from 1922 to 2021. By analysing 39,628 documents, this chapter identifies key research trends, foundational contributions, and interdisciplinary intersections. The study reveals the evolving nature of trust research and underscores the importance of integrating diverse disciplinary insights to address complex trust dynamics in Human-AI Teams.
Chapter 3 explores the impact of anthropomorphism and AI system reliability on trust and performance in Human-AI Teams. Using experimental methods, it demonstrates that while anthropomorphic design can enhance trust, this effect is contingent on AI reliability. The findings highlight the risks of overtrust when anthropomorphic cues are paired with unreliable AI systems.
Chapter 4 investigates the role of emojis and AI reliability in shaping team performance and trust. Results show that AI teammates using emojis can foster a sense of social connection and trust, but this effect varies based on the system's reliability. The study emphasises the nuanced relationship between social cues and trust calibration.
Chapter 5 examines how social alignment in AI, the ability to adapt behaviour to match human social expectations, affects trust and team behaviours. Findings indicate that AI demonstrating adaptive social alignment can enhance trust. However, misaligned social behaviour can lead to mistrust and reduced performance, and these negative effects are more pronounced than the benefits of alignment.
Chapter 6 synthesises the key findings, offering conclusions and practical recommendations. The research underscores the importance of calibrated trust, ensuring humans neither over-rely nor under-rely on AI. Effective Human-AI Teams require AI systems that balance anthropomorphic design, transparency, and social intelligence to foster sustainable trust. The chapter highlights the need for ongoing interdisciplinary research and ethical considerations to guide the development of AI teammates.
Overall, this thesis contributes to understanding trust dynamics in Human-AI Teams by demonstrating that successful collaboration hinges on the careful integration of anthropomorphic cues, system reliability, and social intelligence. The findings provide guidance for designing AI systems that are not only reliable but also socially intelligent, fostering more effective and ethical Human-AI Teams.
Item Type: | Thesis (PhD) |
---|---|
Qualification Level: | Doctoral |
Subjects: | B Philosophy. Psychology. Religion > BF Psychology; Q Science > QA Mathematics > QA75 Electronic computers. Computer science; Q Science > QA Mathematics > QA76 Computer software; T Technology > T Technology (General) |
Colleges/Schools: | College of Medical Veterinary and Life Sciences > School of Psychology & Neuroscience; College of Science and Engineering > School of Computing Science |
Funder's Name: | Engineering and Physical Sciences Research Council (EPSRC) |
Supervisor's Name: | Pollick, Professor Frank |
Date of Award: | 2025 |
Depositing User: | Theses Team |
Unique ID: | glathesis:2025-85469 |
Copyright: | Copyright of this thesis is held by the author. |
Date Deposited: | 24 Sep 2025 15:27 |
Last Modified: | 24 Sep 2025 15:29 |
Thesis DOI: | 10.5525/gla.thesis.85469 |
URI: | https://theses.gla.ac.uk/id/eprint/85469 |