Calibrating trust between humans and artificial intelligence systems

Ingram, Martin (2023) Calibrating trust between humans and artificial intelligence systems. PhD thesis, University of Glasgow.

Full text available as: 2022IngramPhD.pdf (PDF, 3MB)


As machines become increasingly intelligent, they become capable of operating with greater independence from their users. However, appropriate use of these autonomous systems depends on appropriate trust from their users. A lack of trust in an autonomous system will likely lead the user to doubt the system's capabilities, potentially to the point of disuse. Conversely, too much trust in a system may lead the user to overestimate its capabilities, resulting in errors that could have been avoided with appropriate supervision. Appropriate trust, then, is trust calibrated to reflect the true performance capabilities of the system. The calibration of trust towards autonomous systems is a growing area of research, as ever more intelligent machines are introduced to modern workplaces.

This thesis contains three studies which examine trust towards autonomous technologies. In our first study, in Chapter 2, we used qualitative research methods to explore how participants characterise their trust towards different online technologies. In focus groups, participants discussed a variety of factors which they believed were important when using digital services. We had a particular interest in how they perceived social media platforms, as these services rely upon users' continued sharing of their personal information. In our second study, in Chapter 3, we built on these initial findings to create a human-computer interaction experiment in which participants collaborated with an Autonomous Image Classifier System. This experiment allowed us to examine how participants placed trust in the classifier across different types of system performance. We also investigated whether users' trust could be better calibrated by providing different displays of System Confidence Information to help convey the system's decision making. In our final study, in Chapter 4, we built directly upon the findings of Chapter 3 by creating an updated version of our human-computer interaction experiment. We provided participants with an additional cue of system decision making, Gradient-weighted Class Activation Mapping, and investigated whether this cue could promote greater trust towards the classifier. Additionally, we examined whether these cues could improve participants' subjective understanding of the system's decision making, as a way of exploring how to improve the interpretability of these systems.

This research contributes to our current understanding of calibrating users' trust towards autonomous systems, and may be particularly useful when designing Autonomous Image Classifier Systems. While our results were inconclusive, we did find some support for users preferring the more complex interfaces we provided. Users also reported greater understanding of the classifier's decision making when provided with the Gradient-weighted Class Activation Mapping cue. Further research may clarify whether this cue is an appropriate method of visualising the decision making of Autonomous Image Classifier Systems in real-world settings.

Item Type: Thesis (PhD)
Qualification Level: Doctoral
Subjects: B Philosophy. Psychology. Religion > BF Psychology
Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Colleges/Schools: College of Science and Engineering
Funder's Name: Economic and Social Research Council (ESRC)
Supervisor's Name: Pollick, Professor Frank
Date of Award: 2023
Depositing User: Theses Team
Unique ID: glathesis:2023-83521
Copyright: Copyright of this thesis is held by the author.
Date Deposited: 05 Apr 2023 09:26
Last Modified: 05 Apr 2023 09:26
Thesis DOI: 10.5525/gla.thesis.83521


