Conference Paper / Proceeding / Abstract

The Architecture of Trust: A Three-Layered Mathematical Model for Human-Robot Collaboration

Abdullah Saad Alzahrani, Muneeb Ahmad

Proceedings of the 13th International Conference on Human-Agent Interaction, Pages: 332 - 340

Swansea University Author: Muneeb Ahmad

  • 70864.VoR.pdf — PDF | Version of Record

    © 2025 Copyright held by the owner/author(s). This work is licensed under a Creative Commons Attribution 4.0 International License.

DOI (Published version): 10.1145/3765766.3765792


Published in: Proceedings of the 13th International Conference on Human-Agent Interaction
ISBN: 979-8-4007-2178-6
Published: New York, NY, USA: ACM, 2026
URI: https://cronfa.swan.ac.uk/Record/cronfa70864
Abstract: Understanding and modelling how humans develop and maintain trust in robots is crucial for ensuring appropriate trust calibration during Human-Robot Interaction (HRI). This paper presents a mathematical model that simulates a three-layered framework of trust, encompassing dispositional, situational and learned trust. This framework aims to estimate human trust in robots during real-time interactions. Our trust model was tested and validated in an experimental setting where participants engaged in a collaborative trust game with a robot over four interactive sessions. Results from mixed-model analysis revealed that both the Trust Perception Score (TPS) and interaction session significantly predicted the Trust Modeled Score (TMS), explaining a substantial portion of the variance in TMS. Statistical analysis demonstrated significant differences in trust across sessions, with mean trust scores showing a clear increase from the first to the final session. Additionally, we observed strong correlations between situational and learned trust layers, demonstrating the model’s ability to capture dynamic trust evolution. These findings underscore the potential of this model in developing adaptive robotic behaviours that can respond to changes in human trust levels, ultimately advancing the design of robotic systems capable of real-time trust calibration.
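The record does not reproduce the paper's equations, but the abstract's idea of combining dispositional, situational and learned trust into a single Trust Modeled Score (TMS) that rises over repeated sessions can be sketched as follows. Everything here is an assumption for illustration: the layer weights, the exponential-smoothing update for learned trust, and the function names are hypothetical and not taken from the paper.

```python
def modeled_trust(dispositional, situational, learned,
                  w_d=0.2, w_s=0.4, w_l=0.4):
    """Combine the three trust layers into one score (hypothetical weights).

    Weights sum to 1 so the combined score stays on the same 0-1
    scale as the individual layer values.
    """
    return w_d * dispositional + w_s * situational + w_l * learned


def update_learned(learned, outcome, alpha=0.3):
    """Exponential-smoothing update of learned trust after one session.

    `outcome` is the observed interaction result in [0, 1]; `alpha`
    (assumed here) controls how quickly learned trust tracks
    recent experience.
    """
    return (1 - alpha) * learned + alpha * outcome


# Four sessions with mostly positive outcomes: learned trust, and with it
# the modeled score, increases across sessions -- qualitatively mirroring
# the reported rise in mean trust from the first to the final session.
learned = 0.5
for outcome in [0.8, 0.9, 0.7, 0.9]:
    learned = update_learned(learned, outcome)
    tms = modeled_trust(dispositional=0.6, situational=0.7, learned=learned)
```

Under these assumed parameters the learned-trust layer dominates the session-to-session change, while the dispositional term stays fixed — a plausible but purely illustrative reading of the three-layer framework.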
Keywords: Trust, Modelling, Measurement, Repeated Interactions, Human-Robot Collaboration
College: Faculty of Science and Engineering
Start Page: 332
End Page: 340