
Conference Paper/Proceeding/Abstract

The Architecture of Trust: A Three-Layered Mathematical Model for Human-Robot Collaboration

Abdullah Saad Alzahrani, Muneeb Ahmad

Proceedings of the 13th International Conference on Human-Agent Interaction, Pages: 332 - 340

Swansea University Author: Muneeb Ahmad

  • 70864.VoR.pdf

    PDF | Version of Record (1.68 MB)

    © 2025 Copyright held by the owner/author(s). This work is licensed under a Creative Commons Attribution 4.0 International License.

DOI (Published version): 10.1145/3765766.3765792

Abstract

Understanding and modelling how humans develop and maintain trust in robots is crucial for ensuring appropriate trust calibration during Human-Robot Interaction (HRI). This paper presents a mathematical model that simulates a three-layered framework of trust, encompassing dispositional, situational and learned trust. This framework aims to estimate human trust in robots during real-time interactions. Our trust model was tested and validated in an experimental setting where participants engaged in a collaborative trust game with a robot over four interactive sessions. Results from mixed-model analysis revealed that both the Trust Perception Score (TPS) and interaction session significantly predicted the Trust Modeled Score (TMS), explaining a substantial portion of the variance in TMS. Statistical analysis demonstrated significant differences in trust across sessions, with mean trust scores showing a clear increase from the first to the final session. Additionally, we observed strong correlations between situational and learned trust layers, demonstrating the model's ability to capture dynamic trust evolution. These findings underscore the potential of this model in developing adaptive robotic behaviours that can respond to changes in human trust levels, ultimately advancing the design of robotic systems capable of real-time trust calibration.

Published in: Proceedings of the 13th International Conference on Human-Agent Interaction
ISBN: 979-8-4007-2178-6
Published: New York, NY, USA: ACM, 2 January 2026
URI: https://cronfa.swan.ac.uk/Record/cronfa70864
Keywords: Trust, Modelling, Measurement, Repeated Interactions, Human-Robot Collaboration