
Journal article

A Meta-analysis of Vulnerability and Trust in Human-Robot Interaction

Peter E. McKenna, Muneeb Ahmad, Tafadzwa Maisva, Birthe Nesset, Katrin Lohan, Helen Hastie

ACM Transactions on Human-Robot Interaction

Swansea University Author: Muneeb Ahmad

  • 65907.AAM.pdf

    PDF | Accepted Manuscript

    Author accepted manuscript document released under the terms of a Creative Commons CC-BY licence using the Swansea University Research Publications Policy (rights retention).

    Download (8.23MB)


DOI (Published version): 10.1145/3658897


Published in: ACM Transactions on Human-Robot Interaction
ISSN: 2573-9522
Published: Association for Computing Machinery (ACM) 2024
Online Access: Check full text

URI: https://cronfa.swan.ac.uk/Record/cronfa65907
Abstract: In human-robot interaction studies, trust is often defined as a process whereby a trustor makes themselves vulnerable to a trustee. The role of vulnerability, however, is often overlooked in this process, even though it could play an important part in gaining and maintaining trust between users and robots. To better understand how vulnerability affects human-robot trust, we first reviewed the literature to create a conceptual model of vulnerability with four vulnerability categories. We then performed a meta-analysis, first checking the overall contribution of the included variables to trust. The results showed that, overall, the variables investigated in our sample of studies had a positive impact on trust. We then conducted two multilevel moderator analyses to assess the effect of vulnerability on trust: 1) an intercept model that considers the relationships between our vulnerability categories; and 2) a non-intercept model that treats each vulnerability category as an independent predictor. Only model 2 was significant, suggesting that to build trust effectively, research should focus on improving robot performance in situations where the user is unsure how reliable the robot will be. As our vulnerability variable is derived from studies of human-robot interaction and human-human studies of risk, we relate our findings to these domains and make suggestions for future research avenues.
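The distinction the abstract draws between an intercept and a non-intercept moderator model can be illustrated with a minimal, hypothetical sketch. This is not the authors' analysis (they report multilevel models); it uses a simplified inverse-variance-weighted meta-regression in Python, and all data, column names, and category labels below are invented for illustration.

```python
# Hypothetical sketch: intercept vs. non-intercept moderator coding in a
# meta-regression. A simplified inverse-variance-weighted fit, not the
# authors' multilevel analysis; effect sizes, variances, and category
# labels are fabricated for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

studies = pd.DataFrame({
    "effect":   [0.42, 0.15, 0.58, 0.30, 0.22, 0.51],   # per-study effect sizes (fabricated)
    "variance": [0.04, 0.06, 0.03, 0.05, 0.07, 0.04],   # sampling variances (fabricated)
    "category": ["performance", "situational", "performance",
                 "relational", "situational", "performance"],  # assumed moderator labels
})
weights = 1.0 / studies["variance"]  # inverse-variance weights

# Model 1: intercept model -- category effects expressed relative to a reference level.
m1 = smf.wls("effect ~ C(category)", data=studies, weights=weights).fit()

# Model 2: non-intercept model -- each category estimated as its own predictor.
m2 = smf.wls("effect ~ 0 + C(category)", data=studies, weights=weights).fit()

print(m1.summary())
print(m2.summary())
```

In the non-intercept coding, each vulnerability category receives its own coefficient and significance test, which is the sense in which the abstract's model 2 treats each category as an independent predictor.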
Keywords: vulnerability, trust, risk, human-robot interaction
College: Faculty of Science and Engineering
Funders: UKRI, EPSRC (EP/V026682/1, EP/R026173/1)