Conference Paper/Proceeding/Abstract
Optimising Human Trust in Robots: A Reinforcement Learning Approach
ACM Proceedings
Swansea University Authors: Abdullah Alzahrani, Muneeb Ahmad
Abstract
This study explores optimising human-robot trust using reinforcement learning (RL) in simulated environments. Establishing trust in human-robot interaction (HRI) is crucial for effective collaboration, but misaligned trust levels can restrict successful task completion. Current RL approaches mainly prioritise performance metrics without directly addressing trust management. To bridge this gap, we integrated a validated mathematical trust model into an RL framework and conducted experiments in two simulated environments: Frozen Lake and Battleship. The results showed that the RL model facilitated trust by dynamically adjusting it based on task outcomes, enhancing task performance and reducing the risks of insufficient or extreme trust. Our findings highlight the potential of RL to enhance human-robot collaboration (HRC) and trust calibration in different experimental HRI settings.
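The abstract does not reproduce the paper's trust model or RL formulation, so the following is only a minimal illustrative sketch of the general idea: a tabular Q-learning agent on Frozen Lake that maintains a scalar trust estimate, updates it from task outcomes, and shapes its reward to keep trust near a target level. The trust-update constants, the target trust value, and the reward-shaping term are all hypothetical assumptions, not the authors' validated model.

```python
# Illustrative sketch (assumed, not the paper's method): Q-learning on Frozen Lake
# with a scalar trust signal that rises on task success, falls on failure, and
# penalises deviation from a target trust level via reward shaping.
import numpy as np
import gymnasium as gym

env = gym.make("FrozenLake-v1", is_slippery=True)
q_table = np.zeros((env.observation_space.n, env.action_space.n))

alpha, gamma, epsilon = 0.1, 0.95, 0.1   # learning rate, discount, exploration rate
trust, target_trust = 0.5, 0.7           # assumed initial and desired trust levels
trust_gain, trust_loss = 0.05, 0.10      # assumed trust-update step sizes

for episode in range(5000):
    state, _ = env.reset()
    done = False
    while not done:
        # Epsilon-greedy action selection.
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(q_table[state]))

        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated

        # Update the trust estimate from the task outcome:
        # reaching the goal raises trust, falling in a hole lowers it.
        if terminated:
            trust += trust_gain if reward > 0 else -trust_loss
            trust = float(np.clip(trust, 0.0, 1.0))

        # Shape the reward so the agent is also penalised when trust drifts
        # away from the target (discouraging under- or over-trust).
        shaped_reward = reward - abs(trust - target_trust)

        # Standard Q-learning update using the shaped reward.
        q_table[state, action] += alpha * (
            shaped_reward + gamma * np.max(q_table[next_state]) - q_table[state, action]
        )
        state = next_state
```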
| Published in: | ACM Proceedings |
|---|---|
| URI: | https://cronfa.swan.ac.uk/Record/cronfa68696 |