Journal article

Seeing eye to eye: trustworthy embodiment for task-based conversational agents

David A. Robb, José Lopes, Muneeb Ahmad, Peter E. McKenna, Xingkun Liu, Katrin Lohan, Helen Hastie

Frontiers in Robotics and AI, Volume: 10

Swansea University Author: Muneeb Ahmad (ORCID: 0000-0001-8111-9967)

  • 64061.VOR.pdf

    PDF | Version of Record

    © 2023 Robb, Lopes, Ahmad, McKenna, Liu, Lohan and Hastie. Distributed under the terms of a Creative Commons Attribution 4.0 License (CC BY 4.0).

    Download (27.87 MB)

Abstract

Smart speakers and conversational agents have been accepted into our homes for a number of tasks such as playing music, interfacing with the internet of things, and, more recently, general chit-chat. However, they have been less readily accepted in our workplaces. This may be due to the data privacy and security concerns that exist with commercially available smart speakers. Another reason may be that a smart speaker is simply too abstract and does not portray the social cues associated with a trustworthy work colleague. Here, we present an in-depth mixed-method study in which we investigate this question of embodiment in a serious task-based work scenario of a first responder team. We explore the concepts of trust, engagement, cognitive load, and human performance using a humanoid head-style robot, a commercially available smart speaker, and a specially developed dialogue manager. Studying the effect of embodiment on trust, a highly subjective and multi-faceted phenomenon, is clearly challenging, and our results indicate that the robot, with its anthropomorphic facial features, expressions, and eye gaze, was potentially trusted more than the smart speaker. In addition, we found that embodying a conversational agent helped increase task engagement and performance compared to the smart speaker. This study indicates that embodiment could be useful for transitioning conversational agents into the workplace, and further in situ, “in the wild” experiments with domain workers could be conducted to confirm this.

Published in: Frontiers in Robotics and AI
ISSN: 2296-9144
Publisher: Frontiers Media SA
DOI: 10.3389/frobt.2023.1234767
Online Access: http://dx.doi.org/10.3389/frobt.2023.1234767
Keywords: Conversational agent, remote robots, autonomous systems, human–robot teaming, social robotics, user engagement, cognitive load
Funders: UKRI EPSRC ORCA Hub (EP/R026173/1) and UKRI EPSRC Trustworthy Autonomous Systems (TAS) Node on Trust (EP/V026682/1)

URI: https://cronfa.swan.ac.uk/Record/cronfa64061