
Conference Paper/Proceeding/Abstract

Enhancing Fairness, Justice and Accuracy of Hybrid Human-AI Decisions by Shifting Epistemological Stances

Peter Daish, Matt Roach (ORCID 0000-0002-1486-5537), Alan Dix

Communications in Computer and Information Science, Volume 1, Pages 323–331

Swansea University Authors: Peter Daish, Matt Roach, Alan Dix

  • 68367.pdf

    PDF | Accepted Manuscript (408.33KB)

    Author accepted manuscript document released under the terms of a Creative Commons CC-BY licence using the Swansea University Research Publications Policy (rights retention).

Abstract

From applications in automating credit to aiding judges in presiding over cases of recidivism, deep-learning powered AI systems are becoming embedded in high-stakes decision-making processes as either primary decision-makers or supportive assistants to humans in a hybrid decision-making context, with the aim of improving the quality of decisions. However, the criteria currently used to assess a system’s ability to improve hybrid decisions are driven by a utilitarian desire to optimise accuracy through a phenomenon known as ‘complementary performance’. This desire puts the design of hybrid decision-making at odds with critical subjective concepts that affect the perception and acceptance of decisions, such as fairness. Fairness as a subjective notion often has a competitive relationship with accuracy; as such, driving complementary behaviour with a utilitarian belief risks driving unfairness in decisions.

It is our position that shifting the epistemological stances taken in the research and design of human-AI environments is necessary to incorporate the relationship between fairness and accuracy into the notion of ‘complementary behaviour’, in order to observe ‘enhanced’ hybrid human-AI decisions.

Published in: Communications in Computer and Information Science
ISBN: 9783031746260 (print), 9783031746277 (electronic)
ISSN: 1865-0929 (print), 1865-0937 (electronic)
Published: Cham: Springer Nature Switzerland, 2025

URI: https://cronfa.swan.ac.uk/Record/cronfa68367
DOI: 10.1007/978-3-031-74627-7_25