
Book chapter

Enhancing Fairness, Justice and Accuracy of Hybrid Human-AI Decisions by Shifting Epistemological Stances

Peter Daish, Matt Roach, Alan Dix

Machine Learning and Principles and Practice of Knowledge Discovery in Databases, Volume: 1

Swansea University Authors: Peter Daish, Matt Roach, Alan Dix

Full text not available from this repository.

Abstract

From applications in automating credit to aiding judges in presiding over cases of recidivism, deep-learning powered AI systems are becoming embedded in high-stakes decision-making processes as either primary decision-makers or supportive assistants to humans in a hybrid decision-making context, with the aim of improving the quality of decisions. However, the criteria currently used to assess a system’s ability to improve hybrid decisions are driven by a utilitarian desire to optimise accuracy through a phenomenon known as ‘complementary performance’. This desire puts the design of hybrid decision-making at odds with critical subjective concepts that affect the perception and acceptance of decisions, such as fairness. Fairness as a subjective notion often has a competitive relationship with accuracy and, as such, driving complementary behaviour with a utilitarian belief risks driving unfairness in decisions. It is our position that shifting the epistemological stances taken in the research and design of human-AI environments is necessary to incorporate the relationship between fairness and accuracy into the notion of ‘complementary behaviour’, in order to observe ‘enhanced’ hybrid human-AI decisions.


Published in: Machine Learning and Principles and Practice of Knowledge Discovery in Databases
ISBN: 978-3-031-74639-0 (print); 978-3-031-74640-6 (electronic)
ISSN: 1865-0929 (print); 1865-0937 (electronic)
Published: Springer Cham, 2025
Online Access: Check full text

URI: https://cronfa.swan.ac.uk/Record/cronfa68367
Record ID: cronfa68367 (record type: SURis)
DOI: 10.1007/978-3-031-74627-7_25
Published: 27 January 2025
ORCID (Matt Roach): 0000-0002-1486-5537
Affiliation: Faculty of Science and Engineering, School of Mathematics and Computer Science - Computer Science, Swansea University
First indexed: 2024-11-28T13:47:42Z
Last indexed: 2024-12-09T19:47:15Z