Journal article
Artificial intelligence and clinical decision support: clinicians’ perspectives on trust, trustworthiness, and liability
Medical Law Review, Volume: 31, Issue: 4, Pages: 501 - 520
Swansea University Author: Caroline Jones
PDF | Version of Record
Distributed under the terms of a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0).
Download (313.2KB)
DOI (Published version): 10.1093/medlaw/fwad013
Abstract
Artificial intelligence (AI) could transform healthcare provision, possibly improving patient safety and clinician decision-making, and mitigating the effects of staff shortages. However, there are concerns - voiced by regulators and policy-makers - over whether AI and clinical decision support systems (CDSSs) are trusted by relevant stakeholders, and more importantly whether such tools are worthy of trust. Yet, the meaning ascribed to trust and trustworthiness is often implicit, and it may be unclear what or who is being trusted. We address these issues, focusing for the most part on the perspective(s) of clinicians. Empirical studies suggest clinicians’ concerns about the use of AI/CDSSs include the accuracy of advice given and potential legal liability if a patient is harmed. Onora O’Neill’s conceptualisation of trust and trustworthiness provides the framework for our analysis. Through unpacking and reflecting upon these two concepts we gain greater clarity over the meaning given to them by a range of stakeholders; minimise the extent to/ways in which stakeholders are talking at cross purposes; and maintain the value of trust and trustworthiness as useful concepts in debates around the use of AI and CDSSs.
| Published in: | Medical Law Review |
|---|---|
| Authors: | Caroline Jones, James Thornton, Jeremy C Wyatt |
| ISSN: | 0967-0742, 1464-3790 |
| Published: | Oxford, UK: Oxford University Press (OUP), 27 November 2023 |
| Keywords: | Artificial intelligence, Clinical decision support, Clinicians’ perspectives, Liability, Trust, Trustworthiness |
| Online Access: | Check full text |
| URI: | https://cronfa.swan.ac.uk/Record/cronfa63456 |