Conference Paper/Proceeding/Abstract
Maintaining Coherence in Explainable AI: Strategies for Consistency Across Time and Interaction
SYNERGY 2025 – Designing and Building Hybrid Human–AI Systems, Volume: 4074
Swansea University Authors:
Alan Dix, Ben Wilson, Matt Roach
All Authors:
Alan Dix, Tommaso Turchi, Ben Wilson, Alessio Malizia, Anna Monreale, Matt Roach
PDF | Version of Record (1.47MB)
© 2025 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
Abstract
Can we create explanations of artificial intelligence and machine learning that have some level of consistency over time, as we might expect of a human explanation? This paper explores this issue and offers several strategies for either maintaining a level of consistency or highlighting when and why past explanations might appear inconsistent with current decisions.
| Published in: | SYNERGY 2025 – Designing and Building Hybrid Human–AI Systems, Volume 4074 |
|---|---|
| Publisher: | CEUR-WS.org |
| Published: | 16 June 2025 |
| ISSN: | 1613-0073 |
| Keywords: | human-AI interaction, explainable AI, synergistic human-AI systems, user interface, artificial intelligence, design, adaptive interfaces, user experience |
| Funder: | Tango Project (EU Grant Agreement no. 101120763 - TANGO) |
| Online Access: | https://ceur-ws.org/Vol-4074/ |
| URI: | https://cronfa.swan.ac.uk/Record/cronfa71398 |