Journal article
An integrated model to evaluate the transparency in predicting employee churn using explainable artificial intelligence
Journal of Innovation & Knowledge, Volume: 10, Issue: 3, Start page: 100700
Swansea University Author:
Paul Jones
PDF | Version of Record
© 2025 The Authors. This is an open access article under the CC BY-NC-ND license.
Download (2.91MB)
DOI (Published version): 10.1016/j.jik.2025.100700
Abstract
Recent studies focus on machine learning (ML) algorithms for predicting employee churn (ECn) to save probable economic loss, technology leakage, and customer and knowledge transference. However, can human resource professionals rely on algorithms for prediction? Can they decide when the process of prediction is not known? Due to the lack of interpretability, ML models' exclusive nature and growing intricacy make it challenging for field experts to comprehend these multifaceted black boxes. To address the concern of interpretability, trust and transparency of black-box predictions, this study explores the application of explainable artificial intelligence (XAI) in identifying the factors that escalate the ECn, analysing the negative impact on productivity, employee morale and financial stability. We propose a predictive model that compares the best two top-performing algorithms based on the performance metrics. Thereafter, we suggest applying an explainable artificial intelligence based on Shapley values, i.e., the Shapley Additive exPlanations approach (SHAP), to identify and compare the feature importance of top-performing algorithms logistic regression and random forest analysis on our dataset. The interpretability of the predictive outcome unboxes the predictions, enhancing trust and facilitating retention strategies.
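The SHAP approach named in the abstract attributes a model's prediction to individual features via Shapley values: each feature's contribution is its average marginal effect over all orders in which features could be revealed. As a minimal illustrative sketch only (not the authors' implementation — in practice one would apply the `shap` library to the fitted logistic regression and random forest models), exact Shapley attribution for a toy linear score can be computed by enumerating feature coalitions:

```python
import itertools
from math import factorial

def shapley_values(f, x, background):
    """Exact Shapley attribution for instance x: for each feature i, average
    the marginal contribution of revealing x[i] over all feature coalitions.
    Unrevealed features are held at their background (e.g. mean) values."""
    n = len(x)

    def v(subset):
        # Value of a coalition: evaluate f with coalition features set to the
        # instance values and the remaining features set to the background.
        z = [x[j] if j in subset else background[j] for j in range(n)]
        return f(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in itertools.combinations(others, r):
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                phi[i] += weight * (v(set(S) | {i}) - v(set(S)))
    return phi

# Toy linear score (a stand-in for a fitted logistic regression's log-odds;
# the weights below are illustrative, not from the paper's dataset):
w, b = [0.8, -0.5, 0.3], 0.1
f = lambda z: sum(wi * zi for wi, zi in zip(w, z)) + b

x = [2.0, 1.0, 4.0]    # instance to explain
bg = [1.0, 1.0, 1.0]   # background / mean feature values
phi = shapley_values(f, x, bg)
# For a linear model the exact Shapley value reduces to w[i] * (x[i] - bg[i]),
# and the attributions sum to f(x) - f(bg) (the efficiency property).
```

For tree ensembles such as random forests this brute-force enumeration is exponential in the number of features, which is why the `shap` library's TreeExplainer (polynomial-time TreeSHAP) is normally used instead.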
| Published in: | Journal of Innovation & Knowledge |
|---|---|
| ISSN: | 2444-569X |
| Published: | Elsevier BV, 2025 |
| Online Access: | Check full text |
| URI: | https://cronfa.swan.ac.uk/Record/cronfa69312 |
| first_indexed | 2025-04-18T11:11:35Z |
|---|---|
| last_indexed | 2025-05-22T04:48:34Z |
| id | cronfa69312 |
| recordtype | SURis |
| fullrecord | <?xml version="1.0"?><rfc1807><datestamp>2025-05-21T11:12:13.3585274</datestamp><bib-version>v2</bib-version><id>69312</id><entry>2025-04-18</entry><title>An integrated model to evaluate the transparency in predicting employee churn using explainable artificial intelligence</title><swanseaauthors><author><sid>21e2660aaa102fe36fc981880dd9e082</sid><ORCID>0000-0003-0417-9143</ORCID><firstname>Paul</firstname><surname>Jones</surname><name>Paul Jones</name><active>true</active><ethesisStudent>false</ethesisStudent></author></swanseaauthors><date>2025-04-18</date><deptcode>CBAE</deptcode><abstract>Recent studies focus on machine learning (ML) algorithms for predicting employee churn (ECn) to save probable economic loss, technology leakage, and customer and knowledge transference. However, can human resource professionals rely on algorithms for prediction? Can they decide when the process of prediction is not known? Due to the lack of interpretability, ML models' exclusive nature and growing intricacy make it challenging for field experts to comprehend these multifaceted black boxes. To address the concern of interpretability, trust and transparency of black-box predictions, this study explores the application of explainable artificial intelligence (XAI) in identifying the factors that escalate the ECn, analysing the negative impact on productivity, employee morale and financial stability. We propose a predictive model that compares the best two top-performing algorithms based on the performance metrics. Thereafter, we suggest applying an explainable artificial intelligence based on Shapley values, i.e., the Shapley Additive exPlanations approach (SHAP), to identify and compare the feature importance of top-performing algorithms logistic regression and random forest analysis on our dataset. The interpretability of the predictive outcome unboxes the predictions, enhancing trust and facilitating retention strategies.</abstract><type>Journal Article</type><journal>Journal of Innovation &amp; Knowledge</journal><volume>10</volume><journalNumber>3</journalNumber><paginationStart>100700</paginationStart><paginationEnd/><publisher>Elsevier BV</publisher><placeOfPublication/><isbnPrint/><isbnElectronic/><issnPrint>2444-569X</issnPrint><issnElectronic/><keywords>Explainable AI; Logistic regression; Random forest; Machine learning; Employee churn</keywords><publishedDay>1</publishedDay><publishedMonth>5</publishedMonth><publishedYear>2025</publishedYear><publishedDate>2025-05-01</publishedDate><doi>10.1016/j.jik.2025.100700</doi><url/><notes/><college>COLLEGE NANME</college><department>Management School</department><CollegeCode>COLLEGE CODE</CollegeCode><DepartmentCode>CBAE</DepartmentCode><institution>Swansea University</institution><apcterm>Another institution paid the OA fee</apcterm><funders>This work was supported by the Open Access Publishing Fund provided by the Free University of Bozen-Bolzano.</funders><projectreference/><lastEdited>2025-05-21T11:12:13.3585274</lastEdited><Created>2025-04-18T12:09:35.8376613</Created><path><level id="1">Faculty of Humanities and Social Sciences</level><level id="2">School of Management - Business Management</level></path><authors><author><firstname>Meenu</firstname><surname>Chaudhary</surname><orcid>0000-0003-3727-7460</orcid><order>1</order></author><author><firstname>Loveleen</firstname><surname>Gaur</surname><order>2</order></author><author><firstname>Amlan</firstname><surname>Chakrabarti</surname><order>3</order></author><author><firstname>Gurmeet</firstname><surname>Singh</surname><order>4</order></author><author><firstname>Paul</firstname><surname>Jones</surname><orcid>0000-0003-0417-9143</orcid><order>5</order></author><author><firstname>Sascha</firstname><surname>Kraus</surname><orcid>0000-0003-4886-7482</orcid><order>6</order></author></authors><documents><document><filename>69312__34326__31513ba2e12c4faaa21f712a43d75178.pdf</filename><originalFilename>69312.VoR.pdf</originalFilename><uploaded>2025-05-21T10:54:14.9949798</uploaded><type>Output</type><contentLength>3049093</contentLength><contentType>application/pdf</contentType><version>Version of Record</version><cronfaStatus>true</cronfaStatus><documentNotes>© 2025 The Authors. This is an open access article under the CC BY-NC-ND license.</documentNotes><copyrightCorrect>true</copyrightCorrect><language>eng</language><licence>http://creativecommons.org/licenses/by-nc-nd/4.0/</licence></document></documents><OutputDurs/></rfc1807> |
| title | An integrated model to evaluate the transparency in predicting employee churn using explainable artificial intelligence |
| author | Paul Jones |
| author2 | Meenu Chaudhary Loveleen Gaur Amlan Chakrabarti Gurmeet Singh Paul Jones Sascha Kraus |
| format | Journal article |
| container_title | Journal of Innovation & Knowledge |
| container_volume | 10 |
| container_issue | 3 |
| container_start_page | 100700 |
| publishDate | 2025 |
| institution | Swansea University |
| issn | 2444-569X |
| doi_str_mv | 10.1016/j.jik.2025.100700 |
| publisher | Elsevier BV |
| college_str | Faculty of Humanities and Social Sciences |
| hierarchytype | |
| hierarchy_top_id | facultyofhumanitiesandsocialsciences |
| hierarchy_top_title | Faculty of Humanities and Social Sciences |
| hierarchy_parent_id | facultyofhumanitiesandsocialsciences |
| hierarchy_parent_title | Faculty of Humanities and Social Sciences |
| department_str | School of Management - Business Management{{{_:::_}}}Faculty of Humanities and Social Sciences{{{_:::_}}}School of Management - Business Management |
| document_store_str | 1 |
| active_str | 0 |
| published_date | 2025-05-01T14:15:28Z |

