Journal article

An integrated model to evaluate the transparency in predicting employee churn using explainable artificial intelligence

Meenu Chaudhary, Loveleen Gaur, Amlan Chakrabarti, Gurmeet Singh, Paul Jones, Sascha Kraus

Journal of Innovation & Knowledge, Volume: 10, Issue: 3, Start page: 100700

Swansea University Author: Paul Jones

  • 69312.VoR.pdf (PDF, Version of Record, 2.91 MB)

    © 2025 The Authors. This is an open access article under the CC BY-NC-ND license.


Published in: Journal of Innovation & Knowledge
ISSN: 2444-569X
Published: Elsevier BV 2025
Online Access: Check full text

URI: https://cronfa.swan.ac.uk/Record/cronfa69312
Abstract: Recent studies focus on machine learning (ML) algorithms for predicting employee churn (ECn) to avert probable economic loss, technology leakage, and the loss of customers and knowledge. However, can human resource professionals rely on algorithms for prediction? Can they make decisions when the prediction process is not known? Owing to their lack of interpretability, the opaque nature and growing intricacy of ML models make it challenging for field experts to comprehend these multifaceted black boxes. To address the concerns of interpretability, trust and transparency in black-box predictions, this study explores the application of explainable artificial intelligence (XAI) in identifying the factors that escalate ECn, analysing their negative impact on productivity, employee morale and financial stability. We propose a predictive model that compares the two top-performing algorithms on standard performance metrics. Thereafter, we apply an explainable artificial intelligence technique based on Shapley values, the SHapley Additive exPlanations (SHAP) approach, to identify and compare the feature importance of the top-performing algorithms, logistic regression and random forest, on our dataset. The interpretability of the predictive outcome unboxes the predictions, enhancing trust and facilitating retention strategies.
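To make the described workflow concrete, the sketch below illustrates the two-stage pipeline in Python: train logistic regression and random forest on a churn dataset, compare them on held-out performance metrics, then explain both with SHAP. This is a minimal illustration under stated assumptions, not the paper's implementation; the file name employee_churn.csv, the "churn" label column, and all hyperparameters are hypothetical.

import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical employee-churn dataset with a binary "churn" label.
df = pd.read_csv("employee_churn.csv")
X = pd.get_dummies(df.drop(columns=["churn"]))  # one-hot encode categoricals
y = df["churn"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

# Stage 1: compare the two candidate algorithms on standard metrics.
models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=42),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    proba = model.predict_proba(X_test)[:, 1]
    pred = (proba >= 0.5).astype(int)
    print(f"{name}: acc={accuracy_score(y_test, pred):.3f} "
          f"f1={f1_score(y_test, pred):.3f} "
          f"auc={roc_auc_score(y_test, proba):.3f}")

# Stage 2: SHAP feature attributions for both models. TreeExplainer gives
# exact Shapley values for the forest; LinearExplainer handles the logistic
# regression.
rf_shap = shap.TreeExplainer(models["random_forest"]).shap_values(X_test)
# Older shap versions return a per-class list, newer ones a 3-D array;
# either way, take the attributions for the churn (positive) class.
rf_churn = rf_shap[1] if isinstance(rf_shap, list) else rf_shap[:, :, 1]
lr_shap = shap.LinearExplainer(models["logistic_regression"],
                               X_train).shap_values(X_test)

# Summary plots rank features by mean |SHAP value|, making the two models'
# drivers of predicted churn directly comparable.
shap.summary_plot(rf_churn, X_test)
shap.summary_plot(lr_shap, X_test)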
Keywords: Explainable AI; Logistic regression; Random forest; Machine learning; Employee churn
College: Faculty of Humanities and Social Sciences
Funders: This work was supported by the Open Access Publishing Fund provided by the Free University of Bozen-Bolzano.
Issue: 3
Start Page: 100700