The Future of Education

Edition 16

Accepted Abstracts

The Impact of Evaluation Source on Trust: The Mediating Role of Perceived Fairness

Marcel Schindler, Technical University of Applied Sciences Würzburg-Schweinfurt (Germany)

Vitus Haberzettl, Technical University of Applied Sciences Würzburg-Schweinfurt (Germany)

Abstract

The rapid integration of Artificial Intelligence (AI) into Human Resource Management has fundamentally transformed organizational decision-making, particularly in evaluative processes such as performance appraisals and candidate screening [1]. While AI systems promise enhanced objectivity and efficiency, their implementation often encounters significant skepticism, a phenomenon known as algorithm aversion [2]. This research investigates the underlying psychological mechanisms of this resistance by examining the impact of the evaluation source—human versus AI—on trust [3], identifying perceived fairness [4] as a critical mediator in this relationship. Using a quantitative experimental approach with a sample of N = 62, a mediation analysis was conducted with the jAMM module in jamovi, following established regression-based frameworks [5]. The empirical results demonstrate a significant trust gap, with human evaluators receiving substantially higher trust ratings than AI systems. Most importantly, the analysis reveals a full mediation effect: the lower levels of trust attributed to AI evaluations are entirely explained by a decrease in perceived fairness. Once fairness is accounted for, the direct influence of the evaluation source on trust becomes non-significant. These findings suggest that algorithm aversion in HRM is primarily driven by concerns regarding procedural justice rather than a rejection of technology itself. Consequently, organizations must prioritize transparency and explainability [6] to foster employee acceptance. This study contributes to the literature on AI trust by identifying fairness as the central lever for overcoming resistance in automated HRM contexts.
 
Keywords: Artificial Intelligence, Human Resource Management, Trust in Automation, Algorithm Aversion, Perceived Fairness, Mediation Analysis
 
REFERENCES
 
[1] Tambe, P., Cappelli, P., and Yakubovich, V. 2019. Artificial intelligence in human resources management: Challenges and a path forward. California Management Review 61, 4, 15–42.
[2] Dietvorst, B. J., Simmons, J. P., and Massey, C. 2015. Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General 144, 1, 114.
[3] Lee, J. D. and See, K. A. 2004. Trust in automation: Designing for appropriate reliance. Human Factors 46, 1, 50–80.
[4] Colquitt, J. A. 2001. On the dimensionality of organizational justice: A construct validation of a measure. Journal of Applied Psychology 86, 3, 386.
[5] Hayes, A. F. 2017. Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. Guilford Publications.
[6] Shin, D. 2021. The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies 146, 102551.
