The Future of Education

Edition 16

Accepted Abstracts

The Influence of Biased LLM Responses on Perceived Objectivity: An Analysis in the Context of Scientific Work

Vitus Haberzettl, Technical University of Applied Sciences Würzburg-Schweinfurt (Germany)

Marcel Schindler, Technical University of Applied Sciences Würzburg-Schweinfurt (Germany)

Abstract

As generative AI becomes increasingly integral to scientific workflows, it creates risks of self-reinforcing feedback loops in which users elicit confirmatory responses [1, 2]. However, its effects on perceived objectivity in scientific contexts, and the underlying psychological mechanisms, remain unclear. Integrating theories of Trust in Automation [3] and the Technology Acceptance Model (TAM) [4], this study tests a path model positing trust and acceptance as mediators between response quality and perceived objectivity. Additionally, the Elaboration Likelihood Model (ELM) [5] informs the examination of prior knowledge as a moderator. A randomized laboratory experiment was conducted with an academic sample (N = 68) performing a scientific task under a targeted confirmation incentive (framing), with the LLM response type manipulated (biased vs. neutral-critical). GLM-based path analysis and bootstrapping revealed a significant direct negative effect of biased responses on perceived objectivity. Although the hypothesized global mediation was not confirmed, conditional effects highlighted the role of domain expertise: unlike novices, experts reacted to bias with a significant loss of trust. Independent of the manipulation, trust and acceptance strongly predicted perceived objectivity (a halo effect). The findings indicate that while students identify biased responses as less objective, low prior knowledge prevents the necessary critical distancing (loss of trust), underscoring the urgency of AI literacy training.
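Editor's note: the abstract reports testing mediation via GLM-based path analysis with bootstrapping. The sketch below is not the authors' code; it is a minimal illustration of how a bootstrapped indirect effect in a single-mediator path model (X → M → Y) can be estimated, assuming hypothetical column names (response_quality, trust, objectivity) that do not come from the study's materials.

```python
# Minimal sketch: bootstrapped indirect effect in a single-mediator path model.
# Assumes a DataFrame with columns response_quality (X), trust (M), objectivity (Y);
# these names are illustrative, not taken from the study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf


def indirect_effect(df: pd.DataFrame) -> float:
    """Estimate the indirect effect a*b of response_quality on objectivity via trust."""
    # a-path: effect of X on the mediator M
    a = smf.ols("trust ~ response_quality", data=df).fit().params["response_quality"]
    # b-path: effect of M on Y, controlling for X
    b = smf.ols("objectivity ~ trust + response_quality", data=df).fit().params["trust"]
    return a * b


def bootstrap_ci(df: pd.DataFrame, n_boot: int = 5000, seed: int = 0):
    """Percentile bootstrap 95% confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(df), size=len(df))  # resample rows with replacement
        estimates.append(indirect_effect(df.iloc[idx]))
    return np.percentile(estimates, [2.5, 97.5])
```

A model with two mediators (trust and acceptance), as described in the abstract, would add a second a-path and b-path per mediator; the percentile bootstrap shown here is one common choice for small samples such as N = 68.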
 
Keywords: Generative AI, Perceived Objectivity, Confirmation Bias, Trust in Automation, Higher Education
 
REFERENCES
 
[1] Sun, Y. and Kok, S. 2025. Investigating the Effects of Cognitive Biases in Prompts on Large Language Model Outputs. arXiv (Cornell University).
[2] Li, Y., Wang, Y., and Sun, Y. 2025. Improving Quality or Catering to Users? Understanding Confirmation Bias in Large Language Model Interactions.
[3] Lee, J. D. and See, K. A. 2004. Trust in Automation: Designing for Appropriate Reliance. Human Factors: The Journal of the Human Factors and Ergonomics Society 46, 1, 50–80.
[4] Davis, F. D. 1989. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly 13, 3, 319–340.
[5] Petty, R. E. and Cacioppo, J. T. 1986. The Elaboration Likelihood Model of Persuasion.
