AI-Powered Chatbot Assessments in Online Anatomy & Physiology: A Mixed-Methods Study
Jaime Cantú, Austin Community College (United States)
Abstract
As online health sciences education expands, concerns persist about whether traditional exam-based assessments adequately capture applied understanding, communication skills, and clinical reasoning. This mixed-methods study examines AI-powered chatbot assessments as an alternative to traditional high-stakes exams in a fully online Anatomy and Physiology I course at a higher education institution. Two course sections with identical learning objectives and instructional design differed only in assessment format: one used traditional proctored exams, while the other implemented structured, proctored AI chatbot conversations simulating virtual patient interactions. Quantitative survey data collected at multiple time points examined student confidence, perceived learning, fairness, and assessment-related anxiety, while qualitative open-ended responses explored student experiences with explanation, application, and engagement. Results indicate that students in both formats reported increased confidence over time; however, students in the AI-based assessment section more frequently described deeper conceptual understanding, improved ability to explain anatomical and physiological concepts, and reduced test anxiety. Qualitative findings highlight explanation and dialogue as key mechanisms supporting learning, with students characterizing chatbot assessments as more reflective of real-world healthcare communication. These findings align with constructivist learning theory and cognitive load theory, suggesting that AI-supported conversational assessments may capture applied competency better than traditional exams. This study contributes empirical evidence to ongoing discussions about the ethical, equitable, and pedagogically sound integration of AI into online assessment.
Keywords: artificial intelligence, assessment, anatomy and physiology, online learning, mixed methods
The Future of Education