Beyond the Red Pen: How AI is Changing the Game in Writing Assessment
Reza Farzi, University of Ottawa (Canada)
Abstract
Automated Writing Evaluation (AWE) has emerged as a valuable tool in language instruction, offering immediate feedback and streamlining the assessment process. This paper provides an overview of AWE and examines its research-supported benefits. It further introduces a comparative study between assessments conducted by a veteran English for academic purposes (EAP) instructor and those produced by generative AI, aiming to investigate the efficacy and advantages of AWE in language learning contexts.
The study investigates the correlation and consistency between assessments performed by an experienced human rater and by generative AI in evaluating writing proficiency. Through a quantitative analysis of student writing samples, it examines the alignment of feedback provided by the human instructor and the AI evaluator, as well as student perceptions of the feedback received.
Preliminary findings suggest a high degree of agreement between the human and AI raters, indicating the potential for generative AI to replicate the evaluation process performed by human instructors. Moreover, students express satisfaction with the feedback provided by both evaluators, highlighting the effectiveness of AWE in enhancing the writing learning experience.
Keywords
Generative AI, Automated Writing Evaluation (AWE), English for academic purposes (EAP), writing proficiency, formative assessment, summative assessment, student perceptions