Using an AI Large Language Model (LLM-ChatGPT) to Mitigate Spelling Errors of EFL Learners
Hasan Mohammed Saleh Jaashan, King Khalid University (Saudi Arabia); Sanaa University (Yemen)
Abstract
Despite considerable efforts invested in English language teaching, spelling errors remain a significant obstacle for EFL learners because of the intricate nature of the English writing system, which lacks a direct one-to-one correspondence between spoken and written forms. The issue is further exacerbated by the limited emphasis placed on the development of writing skills. Artificial Intelligence Large Language Models (AI-LLMs) are now widely employed in various language learning tasks, including text generation, machine translation, and long-text summarization. This study aims to harness LLM-GPT (ChatGPT) in the teaching of writing skills to mitigate spelling errors by providing automated feedback, spelling assistance, and opportunities for regular practice. It also aims to gauge the perceptions and attitudes of EFL learners towards using LLM-GPT as a reinforcement approach to minimize spelling errors and improve writing proficiency. A total of 60 EFL students took part, and a between-subjects design with control and experimental groups was used. The findings indicated that learners taught with the LLM-GPT application outscored their counterparts in the control group and recalled the spelling of words more easily, as shown in the post-test session. Moreover, the learners felt that the LLM-GPT application had a positive impact on learning the spelling of words.
Key words: Large Language Model (LLM), Chat Generative Pre-trained Transformer (ChatGPT), mitigating spelling errors, Artificial Intelligence (AI), EFL learners.
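The paper does not publish the prompts or code used to deliver the automated spelling feedback described above. The following is only a minimal illustrative sketch of how such feedback might be requested from a ChatGPT-family model, assuming the OpenAI Python client; the model name, prompt wording, and helper function are the editor's assumptions, not the authors' materials.

```python
# Illustrative sketch only: assumes the OpenAI Python client (`pip install openai`)
# and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def spelling_feedback(learner_text: str) -> str:
    """Ask the chat model to flag misspelled words and suggest corrections."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any ChatGPT-family model could be used
        messages=[
            {"role": "system",
             "content": ("You are a spelling tutor for EFL learners. "
                         "List each misspelled word, its correction, and a "
                         "one-line memory tip. Do not rewrite the text.")},
            {"role": "user", "content": learner_text},
        ],
        temperature=0,  # deterministic feedback for repeated practice sessions
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(spelling_feedback("I recieved your advise and it was definately helpfull."))
```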