In the wake of heightened speculation surrounding ChatGPT, a prominent language model, this paper examines the potential implications for portfolio assessment in university courses. ChatGPT, which generates written text through statistical computation, has raised concerns about its impact on exam submissions completed outside conventional settings [1]. This study, conducted from December 2022 to December 2023, assesses the viability of ChatGPT and similar language models in portfolio assessment, specifically within ICT and Learning 1, ICT and Learning 2, and the MBA course ORG5005 – Digital Preparedness.
Seven language model systems were evaluated on their ability to generate proficient academic responses demonstrating critical thinking and practical application. Tasks spanned information security, copyright, project work in ICT education, and digital preparedness. Despite the systems' efforts, the results revealed consistent failure on the specified tasks. Students relying solely on ChatGPT or similar systems, without conducting their own scholarly exploration and source analysis, would receive an overall failing grade, as their submissions would lack academic reflection and practical solutions.
While the assessed systems fell short of producing satisfactory academic responses, this study refrains from dismissing the potential for future advancements. The failure of the current models does not rule out the possibility that continued progress in ChatGPT and comparable systems will eventually meet the extensive work requirements characteristic of the ICT and Learning study programmes and ORG5005 – Digital Preparedness.
Keywords
Portfolio Assessment, Continuing Education, Language Models
References
[1] Susnjak, T., 2022. ChatGPT: The End of Online Exam Integrity? Retrieved from: https://www.researchgate.net/publication/366423865_ChatGPT_The_End_of_Online_Exam_Integrity