The Future of Education

Edition 14

Accepted Abstracts

Does ChatGPT Make the Grade?

Jude Brady, Cambridge University Press & Assessment (United Kingdom)

Alison Rodrigues, Cambridge University Press & Assessment (United Kingdom)

Martina Kuvalja, Cambridge University Press & Assessment (United Kingdom)

Sarah Hughes, Cambridge University Press & Assessment (United Kingdom)


Since the release of ChatGPT 3.5 into the public domain, there has been much interest in the capabilities of generative artificial intelligence (AI) in an assessment context (Chambers, 2022; Aloisi, 2023; Bowman, 2022). Our study builds on existing research to explore the risks and opportunities that AI presents for essay-based coursework assessments (Christodoulou, 2023; Dale, 2021; Pavlik, 2023). Our research is novel in that it explores how students employ the technology to assist with their coursework assessments. Three students aged 18–22 were each tasked with writing two essays for a coursework component of a Cambridge International General Certificate of Secondary Education (IGCSE) qualification. The students were given access to ChatGPT Plus, which provided access to versions 3.5 and 4. After writing the essays, the students participated in semi-structured interviews about their experiences of using the technology. Researchers also examined the transcripts of the dialogue between the students and the AI to gain further insight into the students’ use of the technology. Did they, for example, copy and paste information, cross-check ChatGPT’s claims against other sources, use the ChatGPT outputs as a ‘springboard’ for their own ideas, or simply reword the AI responses? The six ChatGPT-assisted essays were seeded into a marking batch (n = 170) for experienced examiners of the assessment. The marks awarded to the ChatGPT-informed essays were compared with those awarded to essays authored by candidates working without generative AI assistance. The findings provide information about how ChatGPT-informed essays perform in an assessment context. Additionally, the research contributes to an understanding of the strengths and limitations of this technology and, crucially, can inform guidance on how to use and reference generative AI appropriately.
Finally, our research informs a wider question about the future of assessment: whether, given the new capabilities of AI, our assessment objectives remain robust and relevant to today’s learners.



Generative AI, assessment, coursework, ChatGPT, OpenAI, IGCSE



[1] Aloisi, C. (2023). The future of standardised assessment: Validity and trust in algorithms for assessment and scoring. European Journal of Education, 58(1), 98–110.

[2] Bowman, E. (2022, December 19). A new AI chatbot might do your homework for you. But it’s still not an A+ student. NPR.

[3] Chambers, A. (2022, December 11). Can Artificial Intelligence (Chat GPT) get a 7 on an SL Maths paper? IB Maths Resources from Intermathematics.

[4] Christodoulou, D. (2023, February 9). How good is ChatGPT at writing essays? Some data! Medium.

[5] Dale, R. (2021). GPT-3: What’s it good for? Natural Language Engineering, 27(1), 113–118.

[6] Pavlik, J. V. (2023). Collaborating with ChatGPT: Considering the implications of generative artificial intelligence for journalism and media education. Journalism and Mass Communication Educator, 78(1), 84–93.

