New Perspectives in Science Education

Edition 13

Accepted Abstracts

Static vs. Interactive: Which items are more difficult for majority and minority students?

Nani Teig, Department of Teacher Education and School Research University of Oslo (Norway)


A growing number of computer-based assessments have taken advantage of technological innovation by adopting new formats that include interactive and dynamic materials. Through interactive items that simulate real-world problems, technology-based science tests offer a potential advantage in assessing complex skills in science that are difficult or even impossible to measure with traditional static items. Although interactive items appear to assess students’ complex reasoning skills more broadly than static items, previous research on the effect of item interactivity on student performance has been mixed: some researchers found that students using an interactive system performed significantly better than those using static systems, while others found the opposite. Rather than assuming that interactive items are easier than static items, or vice versa, more research is needed to disentangle the effect of item interactivity on item difficulty, particularly by taking into account item characteristics such as response format, cognitive demand, and the type of knowledge and competency required to solve the item.
Additionally, students’ language background could play a crucial role in explaining item difficulty. Students whose first language differs from the language of assessment face severe challenges related to reading and writing load. Interactive items that rely on simulated phenomena rather than text might reduce the reading load, making them easier for minority-language students. Nevertheless, little research has examined how item interactivity (static vs. interactive) and students’ language group (majority vs. minority) jointly affect item difficulty, particularly in computer-based science assessments intended to tap complex reasoning skills. This project therefore focuses on the effect of item interactivity on item difficulty using large-scale student data from the Norwegian PISA 2015 sample and explanatory item response theory methods. Whether item interactivity and response format significantly reduce the performance gap between majority- and minority-language students is also examined.
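The explanatory item response theory approach mentioned above can be sketched as a logistic model in which item covariates (e.g. interactivity), person covariates (e.g. language group), and their interaction predict the log-odds of a correct response. The following is a minimal, hypothetical simulation in Python: the variable names, effect sizes, and the simple fixed-effects gradient-descent fit are illustrative assumptions, not the actual PISA 2015 analysis.

```python
import numpy as np

# Hypothetical data-generating model: interactive items are easier overall,
# and the benefit is larger for minority-language students (reduced reading
# load). All effect sizes here are invented for illustration.
rng = np.random.default_rng(0)
n_students, n_items = 500, 20

minority = rng.integers(0, 2, n_students)        # 1 = minority-language student
interactive = rng.integers(0, 2, n_items)        # 1 = interactive item
theta = rng.normal(0.0, 1.0, n_students)         # latent ability

logit = (theta[:, None]
         - (0.5 - 0.8 * interactive)[None, :]    # interactive items easier
         - 0.3 * minority[:, None]               # overall group gap
         + 0.4 * (minority[:, None] * interactive[None, :]))  # interaction
y = (rng.random((n_students, n_items)) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Flatten to one row per (student, item) response and fit a fixed-effects
# logistic regression: intercept, interactivity, language group, interaction.
# Ability is left unmodelled in this sketch, which attenuates but does not
# reverse the coefficient estimates.
M = minority[:, None] * np.ones((1, n_items))
I = np.ones((n_students, 1)) * interactive[None, :]
X = np.column_stack([np.ones(n_students * n_items),
                     I.ravel(), M.ravel(), (M * I).ravel()])
t = y.ravel()

w = np.zeros(4)
for _ in range(3000):                            # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - t) / t.size

print("interactivity effect:", round(w[1], 2))
print("minority x interactive interaction:", round(w[3], 2))
```

In a positive interactivity coefficient and a positive interaction term, the model would recover exactly the pattern the project hypothesizes: interactive items are easier on average, and disproportionately so for minority-language students. The actual study would use proper explanatory IRT estimation (e.g. a linear logistic test model with random person effects) rather than this fixed-effects shortcut.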
