TY - JOUR
T1 - Comparing the prediction performance of item response theory and machine learning methods on item responses for educational assessments
AU - Park, Jung Yeon
AU - Dedja, Klest
AU - Pliakos, Konstantinos
AU - Kim, Jinho
AU - Joo, Sean
AU - Cornillie, Frederik
AU - Vens, Celine
AU - Van den Noortgate, Wim
N1 - Publisher Copyright:
© 2022, The Psychonomic Society, Inc.
PY - 2023/6
Y1 - 2023/6
N2 - To obtain more accurate and robust feedback information from the students’ assessment outcomes and to communicate it to students and optimize teaching and learning strategies, educational researchers and practitioners must critically reflect on whether the existing methods of data analytics are capable of retrieving the information provided in the database. This study compared and contrasted the prediction performance of an item response theory method, particularly the use of an explanatory item response model (EIRM), and six supervised machine learning (ML) methods for predicting students’ item responses in educational assessments, considering student- and item-related background information. Each of seven prediction methods was evaluated through cross-validation approaches under three prediction scenarios: (a) unrealized responses of new students to existing items, (b) unrealized responses of existing students to new items, and (c) missing responses of existing students to existing items. The results of a simulation study and two real-life assessment data examples showed that employing student- and item-related background information in addition to the item response data substantially increases the prediction accuracy for new students or items. We also found that the EIRM is as competitive as the best performing ML methods in predicting the student performance outcomes for the educational assessment datasets.
KW - Background information
KW - Educational assessment
KW - Explanatory item response model
KW - Item response theory
KW - Machine learning
KW - Prediction performance
UR - http://www.scopus.com/inward/record.url?scp=85134248558&partnerID=8YFLogxK
U2 - 10.3758/s13428-022-01910-8
DO - 10.3758/s13428-022-01910-8
M3 - Article
C2 - 35819719
AN - SCOPUS:85134248558
SN - 1554-351X
VL - 55
SP - 2109
EP - 2124
JO - Behavior Research Methods
JF - Behavior Research Methods
IS - 4
ER -