Comparing Student Models in Different Formalisms by Predicting their Impact on Help Success

Document record

Date

July 9, 2013

Relations

This document is related to:
info:eu-repo/semantics/altIdentifier/doi/10.1007/978-3-642-39112-5_17

Collection

Open archives (Archives ouvertes)




Cite this document

Sébastien Lalle et al., "Comparing Student Models in Different Formalisms by Predicting their Impact on Help Success", HAL-SHS : sciences de l'éducation, ID: 10.1007/978-3-642-39112-5_17





Abstract (English)

We describe a method to evaluate how student models affect the decision quality of an intelligent tutoring system (ITS), which is their raison d'être. Given logs of randomized tutorial decisions and ensuing student performance, we train a classifier to predict tutor decision outcomes (success or failure) from situation features, such as the student and the task. We define a decision policy that selects whichever tutor action the trained classifier predicts is likeliest to lead to a successful outcome in the current situation. The ideal but costly way to evaluate such a policy is to implement it in the tutor and collect new data, which may require months of tutor use by hundreds of students. Instead, we use historical data to simulate a policy by extrapolating its effects from the subset of randomized decisions that happened to follow the policy. We then compare policies based on alternative student models by their simulated impact on the success rate of tutorial decisions. We test the method on data logged by Project LISTEN's Reading Tutor, which chooses randomly which type of help to give on a word. We report the cross-validated accuracy of predictions based on four types of student models, and compare the resulting policies' expected success and coverage. The method provides a utility-relevant metric for comparing student models expressed in different formalisms.
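
To make the evaluation loop concrete, here is a minimal sketch in Python, not the authors' code. It assumes logged randomized decisions in a pandas DataFrame with hypothetical columns student and task (situation features), help_type (the randomly chosen tutor action), and success (the observed outcome, 0/1); RandomForestClassifier stands in for whichever classifier the authors actually trained.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import OneHotEncoder

# Hypothetical column names; real features would come from the student model.
FEATURES = ["student", "task"]   # situation features
ACTION = "help_type"             # the randomly chosen type of help
OUTCOME = "success"              # 1 if the tutorial decision succeeded, else 0

def train_outcome_model(logs: pd.DataFrame):
    """Train a classifier to predict the outcome from (situation, action)."""
    enc = OneHotEncoder(handle_unknown="ignore")
    X = enc.fit_transform(logs[FEATURES + [ACTION]])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, logs[OUTCOME])
    return enc, clf

def policy_action(enc, clf, situation, actions):
    """Select the action the classifier predicts is likeliest to succeed."""
    rows = pd.DataFrame([{**{f: situation[f] for f in FEATURES}, ACTION: a}
                         for a in actions])
    p_success = clf.predict_proba(enc.transform(rows))[:, 1]  # P(success)
    return actions[int(np.argmax(p_success))]

def simulate_policy(logs: pd.DataFrame, enc, clf):
    """Extrapolate the policy's success rate from the randomized log by
    keeping only the logged decisions that happen to match the policy."""
    actions = sorted(logs[ACTION].unique())
    chosen = logs.apply(lambda row: policy_action(enc, clf, row, actions),
                        axis=1)
    followed = logs[chosen == logs[ACTION]]
    success_rate = followed[OUTCOME].mean()     # simulated policy success
    coverage = len(followed) / len(logs)        # fraction of log reused
    return success_rate, coverage

# Usage: rate, cov = simulate_policy(logs, *train_outcome_model(logs))
```

Because the logged help type was chosen at random, the subset of decisions that agrees with the policy behaves like a sample of the policy's own behavior, so averaging its outcomes estimates the policy's success rate; coverage here is simply the fraction of the log that subset reuses. The sketch trains and simulates on the same log for brevity, whereas the abstract reports cross-validated prediction accuracy.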

