PhD Candidate
Optimizing Personalized Learning at Scale
Department of Psychology
University of Amsterdam
When students make errors, it is not always because they genuinely do not know the answer. Observed errors may reflect a heterogeneous mixture of cognitive influences (procedural mistakes, incorrect memory retrieval, or strategic behavior) and non-cognitive ones (inattention, demotivation, or disengagement). At the level of specific cognitive mechanisms, errors can stem from one of several misconceptions: answering 18 to the multiplication item 9×9 may result from mistakenly adding the operands (9 + 9 = 18), inverting the digits of the correct answer (81 → 18), or incorrectly retrieving the answer to a different multiplication item. Across several projects, we aim to disentangle these types of errors and their consequences for learning and motivation.
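The 9×9 example can be made concrete with a small sketch that checks which misconceptions a wrong answer is consistent with. The rule names and coding scheme below are illustrative assumptions of mine, not the project's actual error taxonomy:

```python
# Illustrative sketch (not the project's actual coding scheme): label a
# wrong answer to a*b with the misconception(s) it is consistent with.

def misconception_labels(a: int, b: int, answer: int) -> list[str]:
    """Return misconception labels consistent with `answer` for item a*b."""
    correct = a * b
    if answer == correct:
        return []  # not an error
    labels = []
    if answer == a + b:
        labels.append("added operands")      # e.g. 9*9 -> 18 via 9+9
    if answer == int(str(correct)[::-1]):
        labels.append("inverted digits")     # e.g. 9*9 -> 18 via 81 reversed
    # Retrieval of a neighboring table fact, e.g. 8*9 instead of 9*9.
    neighbors = {(a + d) * b for d in (-1, 1)} | {a * (b + d) for d in (-1, 1)}
    if answer in neighbors:
        labels.append("neighbor retrieval")  # e.g. 9*9 -> 72 (8*9)
    return labels or ["unclassified"]

print(misconception_labels(9, 9, 18))  # ['added operands', 'inverted digits']
```

Note that the answer 18 matches two rules at once, which is exactly the identifiability problem the paragraph above describes: a single observed error can be consistent with several distinct misconceptions.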
Interested in collaborating on this project? Get in touch.
The biggest threat to learning is not engaging in it at all. Crucially, sequential errors have been found to be an important cause of quitting educational practice. However, little is known about how much students differ in their response to sequential errors. Using intensive longitudinal practice data from over 200,000 primary-school students in a large-scale online learning environment, we confirm that sequential errors strongly increase the probability of quitting. We find large variability in this effect, ranging from no sensitivity to a very high propensity to quit after sequential errors, and show that these individual differences are stable across two arithmetic practice domains. Our results corroborate the theoretical notion that students differ in their tolerance for failure and pinpoint a need to individualize how computer-adaptive systems intervene after errors.
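The individual differences described above can be pictured with a toy logistic model in which the probability of quitting grows with the current run of consecutive errors at a student-specific rate. The function and parameter values below are my own illustrative assumptions, not the study's fitted model:

```python
# Toy sketch (not the study's actual model): quit probability as a
# logistic function of the current streak of consecutive errors, with a
# student-specific sensitivity slope.
import math

def p_quit(consecutive_errors: int, baseline: float = -4.0,
           sensitivity: float = 1.0) -> float:
    """`sensitivity` captures the individual difference: 0 means the
    student is unaffected by error streaks; large values mean the
    student quits quickly after a few errors in a row."""
    z = baseline + sensitivity * consecutive_errors
    return 1.0 / (1.0 + math.exp(-z))

# An insensitive vs. a highly sensitive student after 3 errors in a row:
print(round(p_quit(3, sensitivity=0.0), 3))  # 0.018 (stays at baseline)
print(round(p_quit(3, sensitivity=2.0), 3))  # 0.881
```

In this framing, individualizing how a system intervenes after errors amounts to estimating each student's sensitivity slope and acting earlier for students with steep ones.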
Read more →

The estimation of student ability is paramount in large-scale personalized learning. State-of-the-art adaptive learning environments use item response theory (IRT), but standard models assume that skipped problems are missing at random. Using data from a large-scale online learning platform, we apply the IRTree framework to model the decision to skip as a latent process separate from accuracy. We find that problem-skipping is non-ignorable: students who skipped more often were also more likely to respond incorrectly, yet skipping and accuracy are distinct traits. Ignoring problem-skipping leads to biased ability estimates, particularly for students who skip frequently. We discuss several ways to account for this process to ensure fair measurement in adaptive systems.
Read more →
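The IRTree idea in the project above — treating the decision to skip as its own latent process — rests on a recoding step in which each observed response is split into pseudo-items, one per node of the response tree. This is a minimal sketch under my own illustrative encoding, not the paper's exact specification:

```python
# Illustrative IRTree recoding (my encoding, not the paper's exact one):
# node 1 records whether the problem was skipped; node 2 records accuracy
# and is only observed when the student actually attempted the problem.
from typing import Optional, Tuple

def irtree_recode(response: Optional[str]) -> Tuple[int, Optional[int]]:
    """Map one observed response to (skip node, accuracy node).
    `response` is None if the problem was skipped, otherwise
    'correct' or 'incorrect'."""
    if response is None:
        return (1, None)  # skipped; accuracy node is unobserved
    return (0, 1 if response == "correct" else 0)

print(irtree_recode(None))         # (1, None)
print(irtree_recode("correct"))    # (0, 1)
print(irtree_recode("incorrect"))  # (0, 0)
```

Each pseudo-item column can then be given its own IRT model, so that the skip tendency and accuracy are estimated as separate but correlated latent traits rather than folding skips into the accuracy data.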