PhD Candidate
Optimizing Personalized Learning at Scale
Department of Psychology
University of Amsterdam
The biggest threat to learning is not engaging in it. Crucially, sequential errors have been identified as an important cause of quitting educational practice. However, little is known about how students differ in their sensitivity to sequential errors. Here, we investigate the underlying factors. Using intensive longitudinal practice data from over 200,000 primary-school students in a large-scale Online Learning Environment (OLE), we confirm previous findings that sequential errors strongly increase the probability of quitting a learning session. We further find large variability in this effect, ranging from little or no tendency to quit to a high sensitivity to quitting after sequential errors. We validate these results in an independent dataset and show that the individual differences are stable across two arithmetic practice domains. Our results corroborate the theoretical notion that students differ in their tolerance for failure and point to a need to individualize how computer-adaptive systems intervene after errors.
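The core finding above, a quit probability that rises with the length of an error streak at a student-specific rate, can be illustrated with a toy simulation. Everything below (the logistic form, the parameter values, the 70% accuracy rate) is an illustrative assumption, not the authors' model or data:

```python
import numpy as np

rng = np.random.default_rng(42)

def quit_probability(streak, sensitivity, base=-3.0, slope=0.8):
    """Logistic quit probability given the current error-streak length.

    `sensitivity` is the assumed student-specific vulnerability to
    sequential errors; at streak 0 every student quits at the base rate.
    """
    return 1.0 / (1.0 + np.exp(-(base + sensitivity * slope * streak)))

n_students, n_items = 2000, 40
# Individual differences in error sensitivity (assumed distribution)
sensitivity = np.clip(rng.normal(1.0, 0.5, n_students), 0.0, None)

records = []  # (error streak before this item, did the student quit here)
for s in range(n_students):
    streak = 0
    for _ in range(n_items):
        if rng.random() < quit_probability(streak, sensitivity[s]):
            records.append((streak, 1))
            break
        records.append((streak, 0))
        # Assume a 70% chance of answering correctly, which resets the streak
        streak = 0 if rng.random() < 0.7 else streak + 1

streaks = np.array([r[0] for r in records])
quits = np.array([r[1] for r in records])

for k in range(4):
    mask = streaks == k
    print(f"error streak {k}: quit rate = {quits[mask].mean():.3f}")
```

Under these assumptions the empirical quit rate climbs steeply with streak length, and the spread of `sensitivity` reproduces the reported heterogeneity: some simulated students barely react to errors while others quit almost immediately.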
The estimation of student ability is paramount in large-scale personalized learning. To this end, state-of-the-art adaptive learning environments use item response theory (IRT). Previous work in traditional learning assessment has demonstrated that unidimensional IRT models fall short of adequately estimating ability when items on a test are skipped. In this study, we extend this work to online learning platforms. We analyze data from a large-scale online learning platform used to practice Arithmetic and Language. Using the IRTree framework, we compare the unidimensional model of accuracy to a multidimensional model that additionally accounts for the decision to respond to or skip a problem. We find support for problem-skipping as a non-ignorable process: students who were more likely to skip problems were also more likely to respond incorrectly. Further exploration reveals individual differences in the strategies underlying problem-skipping. To ensure that learning-analytic tools are supported by fair measurement models, we suggest several ways to account for problem-skipping when estimating student ability.
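The IRTree structure described above can be sketched as a two-node tree: a first node models the decision to respond or skip, and a second node models accuracy given a response, with the two latent traits allowed to correlate. The simulation below is a hypothetical illustration (the trait correlation, item parameters, and sample sizes are all assumptions, not the authors' fitted estimates); it shows how a positive correlation between the two traits reproduces the reported pattern that frequent skippers err more often when they do respond:

```python
import numpy as np

rng = np.random.default_rng(7)

def p(theta, b):
    """Rasch item response function: P(success) given trait and item parameter."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

n_students, n_items = 3000, 30
# Two latent traits per student: propensity to respond (vs. skip) and accuracy.
# The 0.6 correlation is an assumption that encodes "skipping is non-ignorable".
cov = [[1.0, 0.6], [0.6, 1.0]]
theta = rng.multivariate_normal([0.0, 0.0], cov, n_students)
theta_respond, theta_acc = theta[:, 0], theta[:, 1]

b_respond = rng.normal(-1.0, 0.5, n_items)  # assumed item "skippability"
b_acc = rng.normal(0.0, 0.7, n_items)       # assumed item difficulty

# Node 1: respond or skip. Node 2: correct or incorrect, observed only if responded.
responded = rng.random((n_students, n_items)) < p(theta_respond[:, None], b_respond)
correct = rng.random((n_students, n_items)) < p(theta_acc[:, None], b_acc)

skip_rate = 1.0 - responded.mean(axis=1)
resp_count = responded.sum(axis=1)
err_count = (responded & ~correct).sum(axis=1)
mask = resp_count > 0  # error rate is only defined for students who responded
error_given_response = err_count[mask] / resp_count[mask]

r = np.corrcoef(skip_rate[mask], error_given_response)[0, 1]
print(f"corr(skip rate, error rate | response) = {r:.2f}")
```

Because the unidimensional accuracy model sees only the second node, it treats skips as missing at random; the positive correlation printed here is exactly the kind of dependence that makes skipping non-ignorable and biases the resulting ability estimates.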