Abstract
Lecture capture (LC) systems offer students flexible review of lecture content, but evidence of their impact on learning outcomes remains mixed. LC engagement and exam performance were analyzed in three in-person courses with LC videos posted for review, each with three lecture blocks and three independent, noncumulative exams. Zoom analytics and exam grade data were collected for 299 students across 982 noncumulative exam observations. Four LC metrics were derived per exam: total view duration, number of lectures viewed, number of unique views, and days between access and exam. Average exam scores were compared between LC viewers (n = 216) and nonviewers (n = 83): viewers scored significantly higher (66.1% vs. 59.4%). A linear mixed-effects model with student-level random intercepts showed opposing effects of total viewing time (+1.74% per hour) and number of lectures viewed (-1.92% per lecture), suggesting that average LC view duration per lecture (total minutes watched ÷ lectures viewed) was the strongest predictor of exam score. A post hoc median split of average LC view duration per lecture indicated that students above the median scored 8.02% higher. Decomposition of total LC view time revealed a between-student effect on exam grade (+2.52% per hour) and a within-student effect (-0.84% per hour), showing that spikes above a student's own average view time are associated with lower exam grades. These findings align with self-regulated learning theory, indicating that while greater LC viewing time generally benefits performance, its impact depends on strategic, habitual engagement rather than episodic cramming.
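The between-/within-student decomposition reported above corresponds to person-mean centering of the viewing-time predictor before it enters the mixed model. A minimal sketch of that centering step, using hypothetical column names and toy data (the actual dataset and variable names are not given in the abstract):

```python
import pandas as pd
import numpy as np

# Hypothetical long-format data: one row per (student, exam) observation.
df = pd.DataFrame({
    "student": ["A", "A", "A", "B", "B", "B"],
    "view_hours": [1.0, 2.0, 3.0, 0.5, 0.5, 2.0],
})

# Between-student component: each student's own mean viewing time,
# capturing stable habits that differ across students.
df["hours_between"] = df.groupby("student")["view_hours"].transform("mean")

# Within-student component: each exam's deviation from that student's
# mean (a positive value is a "spike" above the student's usual habit).
df["hours_within"] = df["view_hours"] - df["hours_between"]

print(df)
```

In the mixed model, `hours_between` and `hours_within` would then replace the raw `view_hours` predictor, yielding the two separate slopes (+2.52%/hour and -0.84%/hour) reported in the abstract.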