Abstract
Personalized learning systems aim to improve student engagement and outcomes by adapting to individual learning needs. Traditional models, however, struggle with the dynamic nature of student learning and task sequencing. Knowledge Tracing (KT) is foundational for predicting student performance, but existing approaches lack the flexibility to account for evolving student knowledge. We propose RL-DKT, a novel framework that integrates Dynamic Knowledge Tracing (DKT) with Reinforcement Learning (RL): DKT tracks the temporal evolution of a student's knowledge state, while an RL agent selects tasks based on individual performance, optimizing the learning path. The agent adapts task difficulty in real time to promote retention and engagement. We evaluate RL-DKT on three real-world educational datasets: ASSISTments, KDD Cup 2010, and Cognitive Tutor. These datasets span diverse learning environments and capture both student performance and task complexity. Our experiments compare RL-DKT with traditional KT models, including Bayesian Knowledge Tracing (BKT) and DKT. RL-DKT outperforms these baselines across several metrics: prediction accuracy, task completion time, student engagement, and learning-path optimization. Specifically, RL-DKT improves task completion time by 12.5%, reduces dropout rates by 50%, and increases prediction accuracy by 7.6% relative to the baselines.
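The interaction described above, a knowledge tracker feeding state estimates to an RL agent that picks the next task, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: `KnowledgeTracker` is a toy stand-in for the DKT network (a real DKT uses a recurrent model), and the epsilon-greedy `TaskSelector`, the learning-gain reward, the simulated student, and all parameter values are assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

class KnowledgeTracker:
    """Toy stand-in for DKT: exponential-moving-average mastery per skill."""
    def __init__(self, n_skills, lr=0.3):
        self.mastery = np.full(n_skills, 0.5)  # estimated P(correct) per skill
        self.lr = lr

    def update(self, skill, correct):
        # Move the estimate toward the observed outcome (0.0 or 1.0).
        self.mastery[skill] += self.lr * (correct - self.mastery[skill])

    def predict(self, skill):
        return self.mastery[skill]

class TaskSelector:
    """Epsilon-greedy bandit: reward is the learning gain on the chosen skill."""
    def __init__(self, n_skills, eps=0.1, lr=0.2):
        self.q = np.zeros(n_skills)  # estimated learning gain per skill
        self.eps, self.lr = eps, lr

    def choose(self):
        if rng.random() < self.eps:
            return int(rng.integers(len(self.q)))  # explore
        return int(np.argmax(self.q))              # exploit

    def learn(self, skill, reward):
        self.q[skill] += self.lr * (reward - self.q[skill])

# Simulated student: true mastery of a skill grows slowly with practice.
n_skills = 5
true_mastery = np.full(n_skills, 0.2)
tracker, selector = KnowledgeTracker(n_skills), TaskSelector(n_skills)

for step in range(200):
    skill = selector.choose()                       # RL agent picks a task
    correct = float(rng.random() < true_mastery[skill])
    before = tracker.predict(skill)
    tracker.update(skill, correct)                  # tracker updates state
    reward = tracker.predict(skill) - before        # learning gain as reward
    selector.learn(skill, reward)
    true_mastery[skill] = min(0.95, true_mastery[skill] + 0.01)
```

The key design choice mirrored here is the coupling: the tracker's state change, not raw correctness, is the reward signal, so the agent is driven toward tasks that move the student's estimated knowledge the most.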