Integrating Reinforcement Learning with Dynamic Knowledge Tracing for Personalized Learning Path Optimization



Abstract

Personalized learning systems aim to improve student engagement and outcomes by adapting to individual learning needs. Traditional models, however, struggle with the dynamic nature of student learning and task sequencing. Knowledge Tracing (KT) is foundational for predicting student performance, but existing approaches lack the flexibility to account for evolving student knowledge. We propose RL-DKT, a novel framework that integrates Dynamic Knowledge Tracing (DKT) with Reinforcement Learning (RL). While DKT tracks the temporal evolution of a student's knowledge state, RL dynamically selects tasks based on individual performance, optimizing the learning path. The RL agent adapts task difficulty in real time to maximize retention and engagement. We evaluate the RL-DKT framework on three real-world educational datasets: ASSISTments, KDD Cup 2010, and Cognitive Tutor. These datasets represent diverse learning environments and provide insights into student performance and task complexity. We compare RL-DKT against traditional KT models, including Bayesian Knowledge Tracing (BKT) and DKT. The results show that RL-DKT outperforms conventional KT models across several metrics, including prediction accuracy, task completion time, student engagement, and learning path optimization. Specifically, RL-DKT improves task completion time by 12.5%, reduces dropout rates by 50%, and improves prediction accuracy by 7.6% relative to baseline models.
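The abstract does not specify the framework at the code level. As a rough illustration only, the interplay it describes between a knowledge-tracing update and an RL task selector might be sketched as below; the class, method, and parameter names here are hypothetical and do not come from the paper.

```python
import random

class RLDKTSketch:
    """Hypothetical sketch of an RL-DKT-style loop: a per-skill mastery
    estimate (the tracing component) plus an epsilon-greedy agent that
    selects the task whose estimated success probability is closest to
    a target difficulty."""

    def __init__(self, n_skills, lr=0.3, target=0.6, epsilon=0.1, seed=0):
        self.mastery = [0.5] * n_skills   # per-skill mastery estimate in [0, 1]
        self.lr = lr                      # update rate for the tracing step
        self.target = target              # desired success probability
        self.epsilon = epsilon            # exploration rate for the agent
        self.rng = random.Random(seed)

    def select_task(self):
        # Explore occasionally; otherwise exploit by choosing the skill
        # whose mastery estimate is closest to the target difficulty.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.mastery))
        return min(range(len(self.mastery)),
                   key=lambda s: abs(self.mastery[s] - self.target))

    def update(self, skill, correct):
        # Tracing step: nudge the mastery estimate toward the outcome.
        outcome = 1.0 if correct else 0.0
        self.mastery[skill] += self.lr * (outcome - self.mastery[skill])
```

In the paper's framework the tracing step is a learned DKT model rather than this exponential-moving-average stand-in, but the control flow is the same: observe a response, update the knowledge state, then pick the next task from that state.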
