MultiFAR: Multidimensional information fusion with attention-driven representation learning for student performance prediction


Abstract

Advances in computing technology, online learning platforms, and pedagogical tools enable educators and learners to connect without temporal or geographical boundaries. Existing deep learning models for predicting student performance are either simple recurrent neural networks or artificial neural networks employing demographic and hand-crafted features. This manuscript proposes MultiFAR, a model that fuses multi-dimensional information representing different aspects of student behavior with an attention-driven deep learning architecture integrating bidirectional long short-term memory (BiLSTM) and convolutional networks to learn student representations efficiently. MultiFAR employs demographic, assessment, and VLE-interaction data to capture different aspects of student behavior from multifaceted sources. BiLSTM layers process and capture patterns from the demographic, assessment, and interaction information, while a convolutional operation is applied to the daily VLE-interaction records. An attention mechanism assigns higher weights to significant features. Empirical evaluation on the Open University Learning Analytics (OULA) dataset establishes the efficacy of MultiFAR against state-of-the-art approaches and baseline methods: considering accuracy, MultiFAR reports results from 80.31% to 97.12% over four different student performance prediction problems. An ablation analysis reveals that diurnal interaction has the highest impact on MultiFAR's accuracy, whereas demographic attributes have the least. We also extend MultiFAR to predict at-risk and high-performing students early, and we evaluate the model over a balanced dataset and in a multiclass scenario.
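The attention mechanism the abstract describes, assigning higher weights to significant features before fusion, can be sketched as follows. This is a minimal, hypothetical illustration in NumPy: the hidden states standing in for BiLSTM outputs, the score vector, and all shapes are assumptions for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(hidden_states, w):
    """Weight each time step of a sequence and pool into one vector.

    hidden_states: (T, d) sequence outputs (stand-in for BiLSTM states)
    w: (d,) learned scoring vector (hypothetical parameter)
    Returns the (d,) context vector and the (T,) attention weights.
    """
    scores = hidden_states @ w            # (T,) unnormalized relevance scores
    alpha = softmax(scores)               # attention weights, sum to 1
    context = alpha @ hidden_states       # weighted sum over time steps
    return context, alpha

T, d = 5, 8                               # assumed sequence length and width
H = rng.normal(size=(T, d))               # simulated BiLSTM hidden states
w = rng.normal(size=d)
context, alpha = attention_fuse(H, w)
print(context.shape, round(float(alpha.sum()), 6))
```

In a full model along these lines, `w` would be trained jointly with the BiLSTM and convolutional layers, so that time steps (or features) more predictive of the performance label receive larger weights.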
