LyricEmotionNet for robust emotion recognition with hybrid CapsNet-memory network architecture


Abstract

With the rapid development of music streaming platforms, accurately understanding the emotions expressed in lyrics has become crucial for personalized services in music recommendation systems. However, existing methods show significant limitations in capturing local emotional features and long-range dependencies, and perform especially poorly when song information is incomplete. This paper proposes LyricEmotionNet, a hybrid deep learning architecture based on CapsNet and Memory Networks, to address the challenges of local feature extraction and long-range dependency modeling in lyric emotion analysis. The model uses CapsNet to capture local emotional features precisely and Memory Networks to model long-sequence emotional dependencies, reaching a classification accuracy of 94.29% on a dataset of 660 songs spanning six emotion categories. Moreover, the model maintains 90.20% accuracy under missing-data conditions, significantly outperforming existing methods. Through systematic comparative experiments and ablation studies, we validate the model's advantages in accuracy and robustness. The findings provide new technical insights for music emotion analysis and personalized recommendation systems, and offer a useful reference for studies dealing with incomplete textual information.
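The abstract does not include implementation details, but the local-feature component it names, CapsNet, is built around the "squash" nonlinearity from Sabour et al.'s capsule network work, which rescales a capsule's activation vector so that its length can be read as the probability that the detected feature (here, a local emotional cue) is present. The sketch below is an illustrative pure-Python implementation of that standard squash function, not code from the paper:

```python
import math

def squash(v):
    """CapsNet 'squash' nonlinearity: v * ||v||^2 / ((1 + ||v||^2) * ||v||).
    Shrinks short vectors toward zero and maps long vectors to length
    just under 1, so a capsule's output length acts as a probability."""
    norm_sq = sum(x * x for x in v)
    if norm_sq == 0.0:
        return [0.0] * len(v)
    scale = norm_sq / (1.0 + norm_sq) / math.sqrt(norm_sq)
    return [scale * x for x in v]

# A strong activation (length 5) is squashed to length 25/26 ~ 0.96;
# a weak activation (length 0.1) is suppressed to length ~ 0.0099.
strong = squash([3.0, 4.0])
weak = squash([0.1, 0.0])
```

In a CapsNet-style lyric model, each capsule would encode one local emotional pattern, and dynamic routing between capsule layers would decide how those local cues combine, while the Memory Network side handles dependencies that span the whole lyric.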
