MentalRoBERTa-Caps: A capsule-enhanced transformer model for mental health classification


Abstract

In recent years, the dominance of pre-trained transformer language models such as BERT and RoBERTa has led to remarkable improvements in NLP tasks, including mental illness detection from social media text. However, these models are often computationally intensive, requiring significant processing time and resources, which limits their applicability in real-time or resource-constrained environments. This paper proposes a lightweight yet effective hybrid model that integrates a 6-layer RoBERTa encoder with a capsule network architecture to balance performance, interpretability, and computational efficiency. The contextual embeddings generated by RoBERTa are transformed into primary capsules, and dynamic routing is employed to generate class capsule outputs that capture high-level abstractions. To validate performance and explainability, we employ LIME (Local Interpretable Model-Agnostic Explanations) to provide insights into feature contributions and model decisions. Experimental results on benchmark mental health datasets demonstrate that our approach achieves high accuracy while significantly reducing inference time, making it suitable for deployment in real-world mental health monitoring systems.

The main objectives of this work are:

1. To design a computationally efficient architecture for mental illness detection using a lightweight RoBERTa encoder integrated with capsule networks.
2. To perform a detailed time complexity analysis highlighting the trade-offs between performance and efficiency.
3. To enhance model interpretability through LIME-based feature attribution, supporting transparent and explainable predictions.
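
The abstract describes the core pipeline: a truncated RoBERTa encoder whose contextual embeddings are projected into primary capsules, followed by dynamic routing that produces one class capsule per label. As a rough illustration only, the PyTorch sketch below shows one way such a model could be wired together. The base checkpoint (roberta-base), the capsule counts and dimensions, the number of routing iterations, and the number of classes are assumptions for the sake of the example, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import RobertaModel

def squash(s, dim=-1, eps=1e-8):
    # Capsule squashing non-linearity: keeps the vector's direction,
    # maps its length into [0, 1).
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

class MentalRoBERTaCaps(nn.Module):
    # Hypothetical sketch: truncated RoBERTa encoder -> primary capsules
    # -> dynamic routing -> class capsules.
    def __init__(self, num_classes=4, num_primary=32, primary_dim=8,
                 class_dim=16, routing_iters=3):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained("roberta-base")
        # Keep only the first 6 transformer layers (assumed "lightweight" setup).
        self.encoder.encoder.layer = self.encoder.encoder.layer[:6]
        hidden = self.encoder.config.hidden_size  # 768 for roberta-base
        self.num_primary, self.primary_dim = num_primary, primary_dim
        self.to_primary = nn.Linear(hidden, num_primary * primary_dim)
        # W[i, j]: transformation from primary capsule i to class capsule j.
        self.W = nn.Parameter(0.01 * torch.randn(1, num_primary, num_classes,
                                                 class_dim, primary_dim))
        self.routing_iters = routing_iters

    def forward(self, input_ids, attention_mask):
        # Use the <s> (CLS-equivalent) token embedding from the truncated encoder.
        h = self.encoder(input_ids,
                         attention_mask=attention_mask).last_hidden_state[:, 0]
        u = squash(self.to_primary(h).view(-1, self.num_primary, self.primary_dim))
        # Prediction vectors u_hat_{j|i} = W_ij u_i, shape (B, P, C, class_dim).
        u_hat = (self.W @ u.unsqueeze(2).unsqueeze(-1)).squeeze(-1)
        # Dynamic routing-by-agreement (Sabour et al., 2017).
        b = torch.zeros(u_hat.shape[:3], device=u_hat.device)   # (B, P, C)
        for _ in range(self.routing_iters):
            c = F.softmax(b, dim=-1)                             # coupling coefficients
            s = (c.unsqueeze(-1) * u_hat).sum(dim=1)             # (B, C, class_dim)
            v = squash(s)                                        # class capsules
            b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)         # agreement update
        return v.norm(dim=-1)  # capsule lengths act as class scores
```

In a sketch like this, the class-capsule lengths returned by forward can be trained with a margin or cross-entropy loss, and the same forward pass, wrapped in a function that returns class probabilities for raw text, is what a LIME text explainer would perturb to attribute predictions to input tokens.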
