SleepMFormer: An Efficient Attention Framework with Contrastive Learning for Single-Channel EEG Sleep Staging


Abstract

BACKGROUND/OBJECTIVES: Sleep stage classification is crucial for assessing sleep quality and diagnosing related disorders, and electroencephalography (EEG) is currently recognized as a primary modality for it. High-performance automatic sleep staging methods based on EEG leverage the strong contextual modeling capability of Transformer encoder architectures. However, the global self-attention mechanism in Transformers incurs significant computational overhead, substantially limiting the training and inference efficiency of automatic sleep staging algorithms. METHODS: To address this issue, we introduce SleepMFormer, an end-to-end framework for automatic sleep stage classification from single-channel EEG. At the algorithmic level, SleepMFormer adopts a task-driven simplification of the Transformer encoder to improve attention efficiency while preserving sequence modeling capability. At the training level, supervised contrastive learning is incorporated as an auxiliary strategy to enhance representation robustness. From an engineering perspective, these design choices enable efficient training and inference in resource-constrained settings. RESULTS: When integrated with the SleePyCo backbone, the proposed framework achieves competitive performance on three widely used public datasets: Sleep-EDF, PhysioNet, and SHHS. Notably, SleepMFormer reduces training and inference time by up to 33% compared with conventional self-attention-based models. To further validate the generalizability of MaxFormer, we conduct additional experiments using DeepSleepNet and TinySleepNet as alternative feature extractors; the results show that MaxFormer maintains consistent performance across these architectures. CONCLUSIONS: Overall, SleepMFormer provides an efficient and practical framework for automatic sleep staging, with strong potential for clinical application.
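The abstract names supervised contrastive learning as the auxiliary training strategy but does not spell out its form. As a reference point only, the standard supervised contrastive (SupCon) loss of Khosla et al. (2020), which SleepMFormer's auxiliary objective is presumably a variant of, can be sketched in NumPy as follows; the function name, temperature value, and batch shapes are illustrative assumptions, not details from the paper.

```python
import numpy as np

def supcon_loss(embeddings, labels, temperature=0.1):
    """Minimal sketch of the supervised contrastive (SupCon) loss.

    For each anchor, samples sharing its label are positives and all
    other samples in the batch are negatives. Embeddings are
    L2-normalized so similarities are cosine similarities.
    """
    # Normalize rows to the unit sphere, then scale similarities by temperature.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature

    n = len(labels)
    logits_mask = ~np.eye(n, dtype=bool)                  # exclude self-similarity
    pos_mask = (labels[:, None] == labels[None, :]) & logits_mask

    # Numerically stable log-softmax over all non-self samples.
    sim_max = np.max(np.where(logits_mask, sim, -np.inf), axis=1, keepdims=True)
    exp_sim = np.exp(sim - sim_max) * logits_mask
    log_prob = sim - sim_max - np.log(exp_sim.sum(axis=1, keepdims=True))

    # Average log-probability over positives, for anchors that have any.
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0
    mean_log_prob_pos = (pos_mask * log_prob).sum(axis=1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()

# Toy check: a batch whose same-class embeddings cluster together should
# incur lower loss than one whose positives point in opposite directions.
labels = np.array([0, 0, 1, 1])
tight = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
loose = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
print(supcon_loss(tight, labels) < supcon_loss(loose, labels))
```

In the training setup the abstract describes, such a loss would be computed on features from the backbone (e.g. SleePyCo) and added to the cross-entropy staging loss with a weighting coefficient; the exact combination used in SleepMFormer is not specified here.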
