Abstract
Sleep staging is critical for assessing sleep quality and diagnosing sleep disorders. Although significant progress has been made in sleep staging research, representing prominent waveforms and capturing dynamic transitions between sleep stages remain challenging. To address these issues, we propose MCTSleepNet, a sleep staging network based on single-channel electroencephalography (EEG) that comprises Multiscale waveform representation, Composite attention, and Time-dependency learning modules. First, a multiscale waveform representation is learned from the EEG signals using a dual-scale convolutional neural network (CNN). Then, a composite attention module enhances the signal feature representation by considering both local and global contextual dependencies, thereby capturing prominent waveform features more effectively. Finally, a bidirectional gated recurrent unit (Bi-GRU) learns the time-dependent features between EEG signals, enabling MCTSleepNet to model dynamic transitions between different sleep stages. Furthermore, to address the class imbalance between sleep stages, this paper introduces an adaptive cross-entropy polynomial loss function that adjusts the weights of the different classes, thereby increasing the model's attention to minority classes. Evaluation results on the publicly available Sleep-EDF-20 and Sleep-EDF-78 datasets demonstrate that MCTSleepNet performs exceptionally well on the sleep staging task.