Abstract
Background/Objectives: Accurate sleep stage classification is essential for evaluating sleep quality and diagnosing sleep disorders. Despite recent advances in deep learning, existing models inadequately represent complex brain dynamics, particularly the time-lag effects inherent in neural signal propagation and regional variations in cortical activation patterns. Methods: We propose MFST-GCN, a graph-based deep learning framework that models these neurobiological phenomena through three complementary modules. The Dynamic Dual-Scale Functional Connectivity Modeling (DDFCM) module constructs time-varying adjacency matrices using Pearson correlation across 1 s and 5 s windows, capturing both transient signal transmission and sustained connectivity states. This dual-scale approach reflects the biological reality that neural information propagates with measurable delays across brain regions. The Multi-Scale Morphological Feature Extraction Network (MMFEN) employs parallel convolutional branches with varying kernel sizes to extract frequency-specific features corresponding to different EEG rhythms, addressing regional heterogeneity in neural activation. The Adaptive Spatio-Temporal Graph Convolutional Network (ASTGCN) integrates spatial and temporal features through Chebyshev graph convolutions with attention mechanisms, encoding evolving functional dependencies across sleep stages. Results: Evaluation on the ISRUC-S1 and ISRUC-S3 datasets demonstrates F1-scores of 0.823 and 0.835, respectively, outperforming state-of-the-art methods. Conclusions: Ablation studies confirm that explicit time-lag modeling contributes substantially to the performance gains, particularly in discriminating transitional sleep stages.
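The dual-scale connectivity construction described for DDFCM can be sketched as follows. This is a minimal illustrative version, not the paper's implementation: the function name, the use of non-overlapping windows, and the choice of absolute correlation as the edge weight are assumptions made here for clarity.

```python
import numpy as np

def window_adjacency(eeg, fs, win_sec):
    """Pearson-correlation adjacency for each non-overlapping window.

    eeg: (n_channels, n_samples) array; fs: sampling rate in Hz;
    win_sec: window length in seconds.
    Returns a (n_windows, n_channels, n_channels) stack of |r| matrices.
    """
    n_ch, n_samp = eeg.shape
    win = int(win_sec * fs)
    mats = []
    for start in range(0, n_samp - win + 1, win):
        seg = eeg[:, start:start + win]
        r = np.corrcoef(seg)       # Pearson correlation across channels
        mats.append(np.abs(r))     # absolute value as an edge weight (assumption)
    return np.stack(mats)

# Dual-scale example: 1 s windows (transient transmission) vs. 5 s
# windows (sustained connectivity states), on synthetic data.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((6, 30 * 100))          # 6 channels, 30 s at 100 Hz
A_fast = window_adjacency(eeg, fs=100, win_sec=1)  # 30 adjacency matrices
A_slow = window_adjacency(eeg, fs=100, win_sec=5)  # 6 adjacency matrices
```

The two stacks of time-varying adjacency matrices would then feed the downstream graph convolutions; how the paper fuses the two scales is not reproduced here.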
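The Chebyshev graph convolution at the core of the ASTGCN module follows a standard form. Below is a generic K-order sketch in dense NumPy, under the usual definitions (normalized Laplacian, rescaled to [-1, 1], Chebyshev recurrence); the paper's actual layer, including its attention mechanisms, is not reproduced, and the symbols `theta` (filter coefficients) and `cheb_conv` are illustrative names.

```python
import numpy as np

def cheb_conv(X, A, theta):
    """Order-K Chebyshev graph convolution (generic sketch).

    X: (n_nodes, f_in) node features; A: (n_nodes, n_nodes) symmetric,
    nonnegative adjacency; theta: (K, f_in, f_out) filter coefficients.
    """
    n = A.shape[0]
    d = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(n) - d_inv_sqrt @ A @ d_inv_sqrt   # normalized Laplacian
    lam_max = np.linalg.eigvalsh(L).max()
    L_tilde = 2.0 * L / lam_max - np.eye(n)       # rescale spectrum to [-1, 1]

    K = theta.shape[0]
    T_prev, T_curr = X, L_tilde @ X               # T0(L~)X, T1(L~)X
    out = T_prev @ theta[0]
    for k in range(1, K):
        out += T_curr @ theta[k]
        # Chebyshev recurrence: T_{k+1} = 2 L~ T_k - T_{k-1}
        T_prev, T_curr = T_curr, 2 * L_tilde @ T_curr - T_prev
    return out

# Example on a small random graph built from correlations.
rng = np.random.default_rng(1)
X = rng.standard_normal((6, 4))                   # 6 nodes, 4 features
A = np.abs(np.corrcoef(rng.standard_normal((6, 50))))
np.fill_diagonal(A, 0.0)
theta = rng.standard_normal((3, 4, 2))            # K=3, 4 -> 2 features
H = cheb_conv(X, A, theta)                        # (6, 2) output
```

With K=1 and an identity filter the layer reduces to a passthrough, which makes the recurrence easy to sanity-check.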