Abstract
High-channel-density (HCD) electroencephalography (EEG) enables fine-grained neural sensing but is constrained by high hardware cost, spatial complexity, and limited portability. This study develops a deep learning-based method for reconstructing high-density EEG signals from low-channel-density (LCD) inputs, enabling more practical and affordable brain-monitoring systems, and introduces VEEG-A-U-Net, a lightweight U-Net architecture enhanced with attention gates and residual learning. The model combines spherical spline interpolation with a learnable correction signal to adaptively model spatial-temporal features. The framework was trained and evaluated on the SEED dataset, with reconstruction performance assessed by normalized mean square error (NMSE), signal-to-noise ratio (SNR), and Pearson correlation coefficient (PCC). Generalizability was examined through leave-one-subject-out cross-validation (LOSO-CV) and cross-dataset experiments. Under the same reconstruction setting (scale factor = 2), VEEG-A-U-Net achieved reconstruction performance competitive with state-of-the-art methods while requiring substantially fewer parameters and computational operations. Cross-dataset evaluations confirmed stable performance across different EEG paradigms, and inference-time analysis showed low computational latency, indicating practical feasibility for deployment in resource-constrained and edge-computing environments. A preliminary clinical EEG evaluation was also conducted to explore feasibility in clinical settings. The proposed framework offers an effective, lightweight solution for reconstructing high-density EEG from sparse measurements. These findings may support the development of sensor-efficient, portable EEG systems for practical neuroengineering and brain–computer interface applications.
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s10916-026-02374-5.