Abstract
Fatigue is a complex condition that manifests through cognitive, physical, and emotional impairments. FatigueNet is a multimodal framework that addresses two key weaknesses of current fatigue classification models: the diversity of biosignals and their complex interdependence. FatigueNet combines a Graph Neural Network (GNN) with a Transformer architecture to extract dynamic features from Electrocardiogram (ECG), Electrodermal Activity (EDA), Electromyography (EMG), and Eye-Blink signals. Unlike methods that depend on manual feature construction or a single signal source, the proposed approach jointly captures temporal, spatial, and contextual relationships through adaptive feature adjustment and meta-learned gate distribution. In laboratory evaluations on the MePhy dataset, FatigueNet outperforms existing benchmarks at classifying fatigue across four levels. On the MePhy benchmark, FatigueNet achieves an end-to-end latency of 50 ms per 20 s window on conventional hardware, surpassing baseline models and demonstrating its suitability for real-time fatigue monitoring.
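The abstract does not specify architectural details, so the following is a minimal sketch, assuming PyTorch, of how meta-learned gated fusion over per-modality encoders might be structured. The class name `GatedFusionClassifier`, the placeholder MLP encoders (standing in for the paper's GNN/Transformer branches), and all dimensions are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: gated fusion over four biosignal modalities
# (ECG, EDA, EMG, eye-blink). Encoders here are generic MLPs standing
# in for FatigueNet's GNN/Transformer branches; all sizes are assumed.
import torch
import torch.nn as nn


class GatedFusionClassifier(nn.Module):
    def __init__(self, in_dims, hidden=64, n_classes=4):
        super().__init__()
        # One encoder per modality; in FatigueNet these would extract
        # dynamic features via the GNN/Transformer combination.
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for d in in_dims
        )
        # Gate network: scores each modality embedding, then softmax
        # distributes weight across modalities per sample.
        self.gate = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, xs):
        # xs: list of per-modality tensors, each (batch, in_dims[i])
        embs = torch.stack(
            [enc(x) for enc, x in zip(self.encoders, xs)], dim=1
        )  # (batch, n_modalities, hidden)
        weights = torch.softmax(self.gate(embs), dim=1)  # (batch, n_mod, 1)
        fused = (weights * embs).sum(dim=1)  # adaptive weighted sum
        return self.head(fused)  # logits over the four fatigue levels


# Example with arbitrary per-window feature sizes for the four signals.
model = GatedFusionClassifier(in_dims=[32, 16, 24, 8])
batch = [torch.randn(2, d) for d in [32, 16, 24, 8]]
logits = model(batch)  # shape (2, 4)
```

In this sketch the gate weights are produced per sample, so the relative contribution of each signal adapts to the input; the "meta-learned" aspect of the paper would additionally learn the gate distribution itself, which is beyond what this toy example shows.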