CogMamba: Multi-Task Driver Cognitive Load and Physiological Non-Contact Estimation with Multimodal Facial Features


Abstract

The cognitive load of drivers directly affects the safety and practicality of advanced driver assistance systems, especially in autonomous driving scenarios where drivers must quickly take back control of the vehicle after performing non-driving-related tasks (NDRTs). However, existing driver cognitive load detection methods have notable shortcomings: invasive sensing equipment is impractical to deploy inside vehicles, and approaches restricted to eye-movement detection offer limited coverage, both of which constrain real-world application. To achieve more efficient and practical cognitive load detection, this study proposes a multi-task non-contact cognitive load and physiological state estimation model based on RGB video, named CogMamba. The model utilizes multimodal features extracted from facial video and introduces the Mamba architecture to efficiently capture local and global temporal dependencies, thereby jointly estimating cognitive load, heart rate (HR), and respiratory rate (RR). Experimental results demonstrate that CogMamba exhibits superior performance on two public datasets and shows excellent robustness in cross-dataset generalization tests. This study provides insights for non-contact driver state monitoring in real-world driving scenarios.
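The multi-task design described above (a shared temporal encoder over facial-feature sequences feeding separate heads for cognitive load, HR, and RR) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the exponential state-space recurrence stands in for the Mamba block, and all weights, dimensions, and head names are hypothetical.

```python
import numpy as np

def temporal_encoder(x, decay=0.9):
    """Stand-in for the Mamba temporal block (hypothetical simplification):
    an exponential state-space recurrence h_t = decay*h_{t-1} + (1-decay)*x_t
    that pools a (T, D) sequence of per-frame facial features into one (D,)
    representation capturing temporal context."""
    h = np.zeros(x.shape[1])
    for t in range(x.shape[0]):
        h = decay * h + (1.0 - decay) * x[t]
    return h

def multi_task_heads(h, rng):
    """Three task-specific linear heads on the shared representation:
    cognitive-load class logits, heart rate (bpm), respiratory rate
    (breaths/min). Weights are random placeholders for illustration."""
    d = h.shape[0]
    w_load = rng.standard_normal((3, d)) * 0.1  # e.g. low/medium/high load
    w_hr = rng.standard_normal(d) * 0.1
    w_rr = rng.standard_normal(d) * 0.1
    return {
        "load_logits": w_load @ h,
        "hr": float(w_hr @ h + 70.0),  # bias toward a typical resting HR
        "rr": float(w_rr @ h + 15.0),  # bias toward a typical resting RR
    }

rng = np.random.default_rng(0)
frames = rng.standard_normal((150, 32))  # e.g. 5 s of 30 fps facial features
out = multi_task_heads(temporal_encoder(frames), rng)
```

In the actual model the encoder would be trained end to end and the heads would share gradients, which is what lets the physiological tasks regularize the cognitive-load task.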
