A CrossMod-Transformer deep learning framework for multi-modal pain detection through EDA and ECG fusion

Abstract

Pain is a multifaceted phenomenon that significantly affects a large portion of the global population. Objective pain assessment is essential for developing effective management strategies, which in turn contribute to more efficient and responsive healthcare systems. However, accurately evaluating pain remains a complex challenge due to the subtlety of physiological and behavioural indicators, individual-specific pain responses, and the need for continuous patient monitoring. Automatic pain assessment systems offer promising, technology-driven solutions to support and enhance various aspects of the pain evaluation process. Physiological indicators offer valuable insights into pain-related states and are generally less influenced by individual variability than behavioural modalities, such as facial expressions. Skin conductance, regulated by sweat gland activity, and the heart's electrical activity are both modulated by the sympathetic nervous system. Biosignals such as electrodermal activity (EDA) and electrocardiogram (ECG) can therefore objectively capture the body's physiological responses to painful stimuli. This paper proposes a novel multi-modal ensemble deep learning framework that combines electrodermal activity and electrocardiogram signals for automatic pain recognition. The proposed framework includes a uni-modal approach (FCN-ALSTM-Transformer) comprising a Fully Convolutional Network, an Attention-based LSTM, and a Transformer block that integrates the features extracted by these two models. Additionally, a multi-modal approach (CrossMod-Transformer) is introduced, featuring a dedicated Transformer architecture that fuses electrodermal activity and electrocardiogram signals. Experimental evaluations were primarily conducted on the BioVid dataset, with further cross-dataset validation using the AI4PAIN 2025 dataset to assess the generalisability of the proposed method. Notably, the CrossMod-Transformer achieved an accuracy of 87.52% on BioVid and 75.83% on AI4PAIN, demonstrating strong performance across independent datasets and outperforming several state-of-the-art uni-modal and multi-modal methods. These results highlight the potential of the proposed framework to improve the reliability of automatic multi-modal pain recognition and support the development of more objective and inclusive clinical assessment tools.
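
The abstract does not detail the internals of the CrossMod-Transformer, but the core idea it describes is Transformer-based fusion in which the two biosignal streams inform each other. The following is a minimal, hypothetical PyTorch sketch of that cross-modal attention pattern: each modality's feature sequence queries the other's via multi-head cross-attention before classification. All module names, dimensions, and the simple convolutional encoders are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of cross-modal attention fusion for EDA and ECG.
# The per-modality encoders and all hyperparameters are placeholders.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_classes=2):
        super().__init__()
        # Per-modality 1D convolutional encoders (stand-ins for the paper's branches)
        self.eda_enc = nn.Conv1d(1, d_model, kernel_size=7, padding=3)
        self.ecg_enc = nn.Conv1d(1, d_model, kernel_size=7, padding=3)
        # Cross-attention: each modality attends over the other's sequence
        self.eda_to_ecg = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ecg_to_eda = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Linear(2 * d_model, n_classes)

    def forward(self, eda, ecg):
        # eda, ecg: (batch, 1, time) raw biosignal windows
        e = self.eda_enc(eda).transpose(1, 2)   # (batch, time, d_model)
        c = self.ecg_enc(ecg).transpose(1, 2)
        # EDA queries attend to ECG keys/values, and vice versa
        e2c, _ = self.eda_to_ecg(e, c, c)
        c2e, _ = self.ecg_to_eda(c, e, e)
        # Pool each fused stream over time, concatenate, and classify
        fused = torch.cat([e2c.mean(dim=1), c2e.mean(dim=1)], dim=-1)
        return self.classifier(fused)

# Example usage with 5-second windows at a notional 256 Hz sampling rate
model = CrossModalFusion()
eda = torch.randn(8, 1, 1280)
ecg = torch.randn(8, 1, 1280)
logits = model(eda, ecg)  # shape: (8, 2), e.g. pain vs. no pain
```

The bidirectional cross-attention (EDA attending to ECG and ECG attending to EDA) is one common way such fusion is implemented; the paper's actual block structure, pooling, and classification head may differ.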
