Detecting stress from videos via intra-subject and inter-subject learning


Abstract

Mental stress poses a growing threat to public health, yet video-based stress detection remains challenging because of substantial inter-individual variability in physiological and expressive responses. To address this, we propose a novel two-level learning framework grounded in allostasis theory, in which stress is modeled as a personalized deviation from an individual's physiological baseline rather than as an absolute state. We introduce CalmScore, a metric based on resting heart rate variability, to robustly identify each subject's most relaxed reference state. Building on this reference, an intra-subject Physiological-Discrepancy-based Representation Adaptive Modulation module computes multimodal deviations between the current state and the resting state, and further modulates them using resting heart rate variability as a proxy for regulation capacity. In addition, an inter-subject analogical reasoning mechanism based on In-Context Instruction Tuning retrieves physiologically similar peers and provides stressed, unstressed, and resting examples for contextual calibration. Extensive experiments on the UVSD, RSL, and MUSE datasets demonstrate the effectiveness of the proposed framework: our method achieves state-of-the-art F1 scores of 96.85% on UVSD and 88.67% on RSL, surpassing the strongest baseline. Ablation studies further verify the necessity of each component. These results show that robust video-based stress detection benefits from modeling individualized deviations and interpreting them through group-level physiological analogy.
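The abstract's pipeline (select a resting reference by HRV, compute a modulated deviation from it, then retrieve physiologically similar peers) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exact CalmScore formula, the modulation gain, and the peer-similarity metric are not specified in the abstract, so RMSSD, a simple HRV-dependent scaling, and L2 nearest neighbors are stand-in assumptions, and all function names are hypothetical.

```python
import numpy as np

def rmssd(rr_intervals):
    """Root mean square of successive RR-interval differences,
    a standard time-domain HRV measure (stand-in for CalmScore's HRV input)."""
    diffs = np.diff(rr_intervals)
    return float(np.sqrt(np.mean(diffs ** 2)))

def select_resting_reference(feature_windows, rr_windows):
    """Hypothetical CalmScore step: score each candidate window by its HRV
    and take the highest-HRV window as the most relaxed reference state."""
    scores = [rmssd(w) for w in rr_windows]
    idx = int(np.argmax(scores))
    return feature_windows[idx], scores[idx]

def modulated_deviation(current_feat, resting_feat, resting_hrv, alpha=1.0):
    """Intra-subject deviation: current minus resting multimodal features,
    scaled by a gain derived from resting HRV (assumed proxy for
    regulation capacity; the paper's actual modulation is not given)."""
    gain = 1.0 / (1.0 + alpha * resting_hrv)
    return gain * (np.asarray(current_feat) - np.asarray(resting_feat))

def retrieve_similar_peers(query_resting, peer_restings, k=3):
    """Inter-subject step: indices of the k peers whose resting-state
    features are closest to the query subject's (L2 distance assumed)."""
    dists = np.linalg.norm(np.asarray(peer_restings) - query_resting, axis=1)
    return np.argsort(dists)[:k]
```

The retrieved peers' stressed, unstressed, and resting examples would then be packed into the in-context prompt for calibration; that prompting stage is model-specific and omitted here.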
