Deep-Learning-Derived Facial Electromyogram Signatures of Emotion in Immersive Virtual Reality (bWell): Exploring the Impact of Emotional, Cognitive, and Physical Demands


Abstract

Emotional and workload-related states unfold dynamically during immersive virtual reality (VR) experiences, yet reliable physiological modeling in such environments remains challenging. We investigated whether multi-channel facial electromyography (fEMG), combined with spatio-temporal deep learning, can (i) accurately classify calibrated facial expressions across participants and (ii) transfer to spontaneous, task-elicited behavior in immersive VR. Twelve adults completed a calibration phase involving four intentional expressions (smile, frown, raised eyebrow, neutral), followed by VR scenes designed to elicit emotional, cognitive, physical, and dual-task demands. After participant-level physiological normalization, a single shared Convolutional Neural Network-Temporal Convolutional Network (CNN-TCN) model was trained and evaluated using leave-one-participant-out (LOPO) validation. The model achieved strong cross-participant performance (Macro-F1 = 0.88 ± 0.13; ROC-AUC = 0.95 ± 0.06). When applied to unlabeled, spontaneous task-elicited fEMG recordings from the VR scenes, the trained model produced continuous expression-class predictions. Static and temporal expression features derived from these predictions showed scene-dependent modulation and associations that survived False Discovery Rate (FDR) correction, primarily with perceived physical demand (NASA-TLX). The observed muscle activation patterns were physiologically plausible and aligned with Facial Action Coding System (FACS)-based interpretations of the underlying muscle activity. These findings demonstrate that end-to-end spatio-temporal modeling of raw fEMG enables facial expression sensing in immersive VR with a single shared model after physiological normalization. The proposed framework bridges calibrated expression learning and spontaneous task-elicited behavior, supporting privacy-preserving, continuous, and physiologically grounded monitoring in human-centered VR applications.
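The evaluation protocol described in the abstract, participant-level normalization followed by leave-one-participant-out (LOPO) validation, can be sketched as follows. This is a minimal illustration under assumed data shapes; the synthetic arrays and the per-participant z-scoring are illustrative assumptions, not the authors' actual preprocessing or CNN-TCN pipeline.

```python
import numpy as np

def zscore_per_participant(X, participants):
    """Normalize each participant's fEMG feature windows to zero mean, unit variance.

    This stands in for the paper's 'participant-level physiological
    normalization'; the exact scheme used by the authors is not specified here.
    """
    Xn = np.empty_like(X, dtype=float)
    for p in np.unique(participants):
        mask = participants == p
        mu = X[mask].mean(axis=0)
        sd = X[mask].std(axis=0) + 1e-8  # guard against zero variance
        Xn[mask] = (X[mask] - mu) / sd
    return Xn

def lopo_splits(participants):
    """Yield (train_idx, test_idx) pairs, holding out one participant per fold."""
    for p in np.unique(participants):
        test = np.where(participants == p)[0]
        train = np.where(participants != p)[0]
        yield train, test

# Tiny synthetic example: 3 participants, 4 fEMG windows each, 2 features.
rng = np.random.default_rng(0)
participants = np.repeat([1, 2, 3], 4)
X = rng.normal(size=(12, 2))

Xn = zscore_per_participant(X, participants)
folds = list(lopo_splits(participants))
print(len(folds))  # one fold per held-out participant -> 3
```

In each fold, a model (here the CNN-TCN) would be trained on the `train` indices and scored on the held-out participant's `test` indices, with Macro-F1 and ROC-AUC aggregated across folds.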
