Toward Generalized Emotion Recognition in VR by Bridging Natural and Acted Facial Expressions


Abstract

Recognizing emotions accurately in virtual reality (VR) enables adaptive and personalized experiences across gaming, therapy, and other domains. However, most existing facial emotion recognition models rely on acted expressions collected under controlled settings, which differ substantially from the spontaneous and subtle emotions that arise during real VR experiences. To address this challenge, this study develops and evaluates generalizable emotion recognition models that learn jointly from both acted and natural facial expressions in VR. We integrate two complementary datasets collected with the Meta Quest Pro headset: one capturing natural emotional reactions and another containing acted expressions. We evaluate multiple model architectures, including convolutional and domain-adversarial networks, as well as a mixture-of-experts model that separates natural and acted expressions. Our experiments show that models trained jointly on acted and natural data achieve stronger cross-domain generalization. In particular, the domain-adversarial and mixture-of-experts configurations yield the highest accuracy on natural and mixed-emotion evaluations. Analysis of facial action units (AUs) reveals that natural and acted emotions rely on partially distinct AU patterns, while generalizable models learn a shared representation that integrates salient AUs from both domains. These findings demonstrate that bridging acted and natural expression domains can enable more accurate and robust VR emotion recognition systems.
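To make the two named techniques concrete, the sketch below (not the paper's implementation; all names and values are illustrative) shows the core mechanisms: a gradient reversal layer, the standard trick behind domain-adversarial training, which is the identity in the forward pass but flips the gradient in the backward pass so the feature extractor learns to confuse an acted-vs-natural domain discriminator; and a softmax gate of the kind a mixture-of-experts model uses to route a sample between a "natural" expert and an "acted" expert.

```python
import numpy as np

# Hedged sketch of domain-adversarial training's key component: the
# gradient reversal layer (GRL). Forward pass: identity. Backward pass:
# the gradient from the domain discriminator is negated (and scaled by
# lambda), pushing the shared features toward domain invariance.

def grl_forward(features):
    """Identity in the forward pass."""
    return features

def grl_backward(grad_from_domain_head, lam=1.0):
    """Reverse and scale the gradient flowing back to the features."""
    return -lam * grad_from_domain_head

# Hedged sketch of a mixture-of-experts gate: softmax over per-expert
# logits gives mixing weights for, e.g., a "natural" and an "acted"
# expert (the two-expert split here is illustrative).

def gate_weights(logits):
    """Numerically stable softmax over expert logits."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Toy check of both pieces.
x = np.array([0.5, -1.2, 3.0])          # shared feature vector
g = np.array([0.1, 0.2, -0.3])          # gradient from the domain head
out = grl_forward(x)                    # unchanged features
back = grl_backward(g, lam=0.5)         # flipped, scaled gradient
w = gate_weights(np.array([2.0, 0.0]))  # gate favors the first expert
```

In a full model, the GRL would sit between the shared feature extractor and the domain discriminator, while the emotion classifier receives the features directly; the gate's weights would combine the experts' emotion predictions per sample.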
