ARTNet for Micro-Expression Recognition


Abstract

The field of micro-expression recognition (MER) has garnered considerable attention for its potential to reveal an individual's genuine emotional state. However, MER remains a formidable challenge, primarily due to the subtle nature and brief duration of micro-expressions. Many approaches rely on optical flow to capture motion between video frames. However, because expression intensity differs substantially across individuals, a single, fixed-intensity motion representation may not be effective for every subject. To address this issue, we propose a novel framework called the Action Amplification Representation and Transformer Network (ARTNet), which adjusts the motion amplitude so that each individual's micro-expressions become easier to recognize. First, we amplify the motion discrepancies between frames to enhance expression intensity. Next, we compute the optical flow of these amplified frames to depict micro-expressions more prominently. Finally, we use transformer layers to capture the relationships among features at different amplification levels. Extensive experiments on three diverse datasets confirm the efficacy of the proposed method.
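As a rough illustration of the two key ideas in the abstract, motion amplification followed by attention over features at several amplification levels, the sketch below uses a simple linear amplification `onset + α·(frame − onset)` and single-head self-attention in NumPy. The amplification rule, the α values, and the feature shapes are illustrative assumptions, not ARTNet's actual implementation (which also involves an optical-flow step omitted here).

```python
import numpy as np

def amplify(onset, frame, alpha):
    # Linearly magnify the motion of `frame` relative to the onset frame.
    # This is an assumed stand-in for a learned amplification module.
    return onset + alpha * (frame - onset)

def self_attention(feats):
    # Single-head scaled dot-product self-attention (no learned projections),
    # relating features produced at different amplification levels.
    d = feats.shape[-1]
    scores = feats @ feats.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ feats

rng = np.random.default_rng(0)
onset = rng.random((8, 8))                      # toy onset frame
apex = onset + 0.01 * rng.random((8, 8))        # subtle micro-expression change

alphas = [1.0, 2.0, 4.0]                        # illustrative amplification levels
feats = np.stack([amplify(onset, apex, a).ravel() for a in alphas])
fused = self_attention(feats)                   # fuse across amplification levels
```

The point of the sketch is that a subtle inter-frame difference becomes α times larger after amplification, and the attention step produces one fused representation per amplification level, mirroring how transformer layers can weigh the levels against each other.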
