Video swin-CLSTM transformer: Enhancing human action recognition with optical flow and long-term dependencies


Abstract

As video data volumes soar exponentially, video content analysis, and Human Action Recognition (HAR) in particular, has become increasingly important in fields such as intelligent surveillance, sports analytics, medical rehabilitation, and virtual reality. However, current deep learning-based HAR methods struggle to recognize subtle actions against complex backgrounds, to comprehend long-term semantics, and to maintain computational efficiency. To address these challenges, we introduce the Video Swin-CLSTM Transformer. Built on the Video Swin Transformer backbone, our model incorporates optical flow information at the input stage, using a sparse sampling strategy, to effectively counteract background interference. Combined with the backbone's 3D Patch Partition and Patch Merging operations, it efficiently extracts and fuses multi-level features from both optical flow and raw RGB inputs, thereby strengthening the model's ability to capture motion characteristics in complex backgrounds. Additionally, embedding Convolutional Long Short-Term Memory (ConvLSTM) units further improves the model's capacity to capture and understand long-term dependencies among key actions in videos. Experiments on the UCF-101 dataset show that our model achieves mean Top-1/Top-5 accuracies of 92.8% and 99.4%, which are 3.2% and 2.0% higher than those of the baseline model, while the computational cost at peak performance is reduced by an average of 3.3% compared to models without optical flow. Ablation studies further validate the effectiveness of the model's key components: integrating optical flow and embedding ConvLSTM modules yield maximum improvements in mean Top-1 accuracy of 2.6% and 1.9%, respectively. Notably, using our custom ImageNet-1K-LSTM pre-trained model yields a maximum increase of 2.7% in mean Top-1 accuracy compared to the conventional ImageNet-1K pre-trained model. These results indicate that our model offers certain advantages over other Swin Transformer-based methods for video HAR tasks.
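To make the pipeline described in the abstract concrete, the following is a minimal PyTorch-style sketch of the main ideas: early fusion of RGB and optical-flow channels, a Video-Swin-style 3D patch partition, and a ConvLSTM cell scanned over the resulting temporal tokens. The class names (ConvLSTMCell, FlowSwinCLSTMSketch), channel sizes, and the omission of the actual Swin attention and Patch Merging stages are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed structure, not the paper's code): early fusion of
# RGB and optical flow, a Video-Swin-style 3D patch embedding, and a ConvLSTM
# cell for long-term temporal aggregation. Channel counts and class names are
# illustrative; the Swin attention / Patch Merging stages are omitted.
import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    """A single ConvLSTM cell: LSTM gates computed with 2D convolutions."""

    def __init__(self, in_ch, hid_ch, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # One convolution produces all four gates (i, f, o, g) at once.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel_size, padding=pad)
        self.hid_ch = hid_ch

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, c


class FlowSwinCLSTMSketch(nn.Module):
    def __init__(self, embed_dim=96, hid_ch=96, num_classes=101):
        super().__init__()
        # Early fusion: 3 RGB channels + 2 optical-flow channels = 5 input channels.
        # 3D Patch Partition as in Video Swin: Conv3d with kernel == stride (2, 4, 4).
        self.patch_embed = nn.Conv3d(5, embed_dim, kernel_size=(2, 4, 4), stride=(2, 4, 4))
        self.convlstm = ConvLSTMCell(embed_dim, hid_ch)
        self.head = nn.Linear(hid_ch, num_classes)

    def forward(self, rgb, flow):
        # rgb: (B, 3, T, H, W); flow: (B, 2, T, H, W)
        x = torch.cat([rgb, flow], dim=1)            # (B, 5, T, H, W)
        x = self.patch_embed(x)                      # (B, C, T', H', W')
        b, c, t, h, w = x.shape
        # (The Swin attention blocks and Patch Merging stages would act on x here.)
        hstate = x.new_zeros(b, self.convlstm.hid_ch, h, w)
        cstate = x.new_zeros(b, self.convlstm.hid_ch, h, w)
        for step in range(t):                        # scan over temporal tokens
            hstate, cstate = self.convlstm(x[:, :, step], (hstate, cstate))
        pooled = hstate.mean(dim=(2, 3))             # global average pooling
        return self.head(pooled)


if __name__ == "__main__":
    model = FlowSwinCLSTMSketch()
    rgb = torch.randn(1, 3, 16, 224, 224)
    flow = torch.randn(1, 2, 16, 224, 224)
    print(model(rgb, flow).shape)  # torch.Size([1, 101])
```

In this sketch the ConvLSTM replaces simple temporal pooling, carrying a spatial hidden state across the clip so that dependencies between distant key actions can influence the final prediction, which is the role the paper assigns to the embedded ConvLSTM units.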
