A histogram transformer approach using attention-based 3D residual network for human action recognition


Abstract

This paper proposes a lightweight video action recognition framework that integrates 3D Convolutional Neural Networks (CNNs), the Histogram Transformer Block (HTB), and the Split-Attention Residual Block (SAB), and further introduces Spatiotemporal Tensor Factorization (ST-Factor). The method first incorporates the HTB module into each computational unit of the AR3D backbone network, leveraging local statistical features to improve the granularity of spatiotemporal modeling. Next, the SAB module is introduced into the residual path, using dynamic channel re-weighting to optimize feature selection across dimensions. Finally, ST-Factor decouples the 4D convolution kernels into independent spatial (H × W) and temporal (T × C) operations, significantly reducing computational redundancy. Experiments on the UCF101 and HMDB51 datasets demonstrate that the proposed method maintains real-time inference speed while outperforming existing state-of-the-art (SOTA) methods in recognition accuracy, providing a new paradigm for video understanding research.
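The abstract does not give the exact form of the ST-Factor decomposition, but the stated spatial/temporal decoupling can be illustrated with a parameter-count comparison. The sketch below assumes a factorization in the spirit of separable spatiotemporal convolutions (a full t × k × k kernel replaced by a 1 × k × k spatial kernel followed by a t × 1 × 1 temporal kernel); the function names and the choice of intermediate channel width are illustrative, not taken from the paper.

```python
def conv3d_params(c_in, c_out, t, k):
    """Parameter count of a full 3D kernel of shape (c_out, c_in, t, k, k)."""
    return c_out * c_in * t * k * k

def factorized_params(c_in, c_out, t, k, mid=None):
    """Parameter count after decoupling into a spatial (1 x k x k) convolution
    followed by a temporal (t x 1 x 1) convolution.  `mid` is the hypothetical
    intermediate channel width; here it defaults to c_out for simplicity."""
    mid = mid if mid is not None else c_out
    spatial = mid * c_in * k * k       # 1 x k x k kernels
    temporal = c_out * mid * t         # t x 1 x 1 kernels
    return spatial + temporal

# Example: a typical 64-channel block with a 3 x 3 x 3 kernel.
full = conv3d_params(64, 64, t=3, k=3)        # 110592 parameters
fact = factorized_params(64, 64, t=3, k=3)    # 49152 parameters
print(full, fact, round(full / fact, 2))      # 2.25x fewer parameters
```

For these settings the factorized form uses roughly 2.25× fewer parameters (and proportionally fewer multiply-accumulates per output position), which is the kind of redundancy reduction the abstract attributes to ST-Factor.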
