Abstract
This paper proposes a lightweight video action recognition framework that integrates 3D Convolutional Neural Networks (CNNs), the Histogram Transformer Block (HTB), and the Split-Attention Residual Block (SAB), and further introduces a Spatiotemporal Tensor Factorization (ST-Factor) technique. The method first incorporates the HTB module into each computational unit of the AR3D backbone network, leveraging local statistical features to improve the granularity of spatiotemporal modeling. Next, the SAB module is introduced into the residual path, using dynamic channel re-weighting to optimize feature selection across dimensions. Finally, ST-Factor decouples the 4D convolution kernels into independent spatial (H × W) and temporal (T × C) operations, significantly reducing computational redundancy. Experiments on the UCF101 and HMDB51 datasets demonstrate that the proposed method maintains real-time inference speed while outperforming existing state-of-the-art (SOTA) methods in recognition accuracy, offering a new paradigm for video understanding research.
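The efficiency gain claimed for ST-Factor can be illustrated with a simple parameter count. The sketch below compares a full 3D convolution kernel against a factorization into a spatial (1 × H × W) convolution followed by a temporal (T × 1 × 1) convolution; the kernel sizes and channel counts are illustrative assumptions, not values taken from the paper.

```python
# Illustrative parameter-count comparison for spatiotemporal factorization.
# All kernel sizes and channel counts below are assumed for the example.

def conv3d_params(c_in, c_out, t, h, w):
    """Weights in a full 3D convolution kernel (bias omitted)."""
    return c_in * c_out * t * h * w

def factorized_params(c_in, c_out, t, h, w):
    """Weights in a spatial (1 x H x W) conv followed by a temporal (T x 1 x 1) conv."""
    spatial = c_in * c_out * h * w     # spatial convolution
    temporal = c_out * c_out * t       # temporal convolution
    return spatial + temporal

full = conv3d_params(64, 64, 3, 3, 3)      # 110,592 weights
fact = factorized_params(64, 64, 3, 3, 3)  # 49,152 weights
print(f"full: {full}, factorized: {fact}, ratio: {fact / full:.2f}")
```

Under these assumed settings the factorized form uses well under half the weights of the full kernel, which is the kind of redundancy reduction the abstract attributes to ST-Factor.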