Abstract
For a humanoid robot, predicting motion trajectories through end-to-end imitation learning is difficult for complex operations and multi-step processes, leading to jitter in the robot arm. To alleviate this problem and to reduce the computational complexity of the self-attention module in Vision-Language-Action (VLA) models, we proposed a memory-gated filtering attention model that improves the multi-head self-attention mechanism. We then designed a cross-modal alignment perception module for training, combined with a few-shot data-collection strategy for key steps. The experimental results showed that the proposed scheme significantly improved the task success rate and alleviated the robot-arm jitter problem, while reducing video memory usage by 72% and cutting per-batch training time from 1.35 s to 0.129 s. As a result, the humanoid robot maintained higher action accuracy and robustness.