Abstract
Event-based cameras, inspired by biological vision, asynchronously capture per-pixel brightness changes, producing event streams with higher temporal resolution, wider dynamic range, and lower latency than conventional cameras. These advantages make event cameras promising for human pose estimation in challenging scenarios such as motion blur and low-light conditions. However, event-based human pose estimation is still in its early research stages. A major challenge is the information loss caused by stationary body parts, which trigger no events while motionless. This issue is inherent to the nature of event data and cannot be resolved from a short-range event stream alone; incorporating motion cues from a longer temporal range therefore offers an intuitive solution. This paper proposes a joint global and local temporal modeling network (JGLTM), designed to extract essential cues from a longer temporal range to complement and refine local features for more accurate pose prediction at the current instant. Unlike existing methods that rely only on short-range temporal correspondence, the proposed approach expands the temporal receptive field to provide crucial context for the information lost from stationary body parts at the current time instance. Extensive experiments on public datasets and the dataset introduced in this paper demonstrate the effectiveness and superiority of the proposed approach for event-based human pose estimation across diverse scenarios.