Abstract
Object detection in high-speed and dynamic environments remains a core challenge in computer vision. Conventional frame-based cameras often suffer from motion blur and high latency, whereas event cameras capture brightness changes asynchronously with microsecond resolution, high dynamic range, and ultra-low latency, offering a promising alternative. Despite these advantages, existing event-based detection methods are still hampered by high computational cost, limited temporal modeling, and unsatisfactory real-time performance. We present PMRVT (Parallel Attention Multilayer Perceptron Recurrent Vision Transformer), a unified framework that systematically balances early-stage efficiency, enriched spatial expressiveness, and long-horizon temporal consistency. This balance is achieved through a hybrid hierarchical backbone, a Parallel Attention Feature Fusion (PAFF) mechanism with a coordinated dual-path design, and a temporal integration strategy, which jointly ensure strong accuracy and real-time performance. Extensive experiments on the Gen1 and 1 Mpx datasets show that PMRVT achieves 48.7% and 48.6% mAP with inference latencies of 7.72 ms and 19.94 ms, respectively. Compared with state-of-the-art methods, PMRVT improves accuracy by 1.5 percentage points (pp) and reduces latency by 8%, striking a favorable balance between accuracy and speed and offering a reliable solution for real-time event-based vision applications.