Abstract
Accurate polyp detection is critical for diagnosing colorectal cancer at early and intermediate stages. While colonoscopy videos offer richer visual information than static images for treatment planning, rapid camera movement during examination introduces frame-level artifacts, such as motion blur, specular reflections, and scale variation, that degrade image quality and increase false positives in detection. To address these challenges within individual frames, we propose the Adaptive Video Polyp Detection Network (AVPDN), a robust framework for multi-scale polyp detection in dynamic colonoscopy imagery. AVPDN comprises two key components: the Adaptive Feature Interaction and Augmentation (AFIA) module and the Scale-Aware Context Integration (SACI) module. The AFIA module adopts a dual-branch architecture to enhance feature representation: dense self-attention models global context, sparse self-attention mitigates the influence of low query-key similarity during feature aggregation, and channel shuffle operations facilitate inter-branch information exchange. In parallel, the SACI module strengthens multi-scale feature integration by applying dilated convolutions with varying receptive fields to capture contextual information at multiple spatial scales, thereby improving the model's denoising capability. Extensive experiments on challenging public benchmarks demonstrate the effectiveness and generalization capability of our method, which achieves state-of-the-art performance in detecting polyps from complex, motion-affected colonoscopy frames.
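To make the SACI design concrete, the following is a minimal PyTorch sketch of the two simplest ingredients mentioned above: a multi-scale block that runs parallel dilated convolutions with different receptive fields and fuses them with a 1x1 convolution, and the channel shuffle operation used for inter-branch information exchange. The class name, layer layout, dilation rates, and residual connection are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


def channel_shuffle(x, groups):
    """Interleave channels across groups so branch features mix.
    (Standard channel-shuffle operation; group count is an assumption.)"""
    n, c, h, w = x.shape
    return (x.view(n, groups, c // groups, h, w)
             .transpose(1, 2)
             .reshape(n, c, h, w))


class SACISketch(nn.Module):
    """Hypothetical Scale-Aware Context Integration block: parallel 3x3
    dilated convolutions capture context at several spatial scales, and a
    1x1 convolution fuses the concatenated multi-scale responses."""

    def __init__(self, channels, dilations=(1, 2, 4)):  # dilation rates assumed
        super().__init__()
        self.branches = nn.ModuleList([
            # padding = dilation keeps the spatial size unchanged for 3x3 kernels
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x):
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.fuse(multi_scale) + x  # residual connection (assumed)
```

Because each dilated branch preserves the input resolution, the block can be dropped into a detection backbone wherever multi-scale context aggregation is needed, e.g. `y = SACISketch(256)(features)` on a feature map of 256 channels.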