River floating object detection with transformer model in real time.

作者:Zhang Chong, Yue Jie, Fu Jianglong, Wu Shouluan
The DEtection TRansformer (DETR) and the YOLO series have been at the forefront of advancements in object detection. RT-DETR, a member of the DETR family, notably addressed the speed limitations of its predecessors by using a high-performance hybrid encoder with optimized query selection. Building on this foundation, we introduce LR-DETR, a lightweight evolution of RT-DETR for river floating object detection. The model incorporates the High-level Screening-feature Path Aggregation Network (HS-PAN), which refines feature fusion through a novel bottom-up fusion path, significantly enhancing its expressive power. We further introduce the Residual Partial Convolutional Network (RPCN) as the backbone: it selectively applies convolutions to key channels and leverages residual connections to reduce computational redundancy and improve accuracy. Enhancing the RepBlock with Conv3XCBlock and integrating a parameter-free attention mechanism into the convolutional layers further improves efficiency, ensuring that the model prioritizes valuable information while suppressing redundancy. A comparative analysis with existing detection models validates the effectiveness of our approach and highlights its adaptability. Compared to the RT-DETR algorithm, LR-DETR achieves a 5% increase in mean Average Precision (mAP) at an Intersection over Union (IoU) threshold of 0.5, a 25.8% reduction in parameter count, and a 22.8% decrease in GFLOPs. These improvements are particularly pronounced in the real-time detection of river floating objects, showing LR-DETR's potential in environmental monitoring scenarios. Project page: https://github.com/zcfanhua/LR-DETR
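The RPCN idea sketched in the abstract, convolving only a subset of "key" channels while the remaining channels pass through untouched, then adding the input back as a residual, follows the general partial-convolution pattern. The NumPy sketch below is only an illustration of that pattern under stated assumptions (the 3x3 kernel, the 1/4 channel ratio, and all names are illustrative), not the paper's actual RPCN implementation:

```python
import numpy as np

def partial_conv_residual(x, weight, ratio=0.25):
    """Illustrative partial convolution with a residual add.

    A 3x3 convolution (stride 1, zero padding) is applied only to the
    first cp = int(C * ratio) channels; the remaining channels are left
    untouched, and the original input is added back as a residual.
    x:      input feature map, shape (C, H, W)
    weight: conv kernel, shape (cp, cp, 3, 3)
    """
    C, H, W = x.shape
    cp = int(C * ratio)                      # channels actually convolved
    pad = np.pad(x[:cp], ((0, 0), (1, 1), (1, 1)))  # zero-pad H and W
    out = np.empty_like(x[:cp])
    for o in range(cp):                      # naive direct convolution
        acc = np.zeros((H, W))
        for i in range(cp):
            for dy in range(3):
                for dx in range(3):
                    acc += weight[o, i, dy, dx] * pad[i, dy:dy + H, dx:dx + W]
        out[o] = acc
    y = np.concatenate([out, x[cp:]], axis=0)  # untouched channels pass through
    return x + y                               # residual connection
```

Because only cp of the C channels enter the convolution, the multiply-accumulate cost scales with (cp/C)^2 relative to a full convolution, which is the kind of redundancy reduction the abstract attributes to RPCN; in a real backbone the loop would of course be replaced by an optimized conv operator.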
