Adaptive Multi-Camera Fusion and Calibration for Large-Scale Multi-Vehicle Cooperative Simulation Scenarios



Abstract

In machine-vision-based multi-vehicle cooperative hardware-in-the-loop (HIL) simulation platforms, accurate vehicle pose estimation is crucial for efficient cooperative control. However, monocular vision systems suffer from a limited field of view and insufficient image resolution during target detection, making it difficult to meet the demands of large-scale, multi-target, real-time perception. To address these challenges, this paper proposes an engineering-oriented multi-camera cooperative vision detection method designed to maximize processing efficiency and real-time performance while maintaining detection accuracy. The approach first projects the images from multiple cameras onto a unified physical plane; by precomputing and caching the image stitching parameters, it enables fast, parallelized image mosaicking. Experimental results show that, under typical vehicle speeds and driving angles, the stitched images achieve a 93.41% identification-code recognition rate and 99.08% recognition accuracy. Moreover, with high-resolution (1440 × 960) image inputs, the system stably outputs stitched image streams at 30 frames per second, satisfying both the detection-precision and real-time requirements of engineering applications.
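The core speed-up described in the abstract is that the per-pixel mapping from each camera image to the unified physical plane is computed once and cached, so per-frame stitching reduces to cheap table lookups. The paper does not give an implementation; the following is a minimal NumPy sketch of that idea under assumed conventions (a known 3 × 3 homography `H` mapping camera pixels to plane pixels, nearest-neighbour sampling, and a simple overwrite blend). Function names and shapes are illustrative, not the authors' API.

```python
import numpy as np

def precompute_warp_map(H, out_shape, in_shape):
    """Done once per camera (offline): for every pixel of the output
    plane, find the source pixel in the camera image under H."""
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Homogeneous coordinates of every plane pixel, shape (3, h*w).
    plane_pts = np.stack([xs, ys, np.ones_like(xs)], axis=0).reshape(3, -1).astype(float)
    Hinv = np.linalg.inv(H)          # plane -> camera direction
    cam = Hinv @ plane_pts
    cam /= cam[2]                    # normalize homogeneous coordinates
    u = np.rint(cam[0]).astype(int).reshape(h, w)   # source column per plane pixel
    v = np.rint(cam[1]).astype(int).reshape(h, w)   # source row per plane pixel
    valid = (u >= 0) & (u < in_shape[1]) & (v >= 0) & (v < in_shape[0])
    return u, v, valid

def stitch(frames, maps):
    """Per-frame (online) stitching: pure cached-lookup remapping per
    camera, then overlay onto the shared plane. No geometry is
    recomputed here, which is what makes 30 fps plausible."""
    h, w = maps[0][2].shape
    mosaic = np.zeros((h, w), dtype=frames[0].dtype)
    for frame, (u, v, valid) in zip(frames, maps):
        mosaic[valid] = frame[v[valid], u[valid]]
    return mosaic
```

In a real system the cached maps would drive an optimized remap routine (and the per-camera lookups can run in parallel, since each camera's map is independent), with proper blending in the overlap regions rather than a last-writer-wins overlay.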
