Research on pedestrian recognition in complex scenarios based on data augmentation using large language models


Abstract

Pedestrian detection in complex scenes is a major challenge in computer vision: existing algorithms suffer from high missed-detection rates and large parameter counts. This paper proposes REG-YOLO, an improved model built on the YOLO framework and selected after experimental comparison of several candidates. By coordinating multiple modules, it improves detection performance in complex scenarios both by raising detection accuracy and by making the model more lightweight. The generalization of the improved model is further validated through data augmentation based on a large language model image-generation technique. The proposed optimizations strengthen the model's feature-extraction capability, significantly reduce both computational complexity and parameter count, and improve detection stability in complex scenes. Experimental results show that, compared with the baseline model, the improved model gains 0.5%, 1.3%, 1.1%, and 0.9% in mAP@0.5, mAP@0.5:0.95, precision, and recall, respectively, while the parameter count and computational cost fall by 29.2% and 26.4%. Against other lightweight models, REG-YOLO achieves modest gains in recall and mAP@0.5 while matching the accuracy of YOLOv11n and YOLOv12n; notably, its recall exceeds theirs by 1.6% and 1.0%, respectively. These results indicate that the enhanced model improves detection accuracy and speed, is more stable in complex scenarios, and more effectively reduces missed pedestrian detections in challenging environments while keeping energy consumption low. Moreover, to address the shortage of complex-scene samples in existing datasets, an augmented dataset covering extreme conditions was constructed with large language models.
By precisely controlling the scene characteristics and target structures of the generated images through textual prompts, the missing sample categories were filled in. This further improved REG-YOLO's generalization on the augmented dataset and validated its efficient use of computational resources.
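As a rough illustration of the prompt-controlled augmentation idea, the sketch below enumerates textual prompts that pair scene conditions with pedestrian target structures, which would then be fed to an image-generation model. The specific scene list, target descriptions, and template wording are illustrative assumptions, not the paper's actual prompt vocabulary, and the generation backend is left out entirely.

```python
from itertools import product

# Hypothetical vocabularies for extreme-condition augmentation prompts.
# These entries are assumptions for illustration only.
SCENES = ["dense fog", "heavy rain at night", "low-light tunnel", "snow glare"]
TARGETS = [
    "a partially occluded pedestrian crossing the road",
    "a small-scale pedestrian far from the camera",
    "a crowd of pedestrians with overlapping silhouettes",
]
TEMPLATE = (
    "A photorealistic street scene in {scene}, containing {target}, "
    "viewed from vehicle height with pedestrians clearly visible"
)

def build_prompts(scenes=SCENES, targets=TARGETS, template=TEMPLATE):
    """Return one prompt per (scene, target) pair for the image generator."""
    return [template.format(scene=s, target=t) for s, t in product(scenes, targets)]

prompts = build_prompts()
print(len(prompts))  # 4 scenes x 3 targets -> 12 prompts
```

Each generated image would still need pedestrian bounding-box annotations (manual or pseudo-labeled) before joining the training set; the prompt grid only controls coverage of the scene/target combinations that the original dataset lacks.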
