Automated information extraction from plant specimen labels using OCR and large language models


Abstract

The digitization of herbarium specimens is crucial for advancing biodiversity research and data sharing. However, the process is often hindered by the inefficiency of manual transcription and by the technical challenges posed by the massive volume of specimens, heterogeneous label layouts, and the prevalence of handwritten text. To overcome these bottlenecks, this study proposes an automated pipeline that integrates the PaddleOCR engine with the DeepSeek large language model (LLM) for structured information extraction from specimen labels. The pipeline extracts 16 key metadata fields from both printed and handwritten labels. Evaluated on a benchmark dataset, it achieved a field-level accuracy of 95.4% on printed labels, demonstrating strong reliability. For handwritten labels, the system remained functional while correctly flagging its limitations through a confidence-based quality control mechanism. A key finding was the compensatory role of the LLM, which effectively corrected upstream optical character recognition (OCR) errors, as evidenced by the weak correlation (r = 0.32) between OCR confidence and final extraction accuracy. The hybrid architecture ensures data security through local image processing and cost-efficiency through text-only LLM parsing. This work provides a robust, scalable, and practical solution for accelerating the digitization of botanical collections; the method is directly applicable to real-world digitization workflows and promises to significantly improve the efficiency of biodiversity data creation and sharing.
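The two-stage design described above can be sketched as follows. This is a minimal illustrative mock, not the authors' implementation: stage one (local OCR, here represented by pre-computed text/confidence pairs) keeps images on-premises, and stage two sends only plain text to an LLM for structured extraction. The field names, the 0.90 confidence threshold, and the helper names (`quality_gate`, `build_prompt`) are all assumptions for illustration; the paper's pipeline extracts 16 fields.

```python
# Hypothetical sketch of the abstract's pipeline: local OCR output is
# screened by a confidence-based quality gate (handwritten labels tend to
# fail it and are flagged for review), then a text-only prompt is built
# for the LLM. Field list and 0.90 threshold are illustrative assumptions.

FIELDS = ["scientificName", "collector", "collectionNumber",
          "eventDate", "locality", "habitat"]  # subset of the 16 fields

def quality_gate(ocr_lines, min_mean_conf=0.90):
    """Return True if mean OCR confidence is high enough for automatic
    parsing; False flags the label (e.g. handwriting) for manual review."""
    if not ocr_lines:
        return False
    mean_conf = sum(conf for _, conf in ocr_lines) / len(ocr_lines)
    return mean_conf >= min_mean_conf

def build_prompt(ocr_lines):
    """Assemble a text-only LLM prompt requesting JSON with the target
    fields; the image itself is never transmitted."""
    label_text = "\n".join(text for text, _ in ocr_lines)
    return (
        "Extract the following fields from this herbarium label text and "
        f"return JSON ({', '.join(FIELDS)}); use null for missing values."
        f"\n\n{label_text}"
    )

# Example: a printed label recognized with high confidence.
lines = [("Quercus robur L.", 0.98),
         ("Coll. J. Smith 1234", 0.95),
         ("12 May 1987, oak woodland", 0.93)]
print(quality_gate(lines))   # True -> safe to parse automatically
print(build_prompt(lines))
```

Because the gate acts on aggregate confidence rather than rejecting any single uncertain token, it matches the abstract's observation that the LLM can compensate for individual OCR errors while genuinely unreadable labels are still routed to a human.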
