Pre-Meta: priors-augmented retrieval for LLM-based metadata generation


Abstract

Motivation: While high-throughput sequencing technologies have dramatically accelerated genomic data generation, the manual processes required for dataset annotation and metadata creation impede the efficient discovery and publication of these resources across disparate public repositories. Large language models (LLMs) have the potential to streamline dataset profiling and discovery, but their current limitations in generalizing across specialized knowledge domains, particularly in fields such as biomedical genomics, prevent them from fully realizing this potential. This article presents Pre-Meta, an LLM-agnostic and domain-independent data annotation pipeline with an enriched retrieval procedure that leverages related priors, such as pre-generated metadata tags and ontologies, as auxiliary information to improve the accuracy of automated metadata generation.

Results: Validated on five selected metadata fields sampled across 1500 papers, the Pre-Meta-assisted annotation experiment, without fine-tuning or prompt optimization, demonstrates a systematic improvement on the annotation task: accuracy gains of 23%, 72%, and 75% over conventional RAG with GPT-4o mini, Llama 8B, and Mistral 7B, respectively.

Availability and implementation: The code, data access, and scripts are available at: https://github.com/SINTEF-SE/LLMDap.
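The core idea, enriching the retrieved context with related priors before prompting the LLM, can be sketched as a prompt-assembly step. The sketch below is illustrative only: the function and parameter names are hypothetical and do not reflect the actual Pre-Meta/LLMDap implementation, which should be consulted in the linked repository.

```python
def build_annotation_prompt(field, passages, prior_tags, ontology_terms):
    """Assemble an LLM prompt for one metadata field (hypothetical sketch).

    passages: text snippets retrieved from the paper (conventional RAG).
    prior_tags: metadata values pre-generated for related datasets.
    ontology_terms: candidate terms drawn from a domain ontology.
    """
    context = "\n".join(f"- {p}" for p in passages)
    # Merge the two kinds of priors into one deduplicated candidate list.
    priors = ", ".join(sorted(set(prior_tags) | set(ontology_terms)))
    return (
        f"Annotate the metadata field '{field}'.\n"
        f"Context from the paper:\n{context}\n"
        f"Known related values (priors): {priors}\n"
        f"Answer with a single value for '{field}'."
    )

# Example: annotating an 'organism' field with one retrieved passage.
prompt = build_annotation_prompt(
    field="organism",
    passages=["Samples were collected from Mus musculus liver tissue."],
    prior_tags=["Mus musculus"],
    ontology_terms=["Homo sapiens", "Mus musculus"],
)
print(prompt)
```

The prompt would then be sent unchanged to any chat-style LLM, which is what makes such a pipeline LLM-agnostic: the prior enrichment happens purely on the input side, with no fine-tuning of the model itself.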
