Harnessing Large-Language Models for Efficient Data Extraction in Systematic Reviews: The Role of Prompt Engineering


Abstract

INTRODUCTION: Systematic literature reviews (SLRs) of randomized clinical trials (RCTs) underpin evidence-based medicine but can be limited by the resource-intensive demands of data extraction. Recent advances in accessible large language models (LLMs) hold promise for automating this step; however, testing has been limited across different outcomes and disease areas.

METHODS: This study developed prompt engineering strategies for GPT-4o to extract data from RCTs across three disease areas: non-small cell lung cancer, endometrial cancer and hypertrophic cardiomyopathy. Prompts were iteratively refined during a development phase, then tested on unseen data. Performance was evaluated by comparison with human extraction of the same data, using F1 scores, precision, recall and percentage accuracy.

RESULTS: The LLM was highly effective at extracting study and baseline characteristics, often equaling human performance, with test F1 scores exceeding 0.85. Complex efficacy and adverse event data proved more challenging, with test F1 scores ranging from 0.22 to 0.50. Transferability of prompts across disease areas was promising but varied, highlighting the need for disease-specific refinement.

CONCLUSION: Our findings demonstrate the potential of LLMs, guided by rigorous prompt engineering, to augment the SLR process. However, human oversight remains essential, particularly for complex and nuanced data. As these technologies evolve, continued validation of AI tools will be necessary to ensure accuracy and reliability and to safeguard the quality of evidence synthesis.
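The evaluation described above scores LLM output against human extraction using precision, recall and F1. A minimal sketch of that comparison is shown below; the field names and values are hypothetical illustrations, not data from the study.

```python
# Compare LLM-extracted data points against a human reference extraction
# and compute precision, recall, and F1. Items are (field, value) pairs;
# the human extraction is treated as ground truth.

def precision_recall_f1(llm_items: set, human_items: set):
    """Score an LLM extraction against a human reference extraction."""
    true_pos = len(llm_items & human_items)          # items both agree on
    precision = true_pos / len(llm_items) if llm_items else 0.0
    recall = true_pos / len(human_items) if human_items else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Hypothetical baseline characteristics extracted from a single RCT report.
human = {("n_randomized", "304"), ("median_age", "63"), ("pct_male", "61")}
llm = {("n_randomized", "304"), ("median_age", "63"), ("pct_male", "59")}

p, r, f1 = precision_recall_f1(llm, human)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```

Because each erroneous value counts once as a false positive (in the LLM set) and once as a false negative (missing from the human set), a single wrong field lowers precision and recall symmetrically, which is why F1 is a natural summary metric here.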
