Human-AI collaboration enhances the performance of large language models in risk of bias assessment


Abstract

BACKGROUND: Risk of bias (RoB) assessment is essential in systematic reviews and clinical guideline development. Current manual processes are complex, inefficient, and inconsistent. Large language models (LLMs) have the potential to assist RoB assessment, but their standalone performance is limited. Efficient integration of LLMs with human judgment remains a challenge. METHODS: A high-quality dataset of medical literature was developed, encompassing the seven bias domains defined by the Cochrane RoB 1.0 tool. Using structured prompt engineering, two LLMs, DeepSeek V3 and Qwen-plus, were compared on three-class bias risk classification tasks. Three Human-AI collaboration modes were developed: (M1) Evidence Extraction Mode—human judgment based solely on LLM-extracted evidence; (M2) Reasoning Support Mode—combining LLM-extracted evidence with reasoning explanations; (M3) Disagreement Trigger Mode—human intervention triggered by model disagreement. Accuracy and human intervention rates were used to evaluate performance and efficiency. RESULTS: We randomly sampled 300 instances per domain from the dataset and evaluated both LLMs using the same structured prompting approach. DeepSeek V3 outperformed Qwen-plus in accuracy in four of the seven bias domains, demonstrating superior overall judgment. Model accuracy ranged from 0.403 to 0.777 across domains. In Human-AI collaboration, 60 samples per domain were evaluated with human involvement. M1 showed relatively limited performance in both accuracy and intervention rate; M2 achieved the highest accuracy in most tasks; M3 markedly reduced intervention rates. Accuracy under M2 and M3 ranged from 0.633 to 0.900. Incorporating LLM reasoning improved the consistency of human judgments. Disagreement Trigger Mode was highly cost-effective in structured and moderate-reasoning tasks, enhancing assessment efficiency, while Reasoning Support Mode was more stable and practical for open-ended and highly subjective tasks.
CONCLUSIONS: LLMs (such as DeepSeek V3 and Qwen-plus) cannot yet replace human RoB assessment, but well-designed Human-AI collaboration can improve accuracy and reduce manual workload. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12874-025-02763-3.
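The routing logic behind the three collaboration modes can be illustrated with a minimal sketch. This is not the authors' implementation; the class and function names (`ModelOutput`, `assess_m3`, `human_review`) are hypothetical, and the sketch assumes each model returns a three-class label ("low", "unclear", "high") plus extracted evidence and reasoning, as the abstract describes. In M1/M2 a human always judges (using evidence alone, or evidence plus reasoning); in M3 a human is consulted only when the two models disagree, which is what drives its lower intervention rate:

```python
from dataclasses import dataclass

RISK_LEVELS = ("low", "unclear", "high")  # three-class RoB labels


@dataclass
class ModelOutput:
    """Structured output from one LLM for one bias domain."""
    evidence: str   # quoted text extracted from the article
    reasoning: str  # model's explanation (used in M2)
    label: str      # one of RISK_LEVELS


def assess_m3(out_a: ModelOutput, out_b: ModelOutput, human_review):
    """Disagreement Trigger Mode (M3): accept the shared label when the
    two models agree; otherwise escalate to a human reviewer.

    Returns (final_label, human_intervened).
    """
    if out_a.label == out_b.label:
        return out_a.label, False          # models agree: no human cost
    return human_review(out_a, out_b), True  # disagreement: human decides
```

Under this scheme the human intervention rate equals the model disagreement rate, so M3 trades a small accuracy risk on "silent" agreement errors for a large reduction in manual workload, matching the cost-effectiveness pattern reported for structured tasks.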
