Abstract
BACKGROUND: Risk of bias (RoB) assessment is essential in systematic reviews and clinical guideline development, but current manual processes are complex, inefficient, and inconsistent. Large language models (LLMs) have the potential to assist RoB assessment, yet their standalone performance is limited, and efficient integration of LLMs with human judgment remains a challenge.

METHODS: A high-quality dataset of medical literature was developed, encompassing the seven bias domains defined by the Cochrane RoB 1.0 tool. Using structured prompt engineering, two LLMs, DeepSeek V3 and Qwen-plus, were compared on three-class bias risk classification tasks. Three Human-AI collaboration modes were developed: (M1) Evidence Extraction Mode, in which human judgment is based solely on LLM-extracted evidence; (M2) Reasoning Support Mode, which combines LLM-extracted evidence with reasoning explanations; and (M3) Disagreement Trigger Mode, in which human intervention is triggered by disagreement between the models. Accuracy and human intervention rates were used to evaluate performance and efficiency.

RESULTS: We randomly sampled 300 instances per domain from the dataset and evaluated both LLMs using the same structured prompting approach. DeepSeek V3 outperformed Qwen-plus in accuracy in four of the seven bias domains, demonstrating superior overall judgment. Model accuracy ranged from 0.403 to 0.777 across domains. In Human-AI collaboration, 60 samples per domain were evaluated with human involvement. M1 showed relatively limited performance in both accuracy and intervention rate; M2 achieved the highest accuracy in most tasks; M3 markedly reduced intervention rates. Accuracy under M2 and M3 ranged from 0.633 to 0.900. Incorporating LLM reasoning improved the consistency of human judgments. Disagreement Trigger Mode was highly cost-effective for structured and moderate-reasoning tasks, enhancing assessment efficiency, while Reasoning Support Mode was more stable and practical for open-ended and highly subjective tasks.
CONCLUSIONS: LLMs such as DeepSeek V3 and Qwen-plus cannot yet replace human RoB assessment, but well-designed Human-AI collaboration can improve accuracy and reduce manual workload.

SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12874-025-02763-3.