Abstract
INTRODUCTION: Systematic literature reviews (SLRs) of randomized clinical trials (RCTs) underpin evidence-based medicine but can be limited by the intensive resource demands of data extraction. Recent advances in accessible large language models (LLMs) hold promise for automating this step; however, testing remains limited across different outcomes and disease areas.

METHODS: This study developed prompt engineering strategies for GPT-4o to extract data from RCTs across three disease areas: non-small cell lung cancer, endometrial cancer, and hypertrophic cardiomyopathy. Prompts were iteratively refined during a development phase and then tested on unseen data. Performance was evaluated against human extraction of the same data using F1 scores, precision, recall, and percentage accuracy.

RESULTS: The LLM was highly effective at extracting study and baseline characteristics, often matching human performance, with test F1 scores exceeding 0.85. Complex efficacy and adverse event data proved more challenging, with test F1 scores ranging from 0.22 to 0.50. Transferability of prompts across disease areas was promising but variable, highlighting the need for disease-specific refinement.

CONCLUSION: Our findings demonstrate the potential of LLMs, guided by rigorous prompt engineering, to augment the SLR process. However, human oversight remains essential, particularly for complex and nuanced data. As these technologies evolve, continued validation of AI tools will be necessary to ensure accuracy and reliability and to safeguard the quality of evidence synthesis.
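For reference, the evaluation metrics named above follow their standard definitions; a minimal statement, assuming true positives (TP), false positives (FP), and false negatives (FN) are counted per extracted data item:

\[
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
\]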