Abstract
BACKGROUND: To assess the effectiveness of ChatGPT and Bard in the initial identification of articles for Otolaryngology-Head and Neck Surgery systematic literature reviews.

METHODS: Three PRISMA-based systematic reviews (Jabbour et al. 2017, Wong et al. 2018, and Wu et al. 2021) were replicated using ChatGPT v3.5 and Bard. Outputs (author, title, publication year, and journal) were compared to the original references and cross-referenced with medical databases to verify authenticity and measure recall.

RESULTS: Several themes emerged when comparing Bard and ChatGPT across the three reviews. Bard generated more outputs overall and achieved greater recall for Wong et al.'s review, and it covered a broader date range in Jabbour et al.'s review. For Wu et al.'s review, ChatGPT-2 achieved higher recall and identified more authentic outputs than Bard-2.

CONCLUSION: Large language models (LLMs) failed to fully replicate the peer-reviewed methodologies, producing outputs containing inaccuracies, yet they identified relevant articles, particularly recent ones, that were missed by the original reference lists. While human-led PRISMA-based reviews remain the gold standard, refining LLMs for literature reviews shows potential.