Abstract
AIM: This study employs a systematic review and meta-analysis to evaluate the accuracy and sensitivity (i.e., the rate of correctly identifying high-risk patients at triage levels 1–2) of ChatGPT in adult emergency triage and to compare the performance of its different versions.
METHODS: A systematic search of English and Chinese medical databases (2022–2025) was performed to identify studies directly comparing the triage performance of ChatGPT with that of human experts or a gold standard. The included studies comprised both simulated case scenarios and real-world clinical data from emergency departments. Literature screening adhered to the PICO framework, and methodological quality was evaluated with the QUADAS-2 tool. Data on overall accuracy and sensitivity were extracted. Meta-analysis was conducted in RevMan 5.4, with a predefined subgroup analysis by ChatGPT version.
RESULTS: A total of 15 studies were included. The pooled accuracy was 0.51 (95% CI: 0.48–0.55) for ChatGPT-3.5, 0.70 (95% CI: 0.58–0.81) for ChatGPT-4 and its variants, and 0.81 for optimized versions of ChatGPT (with the upper bound of the 95% confidence interval approaching 1.0 owing to high heterogeneity). Overall, no significant differences were observed between ChatGPT and human triage in either accuracy or sensitivity. Subgroup analyses showed that ChatGPT-3.5 performed similarly to humans on both metrics. In contrast, ChatGPT-4 and its variants demonstrated higher accuracy (RR = 1.04, P < 0.001) and sensitivity (RR = 1.46, P = 0.04) than human triage, but these advantages were not robust: sensitivity analyses revealed that the statistical significance depended on the inclusion or exclusion of specific studies.
CONCLUSION: ChatGPT, particularly ChatGPT-4 and its variants, shows promising capability in identifying high-risk cases during triage, and version upgrades and model optimization are associated with enhanced performance.
Nevertheless, the evidence supporting the superiority of ChatGPT-4 remains fragile. Consequently, clinical application should be approached with caution and tailored to specific contexts, with ChatGPT serving as an auxiliary tool rather than a replacement for human triage.