Abstract
OBJECTIVE: Meaningful assessments of how large language models (LLMs) incorporate clinical guidelines require large-scale testing over many queries. Here, we evaluate the prevalence of clinical guideline omissions and hallucinations in a large sample of diagnostic LLM outputs. METHODS: We used simulated case vignettes and zero-shot prompting to generate diagnostic outputs and rationales from GPT-4.1 and DeepSeek-V3. English case vignettes were created for hypercholesterolaemia and type 2 diabetes mellitus. Each vignette contained identical medical information, while sociodemographic characteristics varied in terms of sex, ethnicity and location. We calculated the prevalence of existing and hallucinated clinical guidelines in LLM outputs across disease, LLM and sociodemographic characteristics. RESULTS: We analysed a total of 12 197 LLM outputs, quantifying three hazard areas: omissions (up to 97% for DeepSeek-V3 and 46% for GPT-4.1), hallucinations (up to 9%) and inconsistencies (guideline citation rates ranging from 0% to 78.39% across sociodemographic vignettes). Omission and hallucination rates were generally similar across vignettes with different sex or ethnicity data, yet were particularly sensitive to patient location. DISCUSSION: This study highlights significant variability in clinical guideline prediction across two diseases, three sociodemographic variables and two LLMs, even when the LLMs were instructed with identical prompts, establishing guideline prediction in LLM outputs as a stochastic event. CONCLUSION: The stochastic nature of LLMs creates a unique challenge for evidence generation and clinical deployment. Being able to measure and capture this stochasticity within high-quality research designs will be a prerequisite to advancing the responsible deployment of LLMs in healthcare.
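
The prevalence estimates described above amount to straightforward stratified counting over labelled outputs. The following is a minimal Python sketch of that calculation, assuming each LLM output has already been labelled for whether it cites an existing guideline or a hallucinated one; the field names, values and stratification key are hypothetical illustrations, not the study's actual pipeline.

```python
# Hypothetical sketch: each record is one LLM output already labelled
# against the relevant clinical guideline. Field names are illustrative.
outputs = [
    {"model": "GPT-4.1", "cited_guideline": True,  "hallucinated_guideline": False},
    {"model": "GPT-4.1", "cited_guideline": False, "hallucinated_guideline": False},
    {"model": "DeepSeek-V3", "cited_guideline": False, "hallucinated_guideline": True},
    {"model": "DeepSeek-V3", "cited_guideline": False, "hallucinated_guideline": False},
]

def prevalence(records, key):
    """Share of records where the boolean field `key` is True."""
    if not records:
        return float("nan")
    return sum(r[key] for r in records) / len(records)

# Stratify by model; the same pattern applies to sex, ethnicity or location.
for model in sorted({r["model"] for r in outputs}):
    subset = [r for r in outputs if r["model"] == model]
    citation_rate = prevalence(subset, "cited_guideline")
    hallucination_rate = prevalence(subset, "hallucinated_guideline")
    omission_rate = 1 - citation_rate  # omission: no existing guideline cited
    print(f"{model}: citation {citation_rate:.0%}, "
          f"omission {omission_rate:.0%}, hallucination {hallucination_rate:.0%}")
```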