Abstract
BACKGROUND: Patient-facing large language model (LLM) outputs for inflammatory bowel disease (IBD) must be decision-relevant, readable, and verifiable. METHODS: In a cross-sectional benchmark using a guideline-derived question set, five publicly available LLMs answered 20 single-intent patient IBD questions mapped to prespecified decision-critical domains across the care pathway (100 model-question responses). Queries were conducted from January 17-24, 2026, via official web interfaces under default settings (privacy mode; new chat per prompt). Two blinded raters evaluated informational quality and completeness (DISCERN, EQIP, and the Global Quality Scale) and transparency proxies (JAMA benchmark criteria); readability was computed with the Automated Readability Index, Flesch Reading Ease, Gunning Fog Index, Flesch-Kincaid Grade Level, Coleman-Liau Index, and SMOG. Differences across models were assessed with Friedman tests paired within question and Holm adjustment, and effect sizes were quantified with Kendall's W. RESULTS: Interrater agreement was high [DISCERN ICC(A,1) = 0.842; EQIP ICC(A,1) = 0.760; GQS weighted κ = 0.812; JAMA weighted κ = 0.936]. Median DISCERN scores ranged from 43.5 to 57.5 and median EQIP scores from 67.5 to 77.5, while transparency remained limited (JAMA median 0-1 of 4). Readability consistently failed to meet patient targets: grade-level indices exceeded the sixth-grade level, and Flesch Reading Ease medians ranged from 15 to 36 (versus a target of ≥80 for "easy" readability). All 10 outcomes varied significantly across models (Holm-adjusted P < 0.001; Kendall's W = 0.238-0.702). CONCLUSION: Under default settings, publicly available LLMs exhibit variable informational quality for IBD but consistently poor transparency and readability. Patient-facing deployment should mandate provenance, currency, and disclosure fields, and outputs should be written at appropriate reading grade levels.
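As a concrete illustration of the outcome-level comparison named in the METHODS, the sketch below shows one plausible way to compute a Friedman test paired within question, a Holm correction, and Kendall's W for each outcome. It is not the study's code: the `scores` matrices and outcome names are hypothetical placeholders, and applying the Holm correction across the family of outcome-level tests is an assumed reading of the analysis. Only `scipy.stats.friedmanchisquare` and `statsmodels.stats.multitest.multipletests` are real library calls.

```python
# Minimal sketch of a per-outcome Friedman/Holm/Kendall's W analysis.
# Data, outcome names, and the Holm family are illustrative assumptions only.
import numpy as np
from scipy.stats import friedmanchisquare
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
outcomes = {  # hypothetical stand-ins for the 10 outcomes (questions x models)
    "DISCERN": rng.integers(30, 70, size=(20, 5)).astype(float),
    "FleschReadingEase": rng.uniform(10, 40, size=(20, 5)),
}

raw_p, effect_w = {}, {}
for name, scores in outcomes.items():
    n_q, k = scores.shape
    # Friedman test paired within question: one column of scores per model
    stat, p = friedmanchisquare(*[scores[:, j] for j in range(k)])
    raw_p[name] = p
    # Kendall's W derived from the Friedman chi-square: W = chi2 / (n * (k - 1))
    effect_w[name] = stat / (n_q * (k - 1))

# Holm adjustment across the outcome-level tests (assumed correction family)
names = list(raw_p)
_, p_holm, _, _ = multipletests([raw_p[n] for n in names], method="holm")
for n, p_adj in zip(names, p_holm):
    print(f"{n}: Holm-adjusted P = {p_adj:.4f}, Kendall's W = {effect_w[n]:.3f}")
```

In this sketch, Kendall's W is recovered from the Friedman statistic rather than computed from ranks directly; the two are algebraically equivalent for complete, untied rankings and give a convenient 0-1 effect-size scale comparable to the values reported in the RESULTS.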