Abstract
The diagnostic potential of multimodal large language models (LLMs) in oral medicine remains underexplored, particularly in real-world clinical contexts. This study introduces Vision-Based Diagnostic Gain (VWDG) as a novel metric to quantify the incremental diagnostic value of incorporating images into AI-assisted diagnosis of oral lesions. We conducted a prospective, biopsy-validated, case-matched study of 200 oral lesion cases with clinical photographs and radiographs of variable quality. ChatGPT-5 and Gemini 2.5 Pro were evaluated against board-certified oral medicine experts. Each case was presented under two conditions: text-only and multimodal (text plus images). Diagnostic accuracy was measured across Top-1, Top-3, and Top-5 differentials. VWDG was defined as the absolute and relative improvement in diagnostic accuracy between the multimodal and text-only conditions. Cochran's Q and paired McNemar tests with effect sizes quantified differences across models and conditions, with analyses stratified by lesion type and diagnostic difficulty. Both models demonstrated strong baseline diagnostic accuracy, but their performance diverged with image integration. ChatGPT-5 achieved significant VWDG across all thresholds: Top-1 gain +19 percentage points (pp), Top-3 gain +18 pp, and Top-5 gain +14 pp (all p < 0.001). In contrast, Gemini 2.5 Pro showed negligible or even negative gain (0 pp at Top-1/Top-3; -2 pp at Top-5). Stratified analyses confirmed that ChatGPT-5 benefited most from visual input in malignant and diagnostically difficult cases, whereas Gemini's strength remained in text-dominant contexts. Human experts consistently outperformed both models in simple and benign presentations. By introducing and applying VWDG, this study provides the first expert-anchored, head-to-head evaluation of next-generation multimodal LLMs in oral medicine.
ChatGPT-5 functions as a visual synergist, Gemini as a textual expert, and their complementary strengths suggest a cooperative human-AI diagnostic paradigm. VWDG offers a clinically meaningful framework for benchmarking AI models and guiding safe, context-aware integration into practice.
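To make the metric and test concrete, the sketch below shows one plausible way to compute VWDG and the paired McNemar comparison described above. The function names, the exact formulas, and the illustrative accuracy figures (a hypothetical text-only Top-1 accuracy of 0.60 rising to 0.79 with images, matching the reported +19 pp gain) are assumptions for illustration, not the authors' implementation.

```python
from math import comb

def vwdg(acc_multimodal, acc_text_only):
    """Vision-Based Diagnostic Gain: absolute and relative
    improvement of multimodal over text-only accuracy (assumed formulas)."""
    absolute = acc_multimodal - acc_text_only
    relative = absolute / acc_text_only if acc_text_only else float("nan")
    return absolute, relative

def mcnemar_exact(b, c):
    """Two-sided exact McNemar test on the paired discordant counts:
    b = cases correct only with images, c = correct only without.
    Under H0, the smaller count follows Binomial(b + c, 0.5)."""
    n = b + c
    k = min(b, c)
    p = sum(comb(n, i) for i in range(k + 1)) / 2 ** (n - 1)
    return min(p, 1.0)  # cap at 1 when b == c

# Illustrative use with assumed accuracies (0.79 multimodal vs 0.60 text-only):
abs_gain, rel_gain = vwdg(0.79, 0.60)
```

Reporting both the absolute gain (in percentage points, as the abstract does) and the relative gain keeps the metric interpretable whether baseline accuracy is high or low.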