Abstract
BACKGROUND: Multimodal large language models (MLLMs) capable of integrating visual and textual information represent a promising advancement for clinical applications requiring image interpretation. Wound care assessment, which demands simultaneous analysis of wound photographs and clinical data, provides an ideal domain for evaluating multimodal vs unimodal artificial intelligence capabilities against human expertise.

OBJECTIVE: This study aims to compare the performance of MLLMs, a unimodal large language model (ChatGPT 5.0), and human clinical experts on a standardized wound care certification examination.

METHODS: This cross-sectional comparative study evaluated 3 participant groups on a 25-question wound care certification examination spanning 4 clinical domains (Diagnosis, Treatment, Complication Management, and Wound Subtype Knowledge). Participants included 3 MLLMs (Med-PaLM 2, LLaVA-Med, and BioGPT), 1 unimodal large language model (ChatGPT 5.0), and 4 human clinical experts (1 general surgeon, 1 wound care nurse, and 2 internal medicine physicians). Statistical analyses included one-way ANOVA with Tukey post hoc tests and domain-specific Kruskal-Wallis comparisons.

RESULTS: Human experts achieved the highest accuracy (mean 86%, SD 9.1%), followed by MLLMs (mean 78.7%, SD 12.2%), while ChatGPT 5.0 achieved 64% accuracy, falling below the 70% certification passing threshold. Significant overall group differences were observed (F2,5=8.42, P=.02, η²=0.74). MLLMs significantly outperformed ChatGPT 5.0 (difference=14.7 percentage points, P=.03, Cohen d=1.38), with the multimodal advantage most pronounced in the visually dependent domains of Diagnosis (81% vs 43%, P=.008) and Complication Management (72% vs 50%, P=.03). No multimodal advantage was observed for the text-based Wound Subtype Knowledge domain (both 67%). Med-PaLM 2 achieved 92% accuracy, matching the wound care nurse, while the general surgeon achieved the highest overall performance (96%).

CONCLUSIONS: MLLMs demonstrate significant performance advantages over unimodal artificial intelligence in wound care assessment, particularly for visually dependent clinical tasks. While human experts with specialized wound care experience maintain overall superiority, the point estimate of the top-performing MLLM (Med-PaLM 2, 92%) fell within the observed range of human scores; however, the underpowered comparison (power=0.52) and wide CIs preclude definitive conclusions regarding noninferiority or equivalence to human experts. These findings support a potential role for MLLMs as clinical decision-support tools and warrant further, adequately powered validation studies.
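As a minimal illustration of the analysis pipeline named above (one-way ANOVA with a Tukey post hoc test and Cohen d), the Python sketch below runs the overall group comparison on placeholder accuracy scores. Only the values stated in the abstract (Med-PaLM 2, 92%; ChatGPT 5.0, 64%; general surgeon, 96%; wound care nurse, 92%) are taken from the text; the remaining scores are assumptions chosen only to mirror the group sizes (3 MLLMs, 1 unimodal model, 4 human experts), and the output is not intended to reproduce the reported statistics.

# Minimal sketch, assuming placeholder per-rater accuracies: one-way ANOVA,
# Tukey HSD post hoc, and Cohen d for the MLLM vs unimodal contrast.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

mllm = np.array([92.0, 76.0, 68.0])         # Med-PaLM 2 (92, from abstract) + 2 placeholder MLLM scores (%)
unimodal = np.array([64.0])                 # ChatGPT 5.0 (single score, from abstract)
human = np.array([96.0, 92.0, 80.0, 76.0])  # surgeon and nurse (from abstract) + 2 placeholder scores

# Overall comparison across the 3 groups; with 8 raters this yields F(2, 5)
f_stat, p_val = f_oneway(mllm, unimodal, human)
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p_val:.3f}")

# Tukey post hoc over all pairwise group contrasts (pooled error term, alpha = .05)
scores = np.concatenate([mllm, unimodal, human])
groups = ["MLLM"] * len(mllm) + ["unimodal"] * len(unimodal) + ["human"] * len(human)
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))

# Cohen d for MLLMs vs the unimodal model; the n = 1 group contributes
# no within-group variance, so the pooled SD reduces to the MLLM term
pooled_sd = np.sqrt((len(mllm) - 1) * mllm.var(ddof=1) /
                    (len(mllm) + len(unimodal) - 2))
print(f"Cohen d = {(mllm.mean() - unimodal.mean()) / pooled_sd:.2f}")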