Visual recognition limitations in multimodal large language models: A comparative analysis of histological image interpretation


Abstract

Multimodal large language models (LLMs) with image recognition capabilities have emerged as potential tools for medical image analysis, yet their performance in specialized domains such as histology remains largely unexplored. The objective of this study was to systematically evaluate the performance of leading multimodal LLMs in histological image interpretation and assess their visual recognition capabilities. Four multimodal LLMs (GPT-4o, Claude Sonnet 4, Gemini 2.5 Flash, and Copilot) were evaluated using 144 histological images representing four tissue types (epithelial, connective, muscle, and nervous) at three magnification levels. Each image was assessed using three standardized questions covering tissue identification, morphological features, and functional analysis. Three expert faculty members independently graded responses on a 4-point scale (1 = Poor to 4 = Excellent). Friedman tests, intraclass correlation coefficients (ICC), and post-hoc power analyses were performed, with statistical significance set at p < .05. A clear performance hierarchy emerged, with Gemini demonstrating superior performance (mean score: 3.35/4.00) and significantly outperforming all other models. Copilot and GPT-4o tied for second place (both scoring 2.76/4.00), while Claude showed the lowest performance (2.55/4.00). Performance varied across tissue types, with epithelial tissue showing the greatest inter-model variation. Inter-rater reliability was good across all models (ICC > 0.85), confirming assessment consistency. Post-hoc power analysis validated the statistical significance of the primary comparisons but indicated insufficient power to distinguish between the three lower-performing models. Current multimodal LLMs exhibit significant limitations in visual recognition relative to their text processing performance. The substantial cross-modal performance gaps point to constraints in current visual processing architectures, though the underlying mechanisms require further investigation.
These findings establish technical benchmarks for multimodal LLM development and highlight the need for specialized visual processing innovations in their imaging pipelines.
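The core statistical comparison described above, a Friedman test across four models' per-image ratings on the 4-point scale, can be sketched as follows. The scores below are simulated around the reported means purely for illustration; they are not the study's data, and the standard deviation is an assumption.

```python
# Illustrative sketch of the model comparison: a Friedman test on
# per-image ratings (1-4 scale) for four models across the same 144 images.
# Simulated data only -- means match the abstract, spread is assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_images = 144

# Simulated per-image ratings, centred on the reported mean scores
# (Gemini 3.35, Copilot 2.76, GPT-4o 2.76, Claude 2.55), clipped to 1-4.
scores = {
    "Gemini 2.5 Flash": np.clip(rng.normal(3.35, 0.5, n_images), 1, 4),
    "Copilot":          np.clip(rng.normal(2.76, 0.5, n_images), 1, 4),
    "GPT-4o":           np.clip(rng.normal(2.76, 0.5, n_images), 1, 4),
    "Claude Sonnet 4":  np.clip(rng.normal(2.55, 0.5, n_images), 1, 4),
}

# Friedman test: non-parametric repeated-measures comparison, appropriate
# because the same images are rated under all four models.
stat, p = stats.friedmanchisquare(*scores.values())
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4g}")
```

With a significant omnibus result, the study's pairwise conclusions (e.g. Gemini vs. the rest) would follow from post-hoc tests with multiplicity correction, which is where the reported power limitation among the three lower-performing models becomes relevant.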
