MILU: a consensus ensemble benchmark for multimodal medical imaging lecture understanding


Abstract

PURPOSE: Vision-language models (VLMs) are increasingly used to interpret multimodal educational materials, yet their reliability on diagram-, equation-, and text-dense scientific lecture slides remains poorly understood. This work introduces Medical Imaging Lecture Understanding (MILU), a large-scale benchmark designed to characterize cross-model variability in structured understanding of real medical imaging lectures.

APPROACH: MILU comprises 23 lecture sets totaling 1117 slides. LLaVA-OneVision, InternVL3-14B, Qwen2-VL-7B, and Qwen3-VL-4B were evaluated with unified prompts that elicit structured JSON. We assessed parsing coverage, pairwise agreement, lecture-level patterns, and alignment with a simple consensus ensemble designed to identify concepts and relations shared across slides and models.

RESULTS: All models produced valid JSON for most slides (92% to 99% coverage), but semantic agreement was extremely low: pairwise concept Jaccard indices ranged from 0.03 to 0.09, and triple-level F1 scores from 0.001 to 0.033. Lecture-level patterns revealed higher stability in mathematically structured lectures and lower stability in diagram-heavy content. The consensus ensemble showed modest alignment with individual models (concept Jaccard 0.056 to 0.179; triple F1 0.014 to 0.044), exposing areas of consistent convergence while also highlighting systematic disagreement.

CONCLUSIONS: MILU provides the first comprehensive benchmark for evaluating structured understanding of scientific lecture slides. The results show that current VLMs achieve high formatting reliability but low semantic consistency. MILU establishes a foundation for future expert-annotated benchmarks, diagram- and math-aware modeling, and improved methods for scientific lecture interpretation.
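The agreement metrics named in the abstract can be made concrete with a small sketch. The Python snippet below illustrates pairwise concept Jaccard, exact-match triple F1, and one plausible majority-vote reading of the consensus ensemble; the abstract does not specify the actual implementation, so the voting rule, function names, and example concept sets are all illustrative assumptions, not the paper's method.

```python
# Minimal sketch of the agreement metrics described in the abstract.
# Assumptions (not specified in the source): each model's output per slide
# is a set of concept strings or a set of (subject, relation, object)
# triples; the consensus keeps any item produced by a majority of models.

from collections import Counter

def jaccard(a: set, b: set) -> float:
    """Jaccard index |A ∩ B| / |A ∪ B|; defined as 0 for two empty sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def triple_f1(pred: set, ref: set) -> float:
    """F1 over exact-match items between a prediction and a reference set."""
    if not pred or not ref:
        return 0.0
    tp = len(pred & ref)
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(ref)
    return 2 * precision * recall / (precision + recall)

def majority_consensus(model_outputs: list[set]) -> set:
    """Keep items emitted by at least half of the models (illustrative rule)."""
    counts = Counter(item for output in model_outputs for item in output)
    threshold = len(model_outputs) / 2
    return {item for item, n in counts.items() if n >= threshold}

# Example: per-slide concept sets from three hypothetical models.
outputs = [
    {"CT", "attenuation", "Hounsfield unit"},
    {"CT", "Hounsfield unit", "windowing"},
    {"CT", "windowing", "kernel"},
]
consensus = majority_consensus(outputs)  # {"CT", "Hounsfield unit", "windowing"}
print(jaccard(outputs[0], outputs[1]))   # pairwise concept Jaccard -> 0.5
print(triple_f1(outputs[0], consensus))  # model-vs-consensus F1 (concepts stand in for triples)
```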
