Abstract
The rapid growth of short-video platforms has reshaped how individuals access health information, but it has also fueled the spread of misinformation and disinformation. Dry eye, a prevalent ocular surface disorder, offers a representative case for examining these challenges. Reliable and scalable methods are urgently needed to identify and mitigate misinformation risks in online health content. We propose a framework that employs Video Large Language Models (VideoLLMs) for the automated evaluation of science popularization videos. Three representative VideoLLMs (VideoLLaMA3, QwenVL, and InternVL) were benchmarked using three established instruments: the Patient Education Materials Assessment Tool for Audiovisual Materials (PEMAT-A/V), the Global Quality Score (GQS), and the Video Information and Quality Index (VIQI). A dataset of 185 Chinese-language videos on dry eye was collected from TikTok and independently annotated by two ophthalmologists. Agreement between VideoLLM-generated scores and expert ratings was quantified using the Intraclass Correlation Coefficient (ICC). Across most metrics, the VideoLLMs showed poor agreement with expert annotations (ICC < 0.40); the exception was the actionability dimension of PEMAT-A/V, on which QwenVL and InternVL achieved ICCs of 0.50 and 0.43, respectively, against expert ratings. This work establishes the first benchmark of VideoLLMs for evaluating ophthalmic science popularization videos and reveals substantial limitations in current models, whose agreement levels fall well short of practical acceptability. Rather than demonstrating readiness for deployment, our open-source framework serves as a reference tool for systematically assessing model behavior, highlighting existing gaps, and motivating further methodological improvements before VideoLLMs can be considered for automated evaluation or governance of medical video content.
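The abstract quantifies model-expert agreement with the Intraclass Correlation Coefficient. As a minimal sketch of how such agreement can be computed, the snippet below uses the pingouin library on a toy long-format table; the column names, the toy scores, and the choice of the ICC(2) variant (two-way random effects, absolute agreement) are illustrative assumptions, not details taken from the paper.

```python
# Minimal ICC sketch (illustrative; not the paper's actual pipeline).
import pandas as pd
import pingouin as pg

# Long-format table: one row per (video, rater) pair, where "rater" is
# either an expert rating or a VideoLLM-generated score. Toy data only.
df = pd.DataFrame({
    "video": [1, 1, 2, 2, 3, 3, 4, 4],
    "rater": ["expert", "model"] * 4,
    "score": [4.0, 3.5, 2.0, 3.0, 5.0, 4.5, 3.0, 2.5],  # e.g., GQS (1-5)
})

icc = pg.intraclass_corr(data=df, targets="video", raters="rater",
                         ratings="score")
# ICC2 (two-way random effects, absolute agreement) is a common choice
# for rater-agreement studies; the paper does not specify its variant.
print(icc.set_index("Type").loc["ICC2", ["ICC", "CI95%"]])
```

In practice, one such table would be built per instrument dimension (e.g., PEMAT-A/V understandability or actionability), yielding one ICC per model-dimension pair against the expert ratings.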