Abstract
Surgical image captioning is critical for automated reporting and education but is currently limited by the scarcity of long-text datasets and the tendency of generic Multimodal Large Language Models (MLLMs) to hallucinate medical details. To address this, we present a comprehensive framework for long-text surgical captioning. First, we construct a verified long-text benchmark extending the EndoVis2018 dataset, using an automated pipeline with expert-in-the-loop validation to transform brief triplets into rich narratives. Second, we investigate domain-specific adaptation strategies for MLLMs. We implement a surgical concept retrieval-augmented generation (RAG) mechanism that dynamically injects specialized knowledge of instruments and actions into the visual encoder, effectively mitigating the domain-specific hallucinations common in generic models. Finally, recognizing the inadequacy of n-gram metrics for long medical text, we establish a robust evaluation protocol based on clinically aligned metrics. Extensive experiments demonstrate that our data-centric, retrieval-enhanced approach significantly outperforms baselines in producing clinically accurate and coherent long-form descriptions.