Benchmarking Large Language Models for MIMIC-IV Clinical Note Summarization


Abstract

Large Language Models (LLMs) are increasingly applied in healthcare and are expected to play an active role in clinical practice. However, their effectiveness for clinical note summarization remains underexplored, and systematic comparisons across different models are lacking. This study addresses this gap by benchmarking 16 generative LLMs from major providers, including OpenAI (GPT), DeepSeek, Meta (LLaMA), Google (Gemma), Mistral (Mixtral), and Alibaba (Qwen), on the MIMIC-IV-Note dataset. Both extractive and abstractive summarization approaches were implemented and evaluated with multiple lexical and semantic metrics, including ROUGE, BLEU, METEOR, COMET, and BERTScore. In addition, processing time, cost, and deployment feasibility were assessed to provide a practical perspective on clinical adoption. The results show that Gemma-3-27B achieved the strongest overall performance in extractive summarization, while DeepSeek-R1-70B, Qwen-3-32B, and GPT-4o emerged as the leading models for abstractive summarization; their relative strengths varied depending on whether lexical overlap, semantic adequacy, or fluency was prioritized. Importantly, larger parameter counts did not always translate into better outcomes: smaller models such as LLaMA-3-8B and Gemma-2-9B often produced competitive results with faster runtimes and lower computational costs. This study highlights the trade-offs among performance, efficiency, and deployment context, offering practical insights for model selection in clinical note summarization and informing the future integration of LLMs into healthcare workflows.
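To make the lexical metrics mentioned above concrete, the following is a minimal sketch of ROUGE-1 (unigram overlap F-measure), one of the simplest of the listed metrics. This is an illustrative simplification using lowercased whitespace tokens; the paper's actual evaluation pipeline and the reference ROUGE implementations additionally apply stemming and more careful tokenization, and the example texts below are hypothetical.

```python
from collections import Counter

def rouge1_f(reference: str, candidate: str) -> float:
    """ROUGE-1 F-measure: clipped unigram overlap between a reference
    summary and a candidate summary.

    Simplified sketch: lowercased whitespace tokenization, no stemming.
    """
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum((ref & cand).values())  # matches clipped by reference counts
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical reference/candidate pair for illustration only
print(rouge1_f("the patient was discharged home",
               "patient discharged home"))  # → 0.75
```

Here precision is 3/3 and recall is 3/5, giving an F-measure of 0.75; the semantic metrics (BERTScore, COMET) instead compare learned embeddings, so they can reward paraphrases that share no surface tokens.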
