Abstract
Large Language Models (LLMs) are increasingly applied in healthcare and are expected to play an active role in clinical practice. However, their effectiveness for clinical note summarization remains underexplored, and systematic comparisons across models are lacking. This study addresses this gap by benchmarking 16 generative LLMs from major providers, including OpenAI (GPT), DeepSeek, Meta (LLaMA), Google (Gemma), Mistral (Mixtral), and Alibaba (Qwen), on the MIMIC-IV-Note dataset. Both extractive and abstractive summarization approaches were implemented and evaluated with multiple lexical and semantic metrics, including ROUGE, BLEU, METEOR, COMET, and BERTScore. In addition, processing time, cost, and deployment feasibility were assessed to provide a practical perspective on clinical adoption. The results show that Gemma-3-27B achieved the strongest overall performance in extractive summarization, while DeepSeek-R1-70B, Qwen-3-32B, and GPT-4o emerged as the leading models for abstractive summarization; their relative strengths varied depending on whether lexical overlap, semantic adequacy, or fluency was prioritized. Importantly, larger parameter counts did not always translate into better outcomes: smaller models such as LLaMA-3-8B and Gemma-2-9B often produced competitive results with faster runtimes and lower computational costs. By highlighting the trade-offs among performance, efficiency, and deployment context, this study offers practical insights into model selection for clinical note summarization and informs the future integration of LLMs into healthcare workflows.