Fine-tuned lightweight language models for structured extraction of liver cancer imaging free-text reports: a comparative analysis with existing large language models


Abstract

BACKGROUND: Organizing free-text patient data into a structured format is labor-intensive and time-consuming. This study aims to evaluate the effectiveness of a fine-tuned lightweight language model in structuring liver cancer imaging reports. METHODS: A retrospective dataset of 2,780 liver imaging reports from Sun Yat-sen University Cancer Center (2012–2022), including cases of primary liver cancer and benign liver disease, was collected. Three key entries—Number of Malignant Tumors (NMT), Diameter of the Largest Tumor (DLT), and Vascular Invasion (VI)—were annotated by three radiologists and subsequently reviewed and calibrated by a senior oncologist to ensure data reliability. The annotated dataset was randomly split into training, validation, and test sets at a ratio of 7:1:2. A T5-based lightweight model with 250M parameters (Liver-T5) was fine-tuned on these data. Performance was evaluated using accuracy and macro-F1 metrics. A comparative analysis with large language models (LLMs) such as ChatGLM4, Qianwen2.0, and Llama3.1 was conducted. RESULTS: The fine-tuned Liver-T5 model outperformed larger LLMs in Exact Match (EM) rate and key evaluation metrics, achieving an EM of 0.8907 and high accuracy for NMT (0.9355) and VI (0.9910). Specifically, for NMT extraction, Liver-T5 achieved an accuracy of 0.9355, outperforming large models such as Qianwen72B (accuracy 0.9140), LLaMA3 (accuracy 0.8961), and ChatGLM4 (accuracy 0.8226). In VI extraction, Liver-T5 achieved the highest accuracy of 0.9910, significantly surpassing the other models, with Qianwen72B, LLaMA3, and ChatGLM4 achieving accuracies of 0.9606, 0.9462, and 0.7581, respectively. A higher proportion of schema-nonconforming outputs was observed in large general-purpose models (e.g., LLaMA3), while Liver-T5 more consistently generated schema-compliant predictions.
CONCLUSIONS: The fine-tuned lightweight language model demonstrates superior accuracy and efficiency in structuring liver cancer imaging reports compared to larger LLMs. This capability addresses critical challenges in clinical workflows by converting unstructured data into structured formats.
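The abstract evaluates models on Exact Match over three fixed fields (NMT, DLT, VI) and notes that schema-nonconforming outputs are a failure mode for general-purpose LLMs. The sketch below illustrates, under assumed conventions not specified in the abstract (the `Field: value; …` output format, the field names as output keys, and the rule that a missing field counts as a schema violation), how generated text could be parsed against such a schema and how an EM rate could be computed:

```python
import re

# The three annotated entries from the study; the output serialization
# format ("NMT: ...; DLT: ...; VI: ...") is an assumption for illustration.
FIELDS = ("NMT", "DLT", "VI")

def parse_prediction(text):
    """Parse a generated string such as 'NMT: 2; DLT: 3.5 cm; VI: absent'
    into a field->value dict. Returns None for schema-nonconforming
    outputs (any required field missing)."""
    out = {}
    for field in FIELDS:
        m = re.search(rf"{field}\s*:\s*([^;]+)", text)
        if m is None:
            return None  # schema violation, counted as a miss
        out[field] = m.group(1).strip()
    return out

def exact_match(preds, golds):
    """Fraction of reports where all three parsed fields equal the
    annotated gold values; unparseable predictions never match."""
    hits = sum(1 for p, g in zip(preds, golds) if p is not None and p == g)
    return hits / len(golds)
```

For example, `parse_prediction("NMT: 2; DLT: 3.5 cm; VI: absent")` yields `{"NMT": "2", "DLT": "3.5 cm", "VI": "absent"}`, while free-form text lacking the fields yields `None`, which is how a stricter schema-compliance rate would penalize general-purpose models in this framing.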
