Self-Reflective Chest X-Ray Report Generation with Clinical-Aware Detection and Multilevel Readability


Abstract

Clinical documentation demands necessitate automated solutions that balance clinical precision with patient comprehension. This study aims to develop and validate a unified framework that maintains diagnostic accuracy while dynamically adapting medical report complexity to diverse literacy levels, and to establish comprehensive evaluation methodologies for patient-centered medical documentation. We developed a unified framework integrating three innovations: a hybrid detection method combining CheXFusion and Eigen-CAM for clinical finding detection and anatomical localization; an advanced LLaVA-based pipeline synthesizing clinical predictions with anatomical data into contextually rich medical reports; and a self-reflective large language model system that dynamically adapts report complexity across reading levels (6th, 11th, and 18th grade) while preserving clinical integrity. Our methodology introduces a novel evaluation protocol in which the Mistral-small model assesses report quality through consistency, coverage, and fluency metrics. Validation on the MIMIC-CXR and IU X-Ray datasets demonstrated substantial improvements: a 19.78% enhancement in classification accuracy (AUROC), a 17.29% improvement in mean average precision, a 56.88% increase in patient comprehension scores, and a 5.26% gain in diagnostic precision. The framework successfully addresses the challenge of maintaining clinical rigor while enhancing patient accessibility, reducing the documentation burden on healthcare providers and improving patient engagement through comprehensible reporting. This work establishes new standards for automated medical documentation that effectively reconcile clinical precision with patient comprehension in healthcare communication.
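The self-reflective adaptation described above can be sketched as a readability feedback loop: generate a draft, estimate its reading grade, and revise until the estimate matches the target level. The sketch below is a minimal illustration, not the paper's implementation; it substitutes the standard Flesch-Kincaid grade formula for the model's internal readability assessment, uses a crude vowel-group syllable heuristic, and takes a caller-supplied `rewrite_fn` standing in for the LLM revision step (all three are assumptions for illustration).

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count contiguous vowel groups (at least 1 per word).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    # Flesch-Kincaid grade level:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

def adapt_report(draft, target_grade, rewrite_fn, tolerance=1.5, max_rounds=3):
    """Self-reflective loop: reassess readability after each revision and
    stop once the estimated grade is within tolerance of the target
    (e.g. 6, 11, or 18) or the round budget is exhausted."""
    report = draft
    for _ in range(max_rounds):
        grade = fk_grade(report)
        if abs(grade - target_grade) <= tolerance:
            break
        report = rewrite_fn(report, target_grade, grade)  # LLM stand-in
    return report
```

In a full pipeline, `rewrite_fn` would prompt the language model with the current report, its estimated grade, and the target grade, asking for a clinically faithful rewrite at the requested level; the loop guards against drift by re-scoring every revision.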
