Application of multi-scale feature extraction and explainable machine learning in chest X-ray position evaluation within an integrated learning framework



Abstract

OBJECTIVES: This study presents a novel deep learning-machine learning fusion network for quantitative and interpretable assessment of chest X-ray positioning, aiming to identify critical factors in patient positioning. MATERIALS AND METHODS: In this retrospective study, we analyzed 3300 chest radiographs from a Chinese medical institution, collected between March 2021 and December 2022. The dataset was partitioned into the XJ_chest_21 subset, used to train the automated segmentation model, and the XJ_chest_22 subset, used to validate three classification models: Random Forest Fusion Network (RFFN), Threshold Classification (TC), and Multivariate Logistic Regression (MLR). After five positioning indicators were measured automatically in the images, the data were input into the models to assess positioning quality. We compared the performance metrics of the three classification models, including AUC, accuracy, sensitivity, and specificity. SHAP (SHapley Additive exPlanations) was utilized to interpret feature importance in the decision-making process of the RFFN model. We also evaluated measurement consistency between the Automated Measurement Model (AMM) and radiologists. RESULTS: U-net++ demonstrated significantly superior performance compared to U-net in multi-target segmentation accuracy (mean Dice: 0.926 vs. 0.812). The five positioning metrics showed excellent agreement between the AMM and reference standards (r = 0.93). ROC analysis indicated that RFFN performed significantly better in overall image quality classification (AUC, 0.982; 95% CI: 0.963, 0.993) than both TC (AUC, 0.959; 95% CI: 0.923, 0.995) and MLR (AUC, 0.953; 95% CI: 0.933, 0.974). CONCLUSION: Our study introduces a novel segmentation-based random forest fusion network that achieves accurate image positioning classification and identifies critical operational factors. Furthermore, the clinical interpretability of the fusion model was enhanced through the application of the SHAP method.
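The segmentation comparison above is reported as mean Dice (U-net++ 0.926 vs. U-net 0.812). As a reference for how that metric is defined, here is a minimal NumPy sketch of the Dice similarity coefficient for binary masks; the function name is illustrative and not taken from the paper:

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity: 2*|A∩B| / (|A| + |B|) for two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```

For multi-target segmentation, the per-structure Dice scores would typically be averaged to yield the mean Dice values quoted in the results.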
KEY POINTS: Question: How can AI-driven interpretable methods be utilized to assess patient positioning in chest radiography and enhance radiographers' accuracy? Findings: The Random Forest Fusion Network (RFFN) outperformed Threshold Classification (TC) and Multivariate Logistic Regression (MLR) in positioning classification (AUC = 0.98). Clinical relevance: An integrated framework that combines deep learning and machine learning achieves accurate image positioning classification, identifies critical operational factors, enables expert-level image quality assessment, and delivers automated feedback to radiographers.
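The ROC analysis reports AUCs with 95% confidence intervals. One common way to obtain such intervals is bootstrap resampling of the test set; the sketch below uses scikit-learn's `roc_auc_score` and is only an illustration of the technique, since the abstract does not state how the paper's intervals were computed:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """Point-estimate AUC plus a percentile bootstrap (1-alpha) CI."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    rng = np.random.default_rng(seed)
    n = len(y_true)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample cases with replacement
        if len(np.unique(y_true[idx])) < 2:
            continue  # AUC undefined when a resample has one class only
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return roc_auc_score(y_true, y_score), lo, hi
```

Comparing models on the same bootstrap resamples (paired bootstrap) is the usual way to test whether one AUC is significantly higher than another.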
