Enhanced lung cancer subtype classification using attention-integrated DeepCNN and radiomic features from CT images: a focus on feature reproducibility


Abstract

OBJECTIVE: This study aims to assess a hybrid framework that combines radiomic features with deep learning and attention mechanisms to improve the accuracy of classifying lung cancer subtypes from CT images.

MATERIALS AND METHODS: A dataset of 2725 lung cancer images was used, covering several subtypes: adenocarcinoma (552 images), squamous cell carcinoma (SCC) (380 images), small cell lung cancer (SCLC) (307 images), large cell carcinoma (215 images), and pulmonary carcinoid tumors (180 images). The images were extracted as 2D slices from 3D CT scans, with tumor-containing slices selected from scans obtained across five healthcare centers. The number of slices per patient varied between 7 and 30, depending on tumor visibility. CT images were preprocessed using standardization, cropping, and Gaussian smoothing to ensure consistency across scans from the different imaging instruments used at the centers. Radiomic features, including first-order statistics (FOS), shape-based, and texture-based features, were extracted using the PyRadiomics library. A DeepCNN architecture, integrated with attention mechanisms in the second convolutional block, was used for deep feature extraction, focusing on diagnostically important regions. The dataset was split into training (60%), validation (20%), and testing (20%) sets. Feature selection techniques, including Non-negative Matrix Factorization (NMF) and Recursive Feature Elimination (RFE), were applied, and multiple machine learning models, including XGBoost and Stacking, were evaluated using accuracy, sensitivity, and area under the curve (AUC) metrics. The model's reproducibility was validated using intraclass correlation coefficient (ICC) analysis across different imaging conditions.

RESULTS: The hybrid model, which integrates DeepCNN with attention mechanisms, outperformed traditional methods. It achieved a testing accuracy of 92.47%, an AUC of 93.99%, and a sensitivity of 92.11%. XGBoost with NMF showed the best performance across all models, and the combination of radiomic and deep features improved classification further. Attention mechanisms played a key role in enhancing model performance by focusing on relevant tumor areas, reducing misclassification caused by irrelevant features. They also improved the performance of the 3D Autoencoder, boosting its AUC to 93.89% and accuracy to 93.24%.

CONCLUSIONS: This study shows that combining radiomic features with deep learning, especially when enhanced by attention mechanisms, creates a powerful and accurate framework for classifying lung cancer subtypes.

Clinical trial number: Not applicable.
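The abstract does not specify which attention variant was integrated into the second convolutional block. As an illustrative sketch only, the snippet below implements a squeeze-and-excitation style channel-attention module (a common choice for reweighting feature-map channels toward diagnostically relevant regions) in plain NumPy; all shapes, weights, and names here are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap, w1, w2):
    """Squeeze-and-excitation style channel attention (illustrative).

    fmap: (C, H, W) feature map from a convolutional block.
    w1:   (C // r, C) squeeze weights (r = reduction ratio).
    w2:   (C, C // r) excitation weights.
    """
    squeeze = fmap.mean(axis=(1, 2))                       # global average pool -> (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # per-channel weights in (0, 1)
    return fmap * excite[:, None, None]                    # reweight each channel

# Toy example with an 8-channel 16x16 feature map and reduction ratio 2.
rng = np.random.default_rng(0)
C, H, W, r = 8, 16, 16, 2
fmap = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
out = channel_attention(fmap, w1, w2)
print(out.shape)  # (8, 16, 16)
```

Because the excitation weights lie in (0, 1), the module can only attenuate channels, letting training push uninformative channels toward zero while preserving the ones the classifier relies on.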

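The abstract also does not state which ICC form was used for the reproducibility analysis. A common choice for radiomic feature test-retest studies is ICC(2,1) (two-way random effects, absolute agreement, single measurement); the sketch below computes it from a subjects-by-conditions matrix and is an assumption for illustration, not the authors' exact analysis.

```python
import numpy as np

def icc2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.

    Y: (n_subjects, k_conditions) values of one radiomic feature, where each
    column is the same feature re-extracted under a different imaging condition.
    """
    n, k = Y.shape
    mean_subj = Y.mean(axis=1)
    mean_cond = Y.mean(axis=0)
    grand = Y.mean()
    ssr = k * np.sum((mean_subj - grand) ** 2)   # between-subject sum of squares
    ssc = n * np.sum((mean_cond - grand) ** 2)   # between-condition sum of squares
    sse = np.sum((Y - mean_subj[:, None] - mean_cond[None, :] + grand) ** 2)
    msr = ssr / (n - 1)                          # subject mean square
    msc = ssc / (k - 1)                          # condition mean square
    mse = sse / ((n - 1) * (k - 1))              # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# A feature measured identically under both conditions is perfectly reproducible:
Y = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])  # 3 subjects x 2 conditions
print(round(icc2_1(Y), 4))  # 1.0
```

In a reproducibility screen, features whose ICC falls below a chosen threshold (often around 0.75 to 0.9) would be discarded before feature selection, so that the downstream classifier only sees features stable across scanners.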