Multimodal data deep learning method for predicting symptomatic pneumonitis caused by lung cancer radiotherapy combined with immunotherapy


Abstract

OBJECTIVES: The combination of immunotherapy and radiotherapy has shown promise in the treatment of locally advanced non-small cell lung cancer (NSCLC). However, the synergy between the two modalities not only enhances antitumor efficacy but also exacerbates lung injury, so a model that accurately predicts radiotherapy- and immunotherapy-related pneumonitis in lung cancer patients is urgently needed. In this study, deep image features extracted by deep learning were combined with radiomic and clinical characteristics to build a deep learning model for forecasting symptomatic pneumonitis (SP, ≥Grade 2) in lung cancer patients receiving thoracic radiotherapy combined with immunotherapy.

METHODS: Predictions were based on CT scans acquired before the start of thoracic radiotherapy. Clinical data were retrospectively collected for 261 lung cancer patients treated with thoracic radiotherapy combined with immunotherapy between January 2018 and May 2023, and pre-radiotherapy CT scans were obtained for all patients. The region of interest (ROI) in the lung parenchyma was delineated separately from the tumor volume, and standard radiomic features were extracted with 3D Slicer. The images were then cropped to a uniform size of 224×224 pixels, and data augmentation techniques, including random horizontal flipping, were applied. The normalized images were fed into a pretrained deep residual network, ResNet34, whose convolutional layers and global average pooling layer were used for deep feature extraction. Five-fold cross-validation was used to build the model, automatically splitting the dataset into training and validation sets at an 8:2 ratio.
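The global average pooling step used for deep feature extraction can be illustrated with a minimal sketch. The shapes below are assumptions based on a standard ResNet34 applied to a 224×224 input (a 512×7×7 convolutional feature map), not values reported in the paper:

```python
import numpy as np

def global_average_pool(feature_map: np.ndarray) -> np.ndarray:
    """Collapse a (C, H, W) feature map to a C-dimensional vector by
    averaging each channel over its spatial dimensions."""
    return feature_map.mean(axis=(1, 2))

# Stand-in for the output of ResNet34's convolutional stages on one
# 224x224 image: 512 channels, each a 7x7 spatial map (assumed shape).
rng = np.random.default_rng(0)
fmap = rng.standard_normal((512, 7, 7))

features = global_average_pool(fmap)
print(features.shape)  # (512,) -- one deep-feature vector per image
```

Each image thus contributes a fixed-length vector that can be concatenated with radiomic and clinical features for the downstream fusion model.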
This process was repeated five times, and the results were aggregated to compute average performance metrics, thereby assessing the overall performance and stability of the model.

RESULTS: The multimodal fusion model developed in this study, which incorporated deep image features, radiomic features, and clinical data, achieved an AUC of 0.922 (95% CI: 0.902-0.945, P < 0.001). This fused model outperformed the deep neural network (DNN) models built on radiomic features alone (AUC 0.811, 95% CI: 0.786-0.832, P < 0.001), clinical information alone (AUC 0.711, 95% CI: 0.682-0.753, P < 0.001), and radiomic features combined with clinical data (AUC 0.872, 95% CI: 0.845-0.896, P < 0.001). By comparison, the random forest (RF) models yielded an AUC of 0.576 (95% CI: 0.523-0.628) with radiomic features, 0.525 (95% CI: 0.479-0.572) with clinical information, and a modest improvement to 0.611 (95% CI: 0.566-0.652) when the two were combined.

CONCLUSIONS: In this study, a deep neural network-based multimodal fusion model improved prediction performance compared with traditional radiomics and accurately predicted Grade 2 or higher SP in lung cancer patients undergoing radiotherapy combined with immunotherapy.
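The five-fold cross-validation scheme and the random forest baseline described above can be sketched as follows. This is a minimal illustration using synthetic data in place of the study's radiomic and clinical feature table; the sample count matches the 261 patients, but all other parameters are assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

# Synthetic stand-in for the combined radiomic + clinical feature table
# (261 patients as in the study; feature count is an assumption).
X, y = make_classification(n_samples=261, n_features=20, random_state=0)

# Five folds give the 8:2 train/validation split described in METHODS.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

aucs = []
for train_idx, val_idx in skf.split(X, y):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    prob = clf.predict_proba(X[val_idx])[:, 1]
    aucs.append(roc_auc_score(y[val_idx], prob))

# Aggregate the per-fold results into an average performance metric.
mean_auc = float(np.mean(aucs))
print(f"mean validation AUC over 5 folds: {mean_auc:.3f}")
```

The same loop structure applies when the classifier is swapped for the DNN fusion model; only the per-fold fit/predict calls change.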
