Incomplete multimodal bone tumor image classification based on feature decoupling and fusion


Abstract

OBJECTIVES: To construct a bone tumor classification model based on feature decoupling and fusion that handles missing modalities and fuses multimodal information, thereby improving classification accuracy.

METHODS: A decoupling completion module was designed to extract local and global bone tumor image features from the available modalities. These features were then decomposed into shared and modality-specific components, and the shared components were used to complete the features of missing modalities, reducing the completion bias caused by inter-modality differences. To address the modality differences that hinder multimodal information fusion, a cross-attention-based fusion module was introduced to strengthen the model's ability to learn cross-modal information and fully integrate the modality-specific features, further improving classification accuracy.

RESULTS: The model was trained and tested on a bone tumor dataset collected from the Third Affiliated Hospital of Southern Medical University. Across the seven available modality combinations, the proposed method achieved an average AUC, accuracy, and specificity of 0.766, 0.621, and 0.793, respectively, representing improvements of 2.6%, 3.5%, and 1.7% over existing methods for handling missing modalities. Performance was best when all modalities were available (AUC of 0.837), and the AUC still reached 0.826 with MRI alone.

CONCLUSIONS: The proposed method effectively handles missing modalities, successfully integrates multimodal information, and shows robust performance in bone tumor classification under a variety of missing-modality scenarios.
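The abstract does not specify the architecture in detail, but the two core ideas (decoupling features into shared and modality-specific parts to complete a missing modality, and cross-attention fusion) can be illustrated with a minimal NumPy sketch. Everything below is an assumption for illustration: the projection matrices `w_shared` and `w_spec` stand in for learned layers, the token counts and dimension are arbitrary, and single-head scaled dot-product attention is used as the simplest form of cross-attention.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def decouple(feats, w_shared, w_specific):
    """Project one modality's features into shared and modality-specific parts."""
    return feats @ w_shared, feats @ w_specific

def cross_attention(query, context):
    """Single-head scaled dot-product cross-attention:
    each query token attends over all context tokens."""
    d_k = query.shape[-1]
    scores = query @ context.T / np.sqrt(d_k)      # (Nq, Nc)
    return softmax(scores, axis=-1) @ context      # (Nq, d)

d = 8
# Hypothetical projections; in the actual model these would be learned.
w_shared = rng.standard_normal((d, d))
w_spec = rng.standard_normal((d, d))

mri = rng.standard_normal((6, d))                  # tokens of an available modality
mri_shared, mri_spec = decouple(mri, w_shared, w_spec)

# Missing-modality completion (illustrative): approximate the absent
# modality's features from the shared component of the available one.
ct_completed = mri_shared.copy()

# Cross-attention fusion: modality-specific features attend to the
# completed features of the other modality.
fused = cross_attention(mri_spec, ct_completed)
print(fused.shape)  # (6, 8)
```

The key property shown here is that completion operates on the shared subspace, where the modalities are assumed to agree, while fusion lets the modality-specific features exchange information via attention weights.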
