Abstract
Cross-modal medical image segmentation methods tend to ignore the dependence between spatial features and frequency features, and fine-grained frequency features are not fused effectively. To address these problems, this paper proposes a cross-modal dual-domain bi-directional feature interaction segmentation network, DBW-Net. The main contributions are as follows. First, DBW-Net is designed with three encoders and one decoder; the three encoders extract features from PET/CT, PET, and CT, respectively. Second, a Cross-Modal Feature Extractor "from frequency to spatial" (CMFE(F->S)) is designed in the encoder. This module converts the spatial feature map into multiple spectral maps via the 2D Discrete Cosine Transform (2D DCT), and multi-frequency cross-dimension attention captures the correlations among the spectral-map features across different dimensions, generating a refined frequency attention map. The refined frequency attention map is used to enhance modal features, fuse cross-modal interactions, and recalibrate the input feature map. Third, a Cross-Modal Feature Coupler "from spatial to frequency" (CMFC(S->F)) is designed in the bottleneck layer. This module maps multi-modal information into the spatial and frequency domains through a spatial-frequency feature extractor, and cross-domain coupled attention bridges the semantic gap between multi-modal fine-grained frequency features and spatial features. Finally, to verify the effectiveness of the proposed method, experiments are carried out on a clinical multi-modal lung tumor dataset and the BraTS2019 brain tumor public dataset. The experimental results show that for lung tumor segmentation, MIoU, Dice, VOE, RVD, and Recall improve by 3.02%, 2.32%, 4.66%, 2.63%, and 4.16%, respectively.
For brain tumor segmentation, MIoU, Dice, VOE, RVD, and Recall improve by 3.06%, 2.31%, 4.68%, 2.64%, and 5.76%, respectively. These results show that the model achieves high precision and relatively low redundancy when segmenting lesions with complex shapes. It significantly improves the segmentation accuracy and robustness in lesion areas, and provides technical support for the accurate identification and diagnosis of early lesions.
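The core frequency-to-spatial idea in the abstract can be sketched briefly. The following is a minimal illustration, not the paper's actual CMFE(F->S) module: it converts a spatial feature map into spectral maps with a 2D DCT, collects a few low-frequency coefficients per channel as descriptors, and turns them into per-channel attention weights that recalibrate the input. The choice of coefficient positions, the linear mixing weights, and the sigmoid gating are assumptions made here for illustration only.

```python
import numpy as np
from scipy.fft import dctn

def frequency_attention(feat, num_freq=4, seed=0):
    """Toy frequency-attention sketch (assumed, not the paper's design).

    feat: (C, H, W) spatial feature map.
    Returns (recalibrated feature map, per-channel attention weights).
    """
    C, H, W = feat.shape
    # 2D DCT over the spatial axes yields one spectral map per channel.
    spec = dctn(feat, axes=(1, 2), norm="ortho")          # (C, H, W)
    # Pick a few low-frequency coefficients per channel as descriptors.
    coords = [(0, 0), (0, 1), (1, 0), (1, 1)][:num_freq]
    desc = np.stack([spec[:, u, v] for (u, v) in coords], axis=1)  # (C, F)
    # Stand-in for learned cross-dimension attention: fixed linear map + sigmoid.
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(num_freq) / np.sqrt(num_freq)
    attn = 1.0 / (1.0 + np.exp(-(desc @ w)))              # (C,), values in (0, 1)
    # Recalibrate the input feature map channel-wise.
    return feat * attn[:, None, None], attn

feat = np.random.default_rng(1).standard_normal((8, 16, 16))
refined, attn = frequency_attention(feat)
```

In the actual network the mixing weights would be learned and the attention would interact across modalities; this sketch only shows the DCT-based spectral decomposition and channel recalibration step.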