An intelligent MRI data fusion framework for optimized diagnosis of spinal tumors


Abstract

BACKGROUND: Multi-modal image fusion is essential for combining complementary information from heterogeneous sensors to support downstream vision tasks. However, existing methods often optimize for a single objective, limiting their effectiveness in complex real-world scenarios.

METHODS: We propose TSJNet, a novel Target and Semantic Joint-driven Network for multi-modality image fusion. The architecture integrates a fusion module with detection and segmentation subnetworks, so that both downstream tasks supervise the fused output. A dual-branch Local Significant Feature Extraction (LSFE) module enhances fine-grained cross-modal feature interaction.

RESULTS: TSJNet was evaluated on four public datasets (MSRS, M3FD, RoadScene, and LLVIP), achieving average improvements of +2.84% in object detection (mAP@0.5) and +7.47% in semantic segmentation (mIoU). The model was benchmarked not only against classical machine-learning methods (e.g., DWT + SVM, LBP + SVM) but also against modern deep learning architectures and attention-based fusion models, confirming the effectiveness and novelty of the proposed framework. Five-fold cross-validation on MSRS demonstrated consistent performance (78.21 ± 1.02 mAP, 71.45 ± 1.18 mIoU). A model complexity analysis confirmed efficiency in terms of parameters, FLOPs, and inference time.

CONCLUSION: TSJNet effectively combines task-aware supervision and modality interaction to produce high-quality fused outputs. Its performance, robustness, and efficiency make it a promising solution for real-world multi-modal imaging applications.
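The joint-driven supervision described above can be sketched as a weighted sum of the fusion, detection, and segmentation losses. The function below is a minimal illustration only; the weight names `lambda_det` and `lambda_seg` and their values are assumptions, not taken from the paper.

```python
def joint_loss(l_fusion: float, l_det: float, l_seg: float,
               lambda_det: float = 1.0, lambda_seg: float = 1.0) -> float:
    """Hypothetical joint objective: the fusion loss is augmented with
    detection and segmentation losses so both tasks drive the fusion module.
    The weighting scheme here is illustrative, not the paper's."""
    return l_fusion + lambda_det * l_det + lambda_seg * l_seg

# Example with dummy per-task loss values:
total = joint_loss(0.5, 0.3, 0.2)
print(total)  # 1.0
```

In practice each term would be a differentiable tensor produced by the corresponding subnetwork, and the scalar weights would be tuned to balance task gradients.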
