Modality-projection universal model for comprehensive full-body medical imaging segmentation



Abstract

The integration of deep learning in medical imaging has significantly advanced diagnostic, therapeutic, and research outcomes. However, applying universal models across multiple modalities remains challenging due to inherent inter-modality variability. Here we present the Modality Projection Universal Model (MPUM), trained on 861 subjects, which dynamically adapts to diverse imaging modalities through a modality-projection strategy. MPUM achieves state-of-the-art whole-body organ segmentation, providing rapid localization for computer-aided diagnosis and precise anatomical quantification to support clinical decision-making. A controller-based convolutional layer further enables saliency-map visualization, enhancing the model's interpretability for clinical use. Beyond segmentation, MPUM reveals metabolic correlations along the brain-body axis and between distinct brain regions, providing insights into systemic and physiological interactions from a whole-body perspective. Here we show that this universal framework accelerates diagnosis, facilitates large-scale imaging analysis, and bridges anatomical and metabolic information, enabling the discovery of cross-organ disease mechanisms and advancing integrative brain-body research.
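The abstract does not describe the implementation, but the "modality-projection strategy" and "controller-based convolutional layer" can be illustrated with a minimal sketch: a small controller network maps a learned modality embedding to the weights of a convolutional segmentation head, so a shared backbone can be re-parameterized per modality. This is an assumption about the general technique, not the authors' code; the class name ControllerConvHead, the layer sizes, and the 1x1x1-kernel choice are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ControllerConvHead(nn.Module):
    """Hypothetical controller-based convolutional head (illustration only).

    A small MLP ("controller") maps a modality embedding to the weights and
    biases of a 1x1x1 convolution, which is then applied to shared backbone
    features to produce per-modality segmentation logits.
    """

    def __init__(self, feat_channels: int, embed_dim: int, num_classes: int):
        super().__init__()
        self.feat_channels = feat_channels
        self.num_classes = num_classes
        # Number of parameters in the generated 1x1x1 conv: weights + biases.
        n_params = feat_channels * num_classes + num_classes
        self.controller = nn.Sequential(
            nn.Linear(embed_dim, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, n_params),
        )

    def forward(self, feats: torch.Tensor, modality_embed: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, D, H, W) shared features; modality_embed: (B, E)
        params = self.controller(modality_embed)
        w, b = params.split(
            [self.feat_channels * self.num_classes, self.num_classes], dim=1
        )
        logits = []
        for i in range(feats.size(0)):
            # Reshape the predicted parameters into a conv kernel for sample i.
            weight = w[i].view(self.num_classes, self.feat_channels, 1, 1, 1)
            logits.append(F.conv3d(feats[i:i + 1], weight, bias=b[i]))
        return torch.cat(logits, dim=0)  # (B, num_classes, D, H, W)


if __name__ == "__main__":
    head = ControllerConvHead(feat_channels=32, embed_dim=16, num_classes=5)
    feats = torch.randn(2, 32, 8, 8, 8)    # shared backbone features
    modality = torch.randn(2, 16)          # e.g. learned CT / PET / MRI embeddings
    print(head(feats, modality).shape)     # torch.Size([2, 5, 8, 8, 8])
```

Because the segmentation kernel is generated rather than fixed, its dependence on the modality embedding can also be inspected (for example via gradients with respect to the embedding), which is one plausible route to the saliency-map interpretability the abstract mentions.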
