PAM: a propagation-based model for segmenting any 3D objects across multi-modal medical images


Abstract

Volumetric segmentation is a major challenge in medical imaging, as current methods require extensive annotations and retraining, limiting transferability across objects. We present PAM, a propagation-based framework that generates 3D segmentations from a minimal 2D prompt. PAM integrates a CNN-based UNet for intra-slice features with Transformer attention for inter-slice propagation, capturing structural and semantic continuity to enable robust cross-object generalization. Across 44 diverse datasets, PAM outperformed MedSAM and SegVol, improving average DSC by 19.3%. It maintained stable performance under variations in prompts (P ≥ 0.5985) and propagation settings (P ≥ 0.6131), while achieving faster inference (P < 0.001) and reducing user interaction time by 63.6%. Gains were strongest for irregular objects, with improvements negatively correlated with object regularity (r < -0.1249). By delivering accurate 3D segmentations from minimal input, PAM lowers reliance on manual annotation and task-specific training, providing an efficient and generalizable tool for automated clinical imaging.
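The abstract describes PAM's core mechanism at a high level: a CNN-based UNet extracts intra-slice features, and Transformer attention propagates segmentation information between adjacent slices, starting from a single 2D prompt. The sketch below illustrates that propagation concept only; it is not the authors' implementation. `TinyUNet2D`, `InterSliceAttention`, and `propagate` are hypothetical stand-ins, and all shapes and hyperparameters are illustrative assumptions.

```python
# A minimal sketch of prompt-to-volume propagation, assuming a toy 2D UNet
# and cross-slice attention. NOT PAM's released code; names and shapes are
# illustrative assumptions based on the abstract's description.
import torch
import torch.nn as nn

class TinyUNet2D(nn.Module):
    """Toy intra-slice feature extractor standing in for PAM's CNN-based UNet."""
    def __init__(self, in_ch=2, feat=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(feat, 1, 1)  # per-pixel mask logits

    def forward(self, x):
        f = self.enc(x)
        return f, self.head(f)

class InterSliceAttention(nn.Module):
    """Toy stand-in for the Transformer attention that links adjacent slices."""
    def __init__(self, feat=32, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat, heads, batch_first=True)

    def forward(self, cur_feat, prev_feat):
        # Flatten the spatial grid into token sequences: (B, H*W, C).
        B, C, H, W = cur_feat.shape
        q = cur_feat.flatten(2).transpose(1, 2)
        kv = prev_feat.flatten(2).transpose(1, 2)
        out, _ = self.attn(q, kv, kv)  # current slice queries the previous slice
        return out.transpose(1, 2).reshape(B, C, H, W)

@torch.no_grad()
def propagate(volume, prompt_slice_idx, prompt_mask, unet, xattn):
    """Propagate one 2D prompt mask through a (D, H, W) volume.

    Returns (D, H, W) mask logits. Purely a conceptual illustration.
    """
    D, H, W = volume.shape
    logits = torch.zeros(D, H, W)
    # Treat the user's prompt as confident logits on the prompted slice.
    logits[prompt_slice_idx] = prompt_mask.float() * 10 - 5

    # Sweep away from the prompted slice in both directions, conditioning
    # each slice on the previous slice's image and predicted mask.
    for order in (range(prompt_slice_idx + 1, D),
                  range(prompt_slice_idx - 1, -1, -1)):
        prev = prompt_slice_idx
        for z in order:
            prev_in = torch.stack([volume[prev], logits[prev].sigmoid()]).unsqueeze(0)
            cur_in = torch.stack([volume[z], torch.zeros(H, W)]).unsqueeze(0)
            prev_feat, _ = unet(prev_in)
            cur_feat, _ = unet(cur_in)
            fused = xattn(cur_feat, prev_feat)  # inter-slice propagation step
            logits[z] = unet.head(fused).squeeze()
            prev = z
    return logits

if __name__ == "__main__":
    unet, xattn = TinyUNet2D(), InterSliceAttention()
    vol = torch.randn(16, 64, 64)           # toy 16-slice volume
    prompt = torch.randn(64, 64) > 1.0      # toy 2D prompt mask on slice 8
    masks = propagate(vol, 8, prompt, unet, xattn)
    print(masks.shape)                      # torch.Size([16, 64, 64])
```

Under this reading, the bidirectional sweep is what lets a single annotated slice cover the whole volume: each slice inherits structural and semantic context from its already-segmented neighbor, which is consistent with the abstract's claim of reduced user interaction.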
