ResTRANS3D hybrid framework for data-efficient 3D medical image segmentation


Abstract

Deep learning has become an important tool for 3D medical image segmentation, where learning effective representations from limited labeled data remains essential for practical deployment. Here, we present ResTRANS3D, a data-efficient self-supervised hybrid framework that combines a 3D-ResNet encoder with a multi-scale Transformer through a residual interaction mechanism to jointly model local spatial structures and long-range contextual dependencies. A dynamic position learning module generates adaptive positional representations conditioned on multi-scale features, while selective self-attention reduces the computational cost of global attention. The model is pretrained using a dual self-supervised strategy that integrates contrastive learning and image reconstruction. Experiments on multiple public 3D medical image benchmarks show that ResTRANS3D supports effective downstream segmentation, particularly when labeled data are limited. These results highlight the potential of hybrid representation learning to improve data-efficient 3D medical image analysis.
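The abstract describes a dual self-supervised pretraining objective that combines contrastive learning with image reconstruction. As a minimal illustrative sketch (not the paper's implementation), the combined loss can be written as an InfoNCE term over embeddings of two augmented views plus a voxel-wise reconstruction term; the temperature `tau` and weight `lam` below are assumed hyperparameters, not values from the paper.

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """Contrastive InfoNCE loss between two views' embeddings, shape (N, D)."""
    # L2-normalize so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                       # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Matching augmented views sit on the diagonal (positive pairs).
    return -np.mean(np.diag(log_prob))

def reconstruction_loss(pred, target):
    """Mean-squared error over reconstructed voxels."""
    return np.mean((pred - target) ** 2)

def dual_ssl_loss(z1, z2, recon, volume, lam=0.5):
    """Weighted sum of the two objectives; lam is an assumed hyperparameter."""
    return info_nce(z1, z2) + lam * reconstruction_loss(recon, volume)
```

In this sketch, embeddings of aligned augmentations produce a much lower contrastive loss than unrelated embeddings, which is the signal the pretraining stage exploits before the labeled segmentation fine-tuning.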
