An unsupervised three-dimensional medical image registration method based on shifted window Transformer and convolutional neural network


Abstract

Three-dimensional (3D) deformable image registration plays a critical role in 3D medical image processing. This technique aligns images from different time points, modalities, or individuals in 3D space, enabling the comparison and fusion of anatomical or functional information. To simultaneously capture the local details of anatomical structures and the long-range dependencies in 3D medical images, while reducing the high cost of manual annotation, this paper proposes an unsupervised 3D medical image registration method based on the shifted window (Swin) Transformer and a convolutional neural network (CNN), termed the Swin Transformer-CNN hybrid network (STCHnet). In the encoder, STCHnet uses the Swin Transformer and the CNN to extract global and local features from 3D images, respectively, and optimizes the feature representation through feature fusion. In the decoder, STCHnet uses the Swin Transformer to integrate information globally and the CNN to refine local details, reducing the complexity of the deformation field while maintaining registration accuracy. Experiments on the Information eXtraction from Images (IXI) and Open Access Series of Imaging Studies (OASIS) datasets, together with qualitative and quantitative comparisons against existing registration methods, demonstrate that the proposed STCHnet outperforms the baseline methods in terms of the Dice similarity coefficient (DSC) and the standard deviation of the log-Jacobian determinant (SDlogJ), achieving improved 3D medical image registration performance under unsupervised conditions.
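The encoder described above fuses a global feature map (from the Swin Transformer branch) with a local one (from the CNN branch). A minimal NumPy sketch of one common fusion pattern, channel concatenation followed by a learned 1×1×1 projection, is shown below; the shapes, channel counts, and the `fuse_features` helper are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_features(f_global, f_local, weight):
    """Fuse two (C, D, H, W) feature volumes by channel concatenation
    followed by a learned 1x1x1 projection (hypothetical fusion step).

    weight: (C_out, 2*C) matrix acting as a 1x1x1 convolution kernel.
    """
    fused = np.concatenate([f_global, f_local], axis=0)   # (2C, D, H, W)
    c2, d, h, w = fused.shape
    flat = fused.reshape(c2, -1)                          # (2C, D*H*W)
    out = weight @ flat                                   # (C_out, D*H*W)
    return out.reshape(-1, d, h, w)

# Toy example: 8-channel global and local features on a 4x4x4 volume.
rng = np.random.default_rng(0)
C, D, H, W = 8, 4, 4, 4
f_g = rng.standard_normal((C, D, H, W))   # stand-in for Transformer features
f_l = rng.standard_normal((C, D, H, W))   # stand-in for CNN features
W_proj = rng.standard_normal((C, 2 * C)) / np.sqrt(2 * C)

fused = fuse_features(f_g, f_l, W_proj)
print(fused.shape)  # (8, 4, 4, 4): same spatial grid, projected channels
```

A 1×1×1 projection keeps the spatial resolution unchanged while letting the network learn how to weight the global and local channels; in practice this step would be a trainable convolution layer rather than a fixed matrix.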
