MHAU-Net: a multi-scale hybrid attention U-shaped network for the segmentation of MRI breast tumors



Abstract

BACKGROUND: Breast tumor segmentation is a critical aspect of magnetic resonance imaging (MRI)-based breast disease diagnosis. Numerous networks and algorithms, including U-Net and its enhancements, have been proposed for breast tumor segmentation. However, existing methods have certain shortcomings: they extract multi-scale contextual information insufficiently, which makes it difficult to adapt to tumors of different sizes and to distinguish tumor boundaries from surrounding tissues, and their feature extraction lacks specificity, leaving it prone to interference from irrelevant information outside the tumor region. This study aimed to address these challenges and achieve the accurate, automated segmentation of breast tumors in MRI scans.

METHODS: A new three-dimensional (3D) breast tumor segmentation network named the multi-scale hybrid attention U-shaped network (MHAU-Net) was designed. The network used four sets of atrous convolutions with different dilation rates to extract multi-scale context information. Global pooling and single-channel convolution structures were employed to construct channel and spatial attention blocks. The network then integrated the four sets of atrous convolutions with the spatial and channel attention blocks to extract hybrid attention features. Compared to existing MRI segmentation networks for breast tumors, MHAU-Net demonstrated superior performance in extracting informative features and adapting to tumors of diverse sizes and shapes.

RESULTS: To evaluate the proposed approach, we curated a large-scale breast MRI dataset comprising 906 3D images. A comparative analysis with seven commonly used segmentation networks revealed the superior performance of our method. Our network achieved a dice similarity coefficient (DSC) of 84.1%±2.1% and an intersection over union (IoU) of 74.2%±3.4%, representing 6.0% and 7.1% improvements over the baseline 3D U-Net.
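The two metrics reported above, DSC and IoU, are computed directly from the overlap between a predicted binary mask and the ground-truth mask. A minimal sketch (the flat 0/1 masks below are illustrative toy data, not from the paper's dataset):

```python
def dice_and_iou(pred, truth):
    """Dice similarity coefficient and intersection-over-union for
    binary segmentation masks given as flat 0/1 sequences of equal length.
    Dice = 2|P∩T| / (|P|+|T|); IoU = |P∩T| / |P∪T|."""
    inter = sum(p & t for p, t in zip(pred, truth))  # overlapping foreground
    p_sum, t_sum = sum(pred), sum(truth)             # foreground sizes
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if p_sum + t_sum else 1.0
    iou = inter / union if union else 1.0            # empty masks: perfect match
    return dice, iou

# Toy masks: 3 predicted foreground voxels, 3 true, 2 overlapping.
pred  = [1, 1, 1, 0, 0, 0]
truth = [0, 1, 1, 1, 0, 0]
dice, iou = dice_and_iou(pred, truth)
print(round(dice, 3), round(iou, 3))  # → 0.667 0.5
```

Note that DSC is always at least as large as IoU for the same masks, which is consistent with the 84.1% DSC versus 74.2% IoU reported above.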
Additionally, our method achieved DSC values of 85.7%±1.6%, 84.3%±2.8%, 86.7%±1.7%, and 86.3%±1.5% for single, small, large, and mass tumors, respectively.

CONCLUSIONS: Our results highlight the superior overall performance of the proposed method, and show its ability to adapt to various types of tumor images. This study establishes a solid foundation for further exploring the application value of deep learning in breast cancer diagnosis.
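The multi-scale mechanism described in the METHODS rests on atrous (dilated) convolution: the kernel taps are spaced `rate` samples apart, so the same small kernel covers a wider context at higher dilation rates without adding parameters. A minimal 1D sketch of the dilation idea only (the paper's network uses 3D convolutions with four dilation rates; this toy example is not its implementation):

```python
def dilated_conv1d(signal, kernel, rate):
    """Valid (no-padding) 1D dilated convolution: tap j of the kernel
    reads signal[i + j * rate], enlarging the receptive field to
    (len(kernel) - 1) * rate + 1 samples."""
    span = (len(kernel) - 1) * rate  # receptive field minus one
    return [
        sum(kernel[j] * signal[i + j * rate] for j in range(len(kernel)))
        for i in range(len(signal) - span)
    ]

signal = [0, 0, 0, 1, 0, 0, 0, 0]  # single impulse
kernel = [1, 1, 1]                 # same 3 weights at every rate

# Rate 1 covers 3 samples; rate 2 covers 5 with the same 3 weights.
print(dilated_conv1d(signal, kernel, 1))  # → [0, 1, 1, 1, 0, 0]
print(dilated_conv1d(signal, kernel, 2))  # → [0, 1, 0, 1]
```

Running several such convolutions with different rates in parallel, as the four atrous-convolution sets above do, yields responses at multiple receptive-field sizes from the same input.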
