Multi-dimensional dense attention network for pixel-wise segmentation of optic disc in colour fundus images

Abstract

BACKGROUND: Segmentation of retinal structures such as blood vessels, the Optic Disc (OD), and the Optic Cup (OC) enables early detection of retinal pathologies such as Diabetic Retinopathy (DR) and Glaucoma.

OBJECTIVE: Accurate segmentation of the OD remains challenging because of blurred boundaries, vessel occlusion, and other distractions. Deep learning has progressed rapidly in pixel-wise image segmentation, and many network models have been proposed for end-to-end segmentation. However, these models still suffer from limitations such as limited context representation, inadequate feature processing, and a restricted receptive field, which lead to the loss of local detail and blurred boundaries.

METHODS: A multi-dimensional dense attention network (MDDA-Net) is proposed for pixel-wise segmentation of the OD in retinal images to address these issues and produce more complete and accurate segmentation results. A dense attention block is introduced to capture strong contextual information where context representation is otherwise limited. A triple-attention (TA) block is introduced to better model relationships between pixels and extract more comprehensive information, addressing insufficient feature processing. In addition, a multi-scale context fusion (MCF) module is proposed to aggregate contexts at multiple scales.

RESULTS: The proposed approach is evaluated thoroughly on three challenging datasets. On the MESSIDOR and ORIGA datasets, MDDA-Net achieves accuracies of 99.28% and 98.95%, respectively.

CONCLUSION: The experimental results show that MDDA-Net outperforms state-of-the-art deep learning models under the same experimental conditions.
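The abstract describes the TA block and the MCF module only at a high level, so the sketch below is illustrative rather than a reproduction of the paper's design. It assumes the TA block applies channel, spatial, and pixel-wise gating in sequence with a residual connection, and that the MCF module fuses parallel dilated convolutions; all class names, layer choices, and hyperparameters here are assumptions.

```python
import torch
import torch.nn as nn

class TripleAttention(nn.Module):
    """Hypothetical triple-attention (TA) block: channel, spatial, and
    pixel-wise gating applied in sequence, plus a residual connection."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        # Channel attention: squeeze spatial dims, re-weight channels.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: collapse channels, re-weight locations.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        # Pixel-wise attention: per-pixel, per-channel gating.
        self.pixel_gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        out = x * self.channel_gate(x)
        out = out * self.spatial_gate(out)
        out = out * self.pixel_gate(out)
        return out + x  # residual connection preserves the original features


class MultiScaleContextFusion(nn.Module):
    """Hypothetical multi-scale context fusion (MCF): parallel dilated
    convolutions whose outputs are concatenated and fused back to the
    input channel count."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)                       # a batch of feature maps
    out = MultiScaleContextFusion(64)(TripleAttention(64)(feats))
    print(out.shape)                                         # torch.Size([2, 64, 32, 32])
```

The sequencing of the two modules and the dilation rates are placeholders; the actual MDDA-Net likely integrates these blocks at multiple encoder-decoder stages, which the abstract does not specify.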
