Unsupervised domain adaptation teacher-student network for retinal vessel segmentation via full-resolution refined model



Abstract

Retinal blood vessels are the only blood vessels in the human body that can be observed non-invasively. Changes in vessel morphology are closely associated with hypertension, diabetes, cardiovascular disease, and other systemic diseases, and computers can help doctors identify these changes by automatically segmenting blood vessels in fundus images. If a highly accurate segmentation model trained on one dataset (the source domain) is applied to another dataset (the target domain) with a different data distribution, the segmentation accuracy drops sharply; this is known as the domain shift problem. This paper proposes a novel unsupervised domain adaptation method to address this problem. It uses a teacher-student framework to generate pseudo labels for the target domain images, and trains the student network with a combination of source-domain loss and domain adaptation loss; the weights of the teacher network are then updated as an exponential moving average of the student network's weights and used for target-domain segmentation. We reconstruct the encoder and decoder of the network into a full-resolution refined model by computing the training loss at multiple semantic levels and multiple label resolutions. We validated our method on two publicly available datasets, DRIVE and STARE. From STARE to DRIVE, the accuracy, sensitivity, and specificity are 0.9633, 0.8616, and 0.9733, respectively. From DRIVE to STARE, they are 0.9687, 0.8470, and 0.9785, respectively. Our method outperforms most state-of-the-art unsupervised methods. Compared with domain adaptation methods, it also achieves the best F1 score (0.8053) from STARE to DRIVE and a competitive F1 score (0.8001) from DRIVE to STARE.
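The exponential moving average (EMA) update of the teacher from the student, as described above, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the `ema_update` function, the dict-of-arrays weight representation, and the decay value `alpha=0.999` are all assumptions for demonstration.

```python
import numpy as np

def ema_update(teacher_weights, student_weights, alpha=0.999):
    """Update teacher parameters as an exponential moving average of the
    student parameters: teacher = alpha * teacher + (1 - alpha) * student.

    Both arguments are dicts mapping parameter names to numpy arrays.
    """
    return {
        name: alpha * teacher_weights[name] + (1.0 - alpha) * student_weights[name]
        for name in teacher_weights
    }

# Toy usage: after each student training step, the teacher drifts slowly
# toward the student, which stabilizes the pseudo labels it produces.
teacher = {"w": np.array([1.0, 1.0])}
student = {"w": np.array([0.0, 2.0])}
teacher = ema_update(teacher, student, alpha=0.9)
```

In the mean-teacher family of methods, the slowly updated teacher generates pseudo labels for target-domain images, while the student is trained on the combined source-domain and domain adaptation losses; only the student receives gradients.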
