Quantifying the Characteristics of Diabetic Retinopathy in Macular Optical Coherence Tomography Angiography Images: A Few-Shot Learning and Explainable Artificial Intelligence Approach


Abstract

BACKGROUND: Early detection and accurate staging of diabetic retinopathy (DR) are essential for preventing vision loss. Optical coherence tomography angiography (OCTA) images provide detailed insights into the retinal vasculature, revealing intricate changes that occur as DR progresses. However, interpreting these complex images requires significant expertise and is often time-intensive. Deep learning techniques have the potential to automate DR analysis, but they typically require large datasets for effective training. To address the challenge of limited data in this emerging imaging field, a combined approach using few-shot learning (FSL) and self-attention mechanisms within explainable AI (XAI) was explored.
OBJECTIVE: To investigate and evaluate the potential of an FSL-self-attention XAI approach to improve the accuracy of DR staging classification using OCTA images.
METHODS: A total of 206 OCTA images, comprising 104 non-proliferative diabetic retinopathy (NPDR) and 102 proliferative diabetic retinopathy (PDR) cases, were analyzed using the FSL method. Three pre-trained networks (ResNet-50, DenseNet-161, and MobileNet-v2) were employed, with the top-performing model subsequently integrated with the Match-Them-Up Network (MTUNet) to provide explainable interpretations via a self-attention mechanism. Model performance was evaluated using standard metrics, including accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC-ROC). The performance of the MTUNet model was assessed by computing pattern-matching scores for the PDR and NPDR classes.
RESULTS: The ResNet-50 pre-trained model in FSL demonstrated the best overall performance, achieving an accuracy of 76.17%, a sensitivity of 81.83%, a specificity of 70.5%, and an AUC of 0.82 in classifying DR stages. MTUNet provided pattern-matching scores of 0.77 and 0.75 for the PDR and NPDR classes, respectively.
CONCLUSIONS: FSL and self-attention mechanisms in XAI offer promising approaches for accurate DR stage classification, especially in data-limited scenarios. This could potentially facilitate early DR detection and inform clinical decision-making.
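The evaluation metrics reported above (accuracy, sensitivity, specificity, and AUC-ROC) for the binary PDR-vs-NPDR classification can be computed as sketched below. This is an illustrative implementation, not the authors' code; the labels and scores in the example are made-up toy values, not study data.

```python
def binary_metrics(y_true, y_score, threshold=0.5):
    """Accuracy, sensitivity, specificity, and AUC for a binary classifier.

    y_true:  list of 0/1 labels (1 = positive class, e.g. PDR)
    y_score: list of predicted probabilities for the positive class
    """
    y_pred = [1 if s >= threshold else 0 for s in y_score]
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # true positive rate
    specificity = tn / (tn + fp) if tn + fp else 0.0  # true negative rate

    # AUC via the Mann-Whitney U formulation: the probability that a
    # randomly chosen positive case is scored above a random negative one.
    pos = [s for s, t in zip(y_score, y_true) if t == 1]
    neg = [s for s, t in zip(y_score, y_true) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg)) if pos and neg else 0.0

    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "auc": auc}

# Toy example: 4 positives (PDR) and 4 negatives (NPDR)
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_score = [0.9, 0.8, 0.6, 0.4, 0.7, 0.3, 0.2, 0.1]
print(binary_metrics(y_true, y_score))
```

In a data-limited FSL setting such as this one, the threshold-free AUC is especially informative, since accuracy, sensitivity, and specificity all shift with the chosen decision threshold on only ~200 images.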
