Accurate detection and grading of pterygium through smartphone by a fusion training model


Abstract

BACKGROUND/AIMS: To improve the accuracy of smartphone-based pterygium screening and detection, we established a fusion training model by blending a large volume of slit-lamp image data with a small proportion of smartphone data.

METHODS: Two datasets were used: a slit-lamp image dataset containing 20 987 images and a smartphone-based image dataset containing 1094 images. The RFRC model (Faster R-CNN based on ResNet101) was used for detection, and the SRU-Net model (U-Net based on SE-ResNeXt50) for segmentation. An OpenCV-based algorithm measured the width, length and area of the pterygium on the cornea.

RESULTS: The detection model (trained on slit-lamp images) achieved a mean accuracy of 95.24%. The fusion segmentation model (trained on smartphone and slit-lamp images) achieved a microaverage F1 score of 0.8981, sensitivity of 0.8709, specificity of 0.9668 and area under the curve (AUC) of 0.9295. For the same group of patients, the fusion model's performance on smartphone images (F1 score of 0.9313, sensitivity of 0.9360, specificity of 0.9613, AUC of 0.9426, accuracy of 92.38%) was close to that of the slit-lamp-trained model on slit-lamp images (F1 score of 0.9448, sensitivity of 0.9165, specificity of 0.9689, AUC of 0.9569, accuracy of 94.29%).

CONCLUSION: Our fusion model achieved high pterygium detection and grading accuracy despite limited smartphone data; its performance is comparable to that of experienced ophthalmologists and generalizes well across different smartphone brands.
