Crowdsourcing to Evaluate Fundus Photographs for the Presence of Glaucoma


Abstract

PURPOSE: To assess the accuracy of crowdsourcing for grading optic nerve images for glaucoma using Amazon Mechanical Turk before and after training modules.

MATERIALS AND METHODS: Images (n=60) from 2 large population studies were graded for glaucoma status and vertical cup-to-disc ratio (VCDR). In the baseline trial, users on Amazon Mechanical Turk (Turkers) graded fundus photos for glaucoma and VCDR after reviewing annotated example images. In 2 additional trials, Turkers viewed a 26-slide PowerPoint training or a 10-minute video training and passed a quiz before being permitted to grade the same 60 images. Each image was graded by 10 unique Turkers in all trials. The mode of Turker grades for each image was compared with an adjudicated expert grade to determine accuracy as well as the sensitivity and specificity of Turker grading.

RESULTS: In the baseline study, 50% of the images were graded correctly for glaucoma status, and the area under the receiver operating characteristic curve (AUROC) was 0.75 [95% confidence interval (CI), 0.64-0.87]. Post-PowerPoint training, 66.7% of the images were graded correctly, with an AUROC of 0.86 (95% CI, 0.78-0.95). Finally, Turker grading accuracy was 63.3%, with an AUROC of 0.89 (95% CI, 0.83-0.96), after video training. Overall, Turker VCDR grades for each image correlated with expert VCDR grades (Bland-Altman plot mean difference=-0.02).

CONCLUSIONS: Turkers graded 60 fundus images quickly and at low cost, with grading accuracy, sensitivity, and specificity all improving with brief training. With effective education, crowdsourcing may be an efficient tool to aid in the identification of glaucomatous changes in retinal images.
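The evaluation described above aggregates ten Turker grades per image by taking their mode and compares that consensus against an adjudicated expert grade, with the AUROC summarizing discrimination. A minimal sketch of that aggregation is below; the grades shown are illustrative placeholders, not data from the study, and the AUROC is computed here with the rank-based (Mann-Whitney) formulation rather than any specific software the authors used.

```python
from statistics import mode

# Hypothetical data: each image graded by 10 Turkers (1 = glaucoma,
# 0 = no glaucoma), with an adjudicated expert grade as reference.
# All values are invented for illustration.
turker_grades = {
    "img01": [1, 1, 0, 1, 1, 1, 0, 1, 1, 1],
    "img02": [0, 0, 1, 0, 0, 0, 0, 1, 0, 0],
    "img03": [1, 0, 1, 1, 0, 1, 1, 1, 0, 1],
    "img04": [0, 0, 0, 1, 0, 0, 1, 0, 0, 0],
}
expert_grade = {"img01": 1, "img02": 0, "img03": 1, "img04": 1}

def consensus(grades):
    """Consensus grade for one image: the mode of its Turker grades."""
    return mode(grades)

def auroc(scores, labels):
    """Rank-based AUROC: the probability that a randomly chosen positive
    image receives a higher crowd score than a randomly chosen negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

images = sorted(turker_grades)
labels = [expert_grade[i] for i in images]
preds = [consensus(turker_grades[i]) for i in images]
# The fraction of "glaucoma" votes gives a continuous score for the ROC.
scores = [sum(turker_grades[i]) / len(turker_grades[i]) for i in images]

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(images)
```

With these toy grades, three of four consensus grades match the expert, so `accuracy` is 0.75; in the paper the same mode-versus-expert comparison yielded 50% at baseline, rising to 66.7% after training.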
