Interactive Machine Learning-Based Multi-Label Segmentation of Solid Tumors and Organs


Abstract

We sought to develop and evaluate a fast, accurate, and consistent method for general-purpose segmentation, based on interactive machine learning (IML). To validate our method, we identified retrospective cohorts of 20 brain, 50 breast, and 50 lung cancer patients, as well as 20 spleen scans, with corresponding ground truth annotations. From very brief user training annotations and the adaptive geodesic distance transform, an ensemble of support vector machines (SVMs) is trained, providing a patient-specific model that is applied to the whole image. Two experts segmented each cohort twice with our method and twice manually. The IML method was faster than manual annotation by 53.1% on average. We found significant (p < 0.001) overlap differences for spleen (Dice(IML)/Dice(Manual) = 0.91/0.87), breast tumors (Dice(IML)/Dice(Manual) = 0.84/0.82), and lung nodules (Dice(IML)/Dice(Manual) = 0.78/0.83). For intra-rater consistency, a significant (p = 0.003) difference was found for spleen (Dice(IML)/Dice(Manual) = 0.91/0.89). For inter-rater consistency, significant (p < 0.045) differences were found for spleen (Dice(IML)/Dice(Manual) = 0.91/0.87), breast (Dice(IML)/Dice(Manual) = 0.86/0.81), lung (Dice(IML)/Dice(Manual) = 0.85/0.89), and the non-enhancing (Dice(IML)/Dice(Manual) = 0.79/0.67) and enhancing (Dice(IML)/Dice(Manual) = 0.79/0.84) brain tumor sub-regions, which, in aggregate, favored our method. Quantitative evaluation of speed, spatial overlap, and consistency reveals the benefits of our proposed method compared with manual annotation, for several clinically relevant problems. We publicly release our implementation through CaPTk (Cancer Imaging Phenomics Toolkit) and as an MITK plugin.
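The abstract's pipeline rests on two building blocks: a geodesic distance computed from the user's seed annotations (so distances grow sharply across intensity boundaries, giving the SVM ensemble spatially informed features), and the Dice coefficient used for all overlap comparisons. The sketch below is a minimal, assumed illustration of both ideas on a 2D image, not the paper's exact "adaptive" formulation: the edge-cost weight `beta` and the Dijkstra-style propagation are simplifications chosen for clarity.

```python
import heapq
import numpy as np

def geodesic_distance(image, seeds, beta=1.0):
    """Approximate geodesic distance from seed pixels on a 2D image.

    Each step between 4-connected neighbours costs one spatial unit plus
    `beta` times the intensity difference, so paths that cross strong
    edges accumulate large distances. (`beta` is an assumed parameter,
    not taken from the paper.)
    """
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for r, c in seeds:
        dist[r, c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:  # Dijkstra propagation over the pixel grid
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                cost = 1.0 + beta * abs(float(image[nr, nc]) - float(image[r, c]))
                nd = d + cost
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist

def dice(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

In a setup like the paper's, per-pixel features (intensities plus such distance maps from each labeled class) would feed an SVM ensemble that extends the sparse annotations to the whole image; the Dice values reported above compare the resulting masks against ground truth.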
