Evaluation of the precision and accuracy in the classification of breast histopathology images using the MobileNetV3 model


Abstract

Accurate surgical pathological assessment of breast biopsies is essential to the proper management of breast lesions. Identifying histological features such as nuclear pleomorphism, increased mitotic activity, cellular atypia, patterns of architectural disruption, and invasion through basement membranes into surrounding stroma and normal structures, including invasion of vascular and lymphatic spaces, helps to classify lesions as malignant. This visual assessment is repeated on numerous slides taken at various sections through the resected tumor, each at different magnifications. Computer vision models have been proposed to assist human pathologists in classification tasks such as these. Using MobileNetV3, a convolutional architecture designed to achieve high accuracy with a compact parameter footprint, we attempted to classify breast cancer images in the BreakHis_v1 breast pathology dataset to determine the out-of-the-box performance of this model. Using transfer learning to take advantage of ImageNet embeddings without special feature extraction, we were able to correctly classify histopathology images broadly as benign or malignant with 0.98 precision, 0.97 recall, and an F1 score of 0.98. Performance on histological subcategories was more varied: classification was most successful for ductal carcinoma (accuracy 0.95) and least successful for lobular carcinoma (accuracy 0.59). ROC assessment of the model as a multiclass classifier yielded AUC values ≥0.97 in both the benign and malignant subsets. Compared with previous efforts, which used older and larger convolutional network architectures with feature-extraction pre-processing, our work highlights that modern, resource-efficient architectures can classify histopathological images with accuracy that at least matches that of previous efforts, without the need for labor-intensive feature extraction protocols. Suggestions to further refine the model are discussed.
