Uncertainty- and hardness-weighted loss functions for medical image segmentation


Abstract

Accurate segmentation of medical images is essential for various image processing tasks and is now predominantly achieved using deep learning techniques. However, existing approaches often employ loss functions that fail to account for pixel-level differences in prediction uncertainty or hardness. This limitation frequently results in relatively large segmentation errors, particularly in object boundary regions. To address this limitation, we developed a novel class of uncertainty- and hardness-weighted loss functions by introducing two distinct pixel-wise weighting schemes: probability-guided uncertainty (PGU) and region-enhanced hardness (REH) weights. These weights, derived from the differences between network predictions and their corresponding ground truths, were designed to emphasize challenging pixels while reducing segmentation uncertainties. We validated these loss functions by integrating them with two classical neural networks, namely the Swin Transformer-based U-shaped network (Swin-Unet) and the V-shaped network (V-Net), to segment two- and three-dimensional target objects across four different image datasets: the Retinal Fundus Glaucoma Challenge (REFUGE) dataset, the Retinal Vascular Tree Analysis (RETA) dataset, an optical coherence tomography (OCT) dataset, and the Atria Segmentation Challenge (ASC) dataset. Extensive experiments demonstrated that our developed loss functions outperformed classical losses, such as cross-entropy (CE) and Dice losses, along with their variants, highlighting the effectiveness and generalization of the introduced weighting schemes. The source code is available at https://github.com/wmuLei/uhLoss.
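The general idea described above, weighting each pixel's loss by a term derived from the discrepancy between the prediction and the ground truth, can be sketched as follows. This is a minimal illustrative example, not the paper's actual PGU or REH formulation (those are defined in the linked repository); the weight `1 + |p - y|**gamma` and the parameter `gamma` are hypothetical stand-ins chosen only to show the mechanism of up-weighting hard or uncertain pixels.

```python
import numpy as np

def discrepancy_weighted_bce(probs, targets, gamma=2.0, eps=1e-7):
    """Pixel-wise weighted binary cross-entropy (illustrative sketch).

    probs   : predicted foreground probabilities, any array shape
    targets : binary ground-truth labels, same shape as probs
    gamma   : hypothetical exponent controlling how sharply hard
              pixels (large |p - y|) are emphasized

    NOTE: the weight below is an assumption for illustration; the
    paper's PGU/REH weights are defined in its repository, not here.
    """
    probs = np.clip(probs, eps, 1.0 - eps)
    # Pixels whose prediction disagrees strongly with the ground
    # truth get weights above 1; easy pixels stay near weight 1.
    weights = 1.0 + np.abs(probs - targets) ** gamma
    bce = -(targets * np.log(probs) + (1.0 - targets) * np.log(1.0 - probs))
    return float(np.mean(weights * bce))
```

Because every weight is at least 1, this loss upper-bounds the plain mean cross-entropy, and the gap grows with the number of confidently wrong pixels, which is exactly the behavior a hardness-weighted loss is meant to produce near object boundaries.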
