UltraLight VM-UNet: Parallel Vision Mamba significantly reduces parameters for skin lesion segmentation


Abstract

Traditionally, most approaches improve model segmentation performance by adding more complex modules. This is ill-suited to the medical field, and especially to mobile medical devices, where computationally heavy models cannot run in real clinical environments due to resource constraints. Recently, state-space models, represented by Mamba, have become strong competitors to traditional convolutional neural networks and Transformers. In this paper, we explore in depth the key factors that drive parameter counts in Mamba and, based on this analysis, propose the UltraLight Vision Mamba UNet (UltraLight VM-UNet). Specifically, we propose a method for processing features in parallel with Vision Mamba, named the PVM Layer, which achieves competitive performance with the lowest computational complexity while keeping the overall number of processed channels constant. Segmentation experiments on three public skin-lesion datasets show that UltraLight VM-UNet delivers competitive performance with only 0.049M parameters and 0.060 GFLOPs.
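The core idea of the PVM Layer is to split the channel dimension into equal groups and run each group through Mamba in parallel: the total channel count stays constant, but each Mamba block sees far fewer channels, which sharply reduces parameters. Below is a minimal PyTorch sketch of this idea, assuming the `mamba_ssm` package's `Mamba` block (which requires a CUDA GPU); the four-way split, the shared Mamba block, and the learnable residual scale are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of a Parallel Vision Mamba (PVM) layer.
# Assumes `pip install mamba-ssm` and a CUDA device; details beyond the
# abstract (four-way split, shared block, residual scale) are assumptions.
import torch
import torch.nn as nn
from mamba_ssm import Mamba

class PVMLayer(nn.Module):
    def __init__(self, dim: int, d_state: int = 16, d_conv: int = 4, expand: int = 2):
        super().__init__()
        assert dim % 4 == 0, "channel count must be divisible by 4"
        self.norm = nn.LayerNorm(dim)
        # One Mamba block operating on dim/4 channels: the parameter count
        # shrinks because it scales with the per-branch channel width.
        self.mamba = Mamba(d_model=dim // 4, d_state=d_state,
                           d_conv=d_conv, expand=expand)
        # Learnable residual scale (an assumption for this sketch).
        self.skip_scale = nn.Parameter(torch.ones(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) -> token sequence (B, H*W, C)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)
        normed = self.norm(tokens)
        # Split channels into four equal groups and run each through the
        # shared Mamba block; the total channel count is unchanged.
        parts = torch.chunk(normed, 4, dim=-1)
        out = torch.cat([self.mamba(p) for p in parts], dim=-1)
        out = out + self.skip_scale * tokens  # residual connection
        return out.transpose(1, 2).reshape(b, c, h, w)
```

As a quick usage check, `PVMLayer(dim=64).cuda()(torch.randn(1, 64, 32, 32, device="cuda"))` returns a tensor of the same `(1, 64, 32, 32)` shape, so the layer can drop into a UNet stage without changing the surrounding channel plan.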
