Abstract
Segmentation and analysis of microscopic optical images are fundamental tasks in biomedicine. However, segmenting regions of interest in these images efficiently, accurately, and robustly remains a significant challenge. The Segment Anything Model (SAM) has demonstrated remarkable generalization in natural-image segmentation tasks, suggesting its potential for segmenting microscopic optical images. In this study, we propose Brain-SAM, a general SAM-based model for the automatic segmentation of microscopic optical images. Specifically, we introduce an automatic prompt encoder that enables high-throughput automated segmentation of these images, and a segmentation optimizer that further enhances the model's segmentation performance. Tests on eight benchmark datasets, covering common scenarios in microscopic optical image segmentation, show that Brain-SAM outperforms specialized segmentation models on the vast majority of segmentation tasks. Notably, on the Brain, Tek, and Lectin3d datasets, Brain-SAM achieves IoU scores of 98.07%, 93.13%, and 88.49%, along with Dice scores of 99.03%, 96.44%, and 93.89%, respectively. Moreover, we provide a series of rich, publicly available brain-science image datasets created using fluorescence micro-optical sectioning tomography (fMOST) technology.
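The abstract reports performance with the standard IoU (Jaccard) and Dice overlap metrics. As a point of reference only (the function name and example masks below are illustrative, not from the paper), a minimal NumPy sketch of how these two scores are conventionally computed for binary segmentation masks:

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """Standard overlap metrics for binary masks.

    IoU  = |P ∩ G| / |P ∪ G|
    Dice = 2|P ∩ G| / (|P| + |G|)
    Both conventions return 1.0 when both masks are empty.
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return float(iou), float(dice)

# Toy 2x3 masks (illustrative): 2 overlapping pixels, 4 in the union.
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt = np.array([[1, 0, 0], [0, 1, 1]])
iou, dice = iou_and_dice(pred, gt)  # IoU = 2/4 = 0.5, Dice = 4/6 ≈ 0.667
```

Note that Dice is always at least as large as IoU for the same prediction, which is consistent with the paired scores reported above (e.g. 98.07% IoU vs. 99.03% Dice on the Brain dataset).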