Abstract
INTRODUCTION: Segmentation of brain tumor MRI images is among the most challenging tasks in medical imaging because of the variability and complexity of tumor tissues. This study introduces PoSAM-ULTRA, an improved segmentation framework designed to enhance the accuracy and robustness of brain tumor segmentation.
METHODS: PoSAM-ULTRA employs the Polar-Bear Foraging Optimization (PBFO) algorithm for hyperparameter tuning and utilizes an improved Segment Anything Model (SAM) as its backbone. The framework is built on a ResNet-34 encoder modified to accept a four-channel input (RGB plus prior information). Multi-scale features are extracted via DownBlocks, and discriminative feature learning is strengthened with the Convolutional Block Attention Module (CBAM). Attention Gates provide effective skip connections, and a multistage decoder performs robust upsampling and feature integration. The model was evaluated on a dataset from the Integrative Genomic Analysis of Diffuse Lower Grade Gliomas (LGG) and compared with UNet, UNet++, and nnUNet.
RESULTS: PoSAM-ULTRA outperformed all baseline models, achieving a Dice score of 91.4%, an IoU of 88.9%, an Accuracy of 99.8%, a Precision of 95.2%, and a Recall of 93.3%.
DISCUSSION: These results demonstrate the robustness and reliability of PoSAM-ULTRA in handling the complexity of brain tumor MRI segmentation, highlighting its effectiveness for challenging medical image segmentation tasks.
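The CBAM step named in METHODS can be illustrated with a minimal NumPy sketch of its channel-attention branch: global average- and max-pooled descriptors pass through a shared two-layer MLP, and the sigmoid of their sum reweights each channel. The weights `w1`/`w2` and the reduction ratio are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam_channel_attention(feat, w1, w2):
    """Channel-attention branch of CBAM on a (C, H, W) feature map.

    w1: (C//r, C) and w2: (C, C//r) are the shared-MLP weights,
    where r is the reduction ratio; both are illustrative placeholders.
    """
    avg_pool = feat.mean(axis=(1, 2))               # (C,) global average pooling
    max_pool = feat.max(axis=(1, 2))                # (C,) global max pooling
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)    # shared 2-layer MLP with ReLU
    scale = sigmoid(mlp(avg_pool) + mlp(max_pool))  # (C,) per-channel weights in (0, 1)
    return feat * scale[:, None, None]              # reweight each channel

# Example: a 64-channel feature map with an assumed reduction ratio r = 16
rng = np.random.default_rng(0)
feat = rng.standard_normal((64, 32, 32))
w1 = rng.standard_normal((4, 64)) * 0.1   # 64 // 16 = 4 hidden units
w2 = rng.standard_normal((64, 4)) * 0.1
out = cbam_channel_attention(feat, w1, w2)
print(out.shape)  # (64, 32, 32)
```

CBAM's spatial-attention branch (a convolution over channel-pooled maps) is omitted here for brevity; in the full framework this reweighting would follow each encoder stage before the attention-gated skip connections.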