Abstract
Brain tumor segmentation is a crucial task in medical imaging with a significant impact on diagnosis and treatment planning. This study introduces a novel 3D pooling layer within the 3D U-Net architecture to enhance segmentation accuracy from multimodal MRI. The method addresses the limitations of conventional pooling techniques by considering the interdependencies between neighboring MRI voxels, thereby improving the model's ability to capture complex tumor structures. To increase robustness to intensity variation, two identical networks were trained independently on complementary normalization pipelines, and predictions from selected epochs were fused by simple probability averaging to form the final ensemble. Evaluation was conducted on BraTS2020 using five-fold cross-validation. On the validation set, the ensemble achieved Dice (ET/TC/WT) = 0.8299/0.8882/0.8986 and HD95 = 4.40/4.95/11.14, reflecting consistent gains over max-pooling variants and comparing favorably with recent methods while using a lightweight fusion mechanism. These results confirm the effectiveness of the proposed 3D pooling approach and pave the way for more robust algorithms in automated brain tumor segmentation.
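The "simple probability averaging" fusion mentioned above can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: the function name `average_probabilities` and the toy arrays are assumptions, and the actual pipeline would operate on per-class softmax volumes produced by each trained network.

```python
import numpy as np

def average_probabilities(prob_maps):
    """Fuse per-model probability maps by simple averaging.

    prob_maps: list of arrays with identical shape (C, D, H, W),
    each holding per-class probabilities from one model/epoch.
    Returns the averaged probability map and its argmax label map.
    """
    stacked = np.stack(prob_maps, axis=0)   # (M, C, D, H, W)
    fused = stacked.mean(axis=0)            # average over the M models
    labels = fused.argmax(axis=0)           # final segmentation labels
    return fused, labels

# Toy example: two "models", 2 classes, a 1x2x2 volume.
p1 = np.array([[[[0.9, 0.2], [0.4, 0.7]]],
               [[[0.1, 0.8], [0.6, 0.3]]]])
p2 = np.array([[[[0.5, 0.6], [0.2, 0.9]]],
               [[[0.5, 0.4], [0.8, 0.1]]]])
fused, labels = average_probabilities([p1, p2])
```

Averaging probabilities (rather than hard labels) lets a confident model outvote an uncertain one at each voxel, which is why this lightweight fusion tends to be robust.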