Abstract
Accurate segmentation is essential for building digital twins from volumetric images for high-fidelity composite material analysis. Conventional techniques typically rely on labor-intensive, time-consuming manual effort, which restricts their practical use. This paper presents a deep learning model, MBL-TransUNet, that addresses the challenge of accurate tow-tow boundary identification through a Boundary-guided Learning module. Because fabrics exhibit periodic characteristics, a Multi-scale Feature Fusion module was integrated to capture both local details and global patterns, enabling the effective integration of information across multiple scales. Furthermore, BatchFormerV2 was employed to improve generalization through cross-batch learning. Experimental results show that MBL-TransUNet outperforms TransUNet, improving MIoU by 2.38%; in the zero-shot experiment, MIoU increased by 4.23%. The model demonstrates higher accuracy and robustness than existing methods, and ablation studies confirm that integrating these modules achieves optimal segmentation performance.