Abstract
Layer-wise segmentation of three-dimensional (3D) ultrasound for chronic lower back pain (cLBP) requires a large number of labeled images. To mitigate this burden, we propose a Generative Reinforcement Network (GRN) that integrates a generative adversarial network (GAN) framework with a segmentation model. The generator is coupled to a segmentor via segmentation-aware feedback and regularized by a discriminator. At each iteration, the segmentation loss is back-propagated into the generator to produce easy-to-learn reconstructions that directly reduce downstream segmentation error (reinforcement augmentation, RAug), while adversarial feedback from the discriminator (PatchGAN) encourages realistic reconstructions. We also introduce segmentation-guided enhancement (SGE), in which the pre-trained generator enhances input images at inference time to improve segmentation. GRN has two variants: GRN-SEL, which uses RAug only, and GRN-SSL, which additionally applies interpolation-consistency training (ICT) on unlabeled data by interpolating generator-reconstructed pairs and enforcing prediction consistency. We evaluate GRN primarily on a fully annotated lumbar back ultrasound dataset (MUSCLE), and on two public benchmarks (a skin-lesion dataset and Kvasir) to demonstrate its generalizability. On the MUSCLE dataset, GRN-SEL with SGE reduces labeling effort by up to 70% while improving the Dice Similarity Coefficient (DSC) by 1.98% compared with models trained on the fully labeled dataset. Across all three datasets and label fractions, GRN consistently outperforms state-of-the-art semi-supervised methods. These results demonstrate the effectiveness of the GRN framework in achieving strong segmentation performance with significantly less labeled data. The source code is publicly available at https://github.com/Francisdadada/GRN.
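The interpolation-consistency step used by GRN-SSL can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the arrays, the mixing coefficient, and the linear `predict` function (standing in for the segmentor network) are all assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two generator-reconstructed images (toy 1-D "images" for illustration).
x1 = rng.normal(size=8)
x2 = rng.normal(size=8)

# Hypothetical stand-in for the segmentor: a fixed linear map.
# The real segmentor is a neural network; linearity is only for illustration.
W = rng.normal(size=(8, 8))

def predict(x):
    return W @ x

# ICT step: mix the two reconstructions and require the prediction on the
# mixture to match the mixture of the individual predictions.
lam = 0.3
x_mix = lam * x1 + (1.0 - lam) * x2
target = lam * predict(x1) + (1.0 - lam) * predict(x2)

# Consistency loss (mean squared error); in training this would be
# minimized on unlabeled data alongside the supervised segmentation loss.
consistency_loss = np.mean((predict(x_mix) - target) ** 2)
```

For the linear stand-in the loss is exactly zero; for a nonlinear segmentor it is generally nonzero, and minimizing it encourages the model's predictions to behave linearly between reconstructed pairs.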