Abstract
Material decomposition in X-ray imaging is essential for enhancing tissue differentiation and reducing radiation dose, but the clinical adoption of photon-counting detectors (PCDs) is limited by their high cost and technical complexity. To address this, we propose Dual-head Pix2Pix, a PCD-guided deep learning framework that enables simultaneous iodine and bone decomposition from single-energy X-ray projections acquired with conventional energy-integrating detectors (EIDs). The model was trained and tested on 1440 groups of EID projections paired with their corresponding iodine/bone decomposition images. Experimental results demonstrate that Dual-head Pix2Pix outperforms baseline models. For iodine decomposition, it achieved a mean absolute error (MAE) of 5.30 ± 1.81, an ~10% improvement over Pix2Pix (5.92) and a substantial advantage over CycleGAN (10.39). For bone decomposition, the MAE was reduced to 9.55 ± 2.49, an ~6% improvement over Pix2Pix (10.18). Moreover, Dual-head Pix2Pix consistently achieved the highest MS-SSIM, PSNR, and Pearson correlation coefficients across all benchmarks. In addition, we performed cross-domain validation on projection images acquired from a conventional EID-CT system; the model achieved effective separation of iodine and bone in this new domain, demonstrating strong generalization beyond the training distribution. In summary, Dual-head Pix2Pix provides a cost-effective, scalable, and hardware-friendly solution for accurate dual-material decomposition, paving the way for broader clinical and industrial adoption of material-specific imaging without requiring PCDs.
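The dual-head idea described above — one shared encoder producing features that feed two material-specific output branches — can be sketched in a few lines. The toy NumPy example below is only an illustration of that data flow; all layer names, shapes, and weights are hypothetical and do not come from the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w):
    # Toy stand-in for a learned layer: linear map + ReLU.
    return np.maximum(x @ w, 0.0)

# Hypothetical input: one flattened 8x8 single-energy EID projection patch.
x = rng.standard_normal((1, 64))

# Shared encoder weights and two material-specific decoder heads
# (shapes chosen for illustration only).
w_enc = rng.standard_normal((64, 32)) * 0.1
w_iodine = rng.standard_normal((32, 64)) * 0.1
w_bone = rng.standard_normal((32, 64)) * 0.1

feat = layer(x, w_enc)        # shared latent features
iodine_map = feat @ w_iodine  # head 1: iodine decomposition output
bone_map = feat @ w_bone      # head 2: bone decomposition output

print(iodine_map.shape, bone_map.shape)  # (1, 64) (1, 64)
```

The key design point this illustrates is that both material maps are predicted in a single forward pass from shared features, rather than by two independently trained networks.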