Abstract
PURPOSE: We investigate the use of latent diffusion models (LDMs) for synthesizing and enhancing photon-counting chest computed tomography (CT) images. We evaluate the models' capabilities in two main tasks: image generation for dataset augmentation and super-resolution (SR) for improving image quality, with the aim of supporting diagnostic accuracy and improving access to high-resolution data.

APPROACH: The proposed framework combines a variational autoencoder-based latent encoder (AutoencoderKL) with a denoising diffusion model, trained under multiple conditioning configurations. Eight experiments were conducted across the generative and SR tasks, exploring the effects of different conditioning strategies, including segmentation masks and class labels (e.g., lung versus soft tissue), as well as varying loss functions.

RESULTS: Unconditioned LDMs produced hallucinated anatomy and lacked clinical interpretability. Conditioning with segmentation masks and anatomical labels considerably improved structural fidelity. The best image-generation results achieved a multiscale structural similarity index measure (MS-SSIM) = 0.7135 and peak signal-to-noise ratio (PSNR) = 24.53 dB, whereas the SR task reached MS-SSIM = 0.85 and PSNR = 27.31 dB, comparable to recent diffusion-based benchmarks.

CONCLUSIONS: LDMs show strong potential for both augmentation and SR of photon-counting chest CT images. When guided by segmentation masks and class labels, these models preserve anatomical structure and reduce hallucination risks. These results support their use in clinically relevant scenarios, providing controllable, high-fidelity image synthesis.
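As a concrete illustration of one of the reported metrics, PSNR compares a reference image with a degraded or reconstructed one via the mean squared error. The sketch below is a minimal NumPy implementation, not the authors' evaluation code; the image shape and the 8-bit dynamic range (`data_range = 255`) are illustrative assumptions.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio in decibels: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

# Hypothetical example: a constant offset of 16 grey levels on an 8-bit image,
# giving MSE = 256 and thus PSNR = 10 * log10(255^2 / 256) ≈ 24.05 dB.
ref = np.full((64, 64), 128.0)
deg = ref + 16.0
print(round(psnr(ref, deg), 2))  # → 24.05
```

Higher PSNR indicates a smaller pixel-wise error; MS-SSIM, by contrast, measures perceived structural similarity across multiple scales, which is why the two metrics are typically reported together.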