Abstract
Digital chest tomosynthesis refers to the 3D reconstruction of low-dose projection images acquired over a limited angular range. The reconstructions have lower depth resolution and are more prone to motion artifacts than computed tomography (CT). While recent deep learning approaches aim to reconstruct full-resolution CT volumes from projections, they are computationally demanding due to the high resolution and inherently 3D nature of the task. In this study, we propose a more efficient alternative. Our deep learning-based framework reconstructs sagittal CT slices from small patches of projection data, significantly lowering memory demands. Rather than predicting continuous Hounsfield unit (HU) values, we segment voxels into air, soft tissue, and bone classes. Our results show that the method captures coarse structural features and depth information with high consistency, but struggles to reconstruct fine details. While not yet suitable for clinical deployment, the approach highlights a promising direction for low-resource tomosynthesis-based volumetric imaging.