Abstract
Diffuse optical tomography (DOT) performed using deep learning allows high-speed reconstruction of tissue optical properties and could thereby enable image-guided scanning, e.g., to enhance clinical breast imaging. Previously published models are geometry-specific and therefore require extensive data generation and training for each use case, restricting the scanning protocol at the point of use. To overcome these obstacles, a transformer-based architecture is proposed that encodes spatially unstructured DOT measurements, enabling a single trained model to handle arbitrary scanning pathways and measurement densities. The model is demonstrated with breast-tissue-emulating simulated and phantom data, yielding, for 24 mm-deep absorption (μ_a) and reduced scattering (μ_s') images, respectively, average RMSEs of 0.0095 ± 0.0023 cm⁻¹ and 1.95 ± 0.78 cm⁻¹, Sørensen–Dice coefficients of 0.55 ± 0.12 and 0.67 ± 0.10, and anomaly contrasts of 79 ± 10% and 93.3 ± 4.6% of the ground-truth contrast, at an effective imaging speed of 14 Hz. The average absolute μ_a and μ_s' values of homogeneous simulated examples were within 10% of the true values.
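The central architectural claim, a transformer that ingests spatially unstructured measurements of arbitrary count, can be sketched roughly as follows. This is a minimal NumPy illustration under assumed conventions (a six-element token of source/detector coordinates plus amplitude and phase, one single-head self-attention layer, mean pooling, random weights standing in for trained parameters); it is not the paper's actual model.

```python
# Sketch: each DOT measurement becomes one token, self-attention mixes the
# tokens regardless of their number or order, and mean pooling yields a
# fixed-size code from which an image head could regress mu_a / mu_s'.
# Token layout and layer sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
D = 32  # embedding width (assumed)

# Random projections standing in for trained parameters.
W_embed = rng.standard_normal((6, D)) / np.sqrt(6)
W_q = rng.standard_normal((D, D)) / np.sqrt(D)
W_k = rng.standard_normal((D, D)) / np.sqrt(D)
W_v = rng.standard_normal((D, D)) / np.sqrt(D)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def encode(measurements):
    """measurements: (N, 6) array of [sx, sy, dx, dy, amp, phase] tokens.
    N may vary per scan: no fixed grid or probe geometry is assumed."""
    tokens = measurements @ W_embed            # (N, D) token embeddings
    q, k, v = tokens @ W_q, tokens @ W_k, tokens @ W_v
    attn = softmax(q @ k.T / np.sqrt(D))       # (N, N) self-attention
    mixed = attn @ v                           # context-mixed tokens
    return mixed.mean(axis=0)                  # permutation-invariant code

# Two scans with different measurement counts map to same-size codes,
# and shuffling the measurement order leaves the code unchanged.
scan_a = rng.standard_normal((57, 6))
scan_b = rng.standard_normal((133, 6))
code_a, code_b = encode(scan_a), encode(scan_b)
shuffled = scan_a[rng.permutation(len(scan_a))]
print(code_a.shape == code_b.shape)            # True
print(np.allclose(encode(shuffled), code_a))   # True
```

Because the measurement coordinates travel inside each token rather than being baked into the input layout, the same trained weights can in principle serve any scanning pathway or measurement density, which is the property the abstract attributes to the proposed model.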