Abstract
Virtual staining technology offers a promising way around the time-consuming and sample-consuming nature of conventional histochemical staining in breast cancer pathology. This study presents a novel framework that integrates multispectral autofluorescence imaging with an optimized deep learning architecture to generate high-fidelity, label-free, hematoxylin and eosin (H&E)-equivalent images. We constructed a multimodal database comprising clinical specimens, mouse models, and organoid co-cultures. By augmenting CycleGAN with saliency and global feature consistency losses, we significantly improved autofluorescence-to-H&E virtual staining performance. Because the framework learns from unpaired datasets, it eliminates the need for pixel-level registration. In blinded evaluations by five board-certified pathologists, 82.2% of virtually stained images received clinical scores comparable to those of conventionally stained images, with no statistically significant differences in key diagnostic indices. Moreover, the approach is non-destructive: the same tissue section remains intact for subsequent assays such as single-nucleus RNA sequencing or spatial transcriptomics, maximizing the utility of precious biopsy samples. In summary, this robust framework enables the rapid, non-destructive generation of diagnostic-grade breast cancer pathology images, making it a potential tool for clinical diagnostics and mechanistic studies across diverse biological systems.
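To make the loss design mentioned above concrete, the sketch below shows one way the saliency and global feature consistency terms could be added on top of a standard CycleGAN objective. This is an illustrative assumption, not the paper's implementation: the Sobel-gradient saliency proxy, the channel-statistics "global feature" extractor, the L1 comparisons, and all names and weights are placeholder choices.

```python
# Illustrative sketch (PyTorch) of auxiliary consistency terms for a CycleGAN
# generator update; the saliency and global-feature definitions are assumptions.
import torch
import torch.nn.functional as F


def saliency_map(x: torch.Tensor) -> torch.Tensor:
    """Approximate per-pixel saliency via Sobel gradient magnitude of the
    intensity channel (a simple stand-in for a learned saliency detector)."""
    gray = x.mean(dim=1, keepdim=True)
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=x.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)


def global_features(x: torch.Tensor) -> torch.Tensor:
    """Image-level statistics (channel-wise mean and std) as a stand-in for a
    global feature embedding."""
    return torch.cat([x.mean(dim=(2, 3)), x.std(dim=(2, 3))], dim=1)


def consistency_losses(real_af, fake_he, lambda_sal=1.0, lambda_glob=1.0):
    """Saliency consistency keeps salient structure aligned between the input
    autofluorescence image and the generated H&E image; global feature
    consistency keeps their image-level statistics close."""
    loss_sal = F.l1_loss(saliency_map(fake_he), saliency_map(real_af))
    loss_glob = F.l1_loss(global_features(fake_he), global_features(real_af))
    return lambda_sal * loss_sal + lambda_glob * loss_glob


# Hypothetical usage inside the generator step (G_AF2HE: autofluorescence -> H&E):
#   fake_he = G_AF2HE(real_af)
#   loss_G = loss_gan + lambda_cyc * loss_cycle + consistency_losses(real_af, fake_he)
```

Because both auxiliary terms compare the generated H&E image with its unpaired autofluorescence input rather than with a registered ground-truth slide, they fit naturally into the unpaired training setting described in the abstract.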