Abstract
Microscopy with ultraviolet surface excitation (MUSE) enables rapid fluorescence imaging of tissue surfaces, but MUSE images differ markedly from conventional hematoxylin and eosin (H&E)-stained images, making cancer delineation challenging in routine pathological practice. We investigated the feasibility of deep learning-based semantic segmentation, which assigns a class label to each pixel, for pixel-wise breast cancer detection in MUSE images. Fresh breast tissues from 30 mastectomy patients with breast cancer were stained with terbium and Hoechst and imaged with MUSE. A total of 150 cancerous images (five per case) were manually annotated into cancerous and non-cancerous classes, and 300 non-cancerous images (ten per case) were collected. Models were trained and evaluated using five-fold nested cross-validation, comparing a cancer-only (CO) model trained solely on cancerous images with a cancer plus non-cancer (CN) model trained on both cancerous and non-cancerous images. The CO model achieved a higher Dice score than the CN model (CO, 0.7478; CN, 0.7343). Sliding-window majority-voting post-processing reduced scattered false-positive areas and improved the Dice scores (CO, 0.7984; CN, 0.7849). These results support the feasibility of deep learning-based semantic segmentation for visualizing breast cancer regions and provide a basis for future quantitative applications using MUSE images.
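For context, the Dice score reported above is the standard overlap measure between the predicted and annotated cancerous pixel sets; the abstract does not define it, so the conventional formulation is given here as a reference:

$$
\mathrm{Dice}(P, G) = \frac{2\,\lvert P \cap G \rvert}{\lvert P \rvert + \lvert G \rvert},
$$

where $P$ denotes the set of pixels predicted as cancerous and $G$ the set of pixels annotated as cancerous; a value of 1 indicates perfect agreement between prediction and annotation.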