Abstract
BACKGROUND: Breast ultrasound images contain highly sensitive anatomical and diagnostic information. Deep learning (DL) classifiers trained on such data can inadvertently memorize patient-specific features, creating privacy leakage even when identifiers are removed. Differential privacy (DP) offers a mathematically rigorous bound on this leakage, yet the scarcity of public breast ultrasound datasets and their markedly different imaging characteristics mean that the impact of DP on diagnostic accuracy remains insufficiently quantified. We therefore conducted a systematic experimental assessment of the privacy-utility trade-off to generate parameter-specific evidence for clinical deployment.

METHODS: We curated a 2,149-image breast ultrasound dataset (1,289 benign, 860 malignant) from Roboflow, split 70%/20%/10% into training/validation/test sets. After ImageNet pre-training, we compared three backbones (ResNet50, EfficientNet-B0, and ViT-B/16) by fine-tuning only the last layer for 100 epochs and selecting the best performer. Opacus was then applied to the top-ranked model (ViT-B/16) across 12 (ε, δ) configurations (ε values: 0.01, 0.1, 1, 10; δ values: 1e-5, 1e-3, 0.1), each repeated 10 times. Accuracy, F1-score, and area under the receiver operating characteristic curve (AUC-ROC) were evaluated with 95% confidence intervals (CI) and one-way repeated-measures analysis of variance (ANOVA).

RESULTS: Without DP, ViT-B/16 achieved 94.9% accuracy, 93.4% F1, and 99.2% AUC-ROC. Introducing DP produced a clear monotonic privacy-utility trade-off: performance declined as ε decreased. At ε=10, δ=1e-5 the model retained 89.5% accuracy, 86.0% F1, and 96.1% AUC-ROC, whereas at ε=0.01, δ=1e-5 performance collapsed to 48.3% accuracy, 43.5% F1, and 48.6% AUC-ROC. ANOVA confirmed ε as the dominant factor (P<0.001). A clinically viable compromise emerged at ε=1, δ=0.1 (87.4% accuracy, 82.8% F1, 94.1% AUC-ROC).

CONCLUSIONS: Our systematic evaluation demonstrates that DP can be integrated into breast ultrasound classification while maintaining quantifiable privacy guarantees. Selecting ε≥1 and δ=0.1 preserves diagnostic accuracy close to the non-private baseline and provides a practical reference for privacy-preserving clinical deployment.
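The DP fine-tuning step described in METHODS maps onto a short PyTorch/Opacus setup. The sketch below is illustrative rather than the authors' code: it assumes torchvision's ViT-B/16 ImageNet weights, a frozen backbone with a new two-class head (so only the head needs per-sample gradients), stand-in tensors in place of the ultrasound images, and assumed values for the learning rate, batch size, and clipping bound max_grad_norm.

    # Minimal sketch (not the authors' code): DP fine-tuning of a ViT-B/16 head
    # with Opacus. Data tensors, lr, batch size, and max_grad_norm are assumed.
    import torch
    from torch import nn, optim
    from torch.utils.data import DataLoader, TensorDataset
    from torchvision import models
    from opacus import PrivacyEngine

    # ImageNet pre-trained backbone; freeze everything except a new 2-class
    # head, so only the head's parameters need per-sample gradients.
    model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
    for p in model.parameters():
        p.requires_grad = False
    model.heads = nn.Linear(768, 2)  # benign vs. malignant

    # Stand-in data with the expected 3x224x224 shape (replace with real images).
    images = torch.randn(32, 3, 224, 224)
    labels = torch.randint(0, 2, (32,))
    train_loader = DataLoader(TensorDataset(images, labels), batch_size=8)

    optimizer = optim.SGD(model.heads.parameters(), lr=1e-3)

    # Attach DP-SGD for one (epsilon, delta) configuration; Opacus calibrates
    # the noise multiplier to spend the budget over the stated epochs.
    privacy_engine = PrivacyEngine()
    model, optimizer, train_loader = privacy_engine.make_private_with_epsilon(
        module=model,
        optimizer=optimizer,
        data_loader=train_loader,
        epochs=100,
        target_epsilon=1.0,   # e.g. the epsilon=1, delta=1e-5 configuration
        target_delta=1e-5,
        max_grad_norm=1.0,    # per-sample gradient clipping bound (assumed)
    )

    criterion = nn.CrossEntropyLoss()
    model.train()
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()       # per-sample gradients are recorded here
        optimizer.step()      # clipping and Gaussian noise are applied here

    print(f"epsilon spent so far: {privacy_engine.get_epsilon(delta=1e-5):.2f}")

Repeating this setup over the four ε values and three δ values would reproduce the 12-configuration sweep described above, with target_epsilon and target_delta swapped per run.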