Abstract
We address the problem of empirical privacy auditing for differentially private stochastic gradient descent (DP-SGD) under the hidden state threat model, where adversaries observe only the final model parameters. Our work introduces a gradient-crafting framework that enables tighter auditing by allowing adversaries to pre-specify worst-case gradient sequences without access to intermediate training checkpoints. We demonstrate that when a data point is used at every optimization step, hiding intermediate models provides no privacy amplification beyond standard composition bounds. For less frequent data usage patterns, we identify regimes where privacy amplification may occur for non-convex problems, though the effect is weaker than in convex settings. Our findings clarify the actual privacy loss in practical DP-SGD deployments and provide foundational insights for improved privacy accounting in the hidden state model.

SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1038/s41598-026-38537-0.
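To make the auditing setup concrete, the following is a minimal sketch (not the paper's implementation) of hidden-state auditing for DP-SGD with a pre-specified worst-case gradient canary: the canary gradient sequence is fixed before training, inserted at every step, and the auditor sees only the final parameters. All names and constants (run_dpsgd, CLIP_NORM, LOT_SIZE, the scoring rule) are illustrative assumptions.

```python
import numpy as np

CLIP_NORM = 1.0    # per-example clipping bound C (assumed)
NOISE_MULT = 1.0   # noise multiplier sigma (assumed)
LR = 0.1           # learning rate
STEPS = 200        # number of optimization steps
LOT_SIZE = 32      # examples per step
DIM = 10           # parameter dimension

rng = np.random.default_rng(0)

def clip(g, c=CLIP_NORM):
    """Clip a per-example gradient to L2 norm at most c."""
    n = np.linalg.norm(g)
    return g * min(1.0, c / n) if n > 0 else g

def run_dpsgd(include_canary, canary_grads):
    """Standard DP-SGD loop; only the final parameters are released."""
    theta = np.zeros(DIM)
    for t in range(STEPS):
        # Placeholder per-example gradients for ordinary data (illustrative only).
        grads = [clip(g) for g in rng.normal(size=(LOT_SIZE - 1, DIM))]
        # In the "canary in" world, a crafted gradient fixed before training
        # (independent of intermediate iterates) fills the last batch slot.
        grads.append(clip(canary_grads[t]) if include_canary else np.zeros(DIM))
        noisy_sum = np.sum(grads, axis=0) + rng.normal(0.0, NOISE_MULT * CLIP_NORM, size=DIM)
        theta -= LR * noisy_sum / LOT_SIZE
    return theta  # intermediate models stay hidden from the auditor

# Worst-case canary: push a fixed direction at full clip norm at every step.
direction = np.zeros(DIM)
direction[0] = 1.0
canary_grads = [CLIP_NORM * direction for _ in range(STEPS)]

# Distinguishing statistic: projection of the final model onto the canary direction.
score_in = run_dpsgd(True, canary_grads) @ direction
score_out = run_dpsgd(False, canary_grads) @ direction
print(f"canary-in score: {score_in:.3f}, canary-out score: {score_out:.3f}")
```

Repeating both runs many times and comparing the two score distributions yields an empirical lower bound on the privacy loss, which can then be set against the standard composition bound for the same noise multiplier and number of steps.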