Tighter privacy auditing of differentially private stochastic gradient descent in the hidden state threat model

Abstract

We address the problem of empirical privacy auditing for differentially private stochastic gradient descent (DP-SGD) under the hidden state threat model, where adversaries only observe the final model parameters. Our work introduces a gradient-crafting framework that enables tighter auditing by allowing adversaries to pre-specify worst-case gradient sequences without access to intermediate training checkpoints. We demonstrate that when a data point is used at every optimization step, hiding intermediate models provides no privacy amplification beyond standard composition bounds. For less frequent data usage patterns, we identify regimes where privacy amplification may occur for non-convex problems, though the effect is weaker than in convex settings. Our findings clarify the actual privacy loss in practical DP-SGD deployments and provide foundational insights for improved privacy accounting in the hidden state model.

Supplementary information: The online version contains supplementary material available at 10.1038/s41598-026-38537-0.
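
To make the gradient-crafting idea concrete, the following is a minimal, hypothetical sketch of the kind of auditing game the abstract describes: the adversary fixes one worst-case gradient direction before training (no intermediate checkpoints needed), the audited "canary" point contributes that crafted gradient at every DP-SGD step, and membership is inferred from the final model alone. All parameter values, the zero-gradient model of the honest data, and the threshold attack are illustrative assumptions, not the paper's actual experimental setup; the epsilon estimate uses the standard hypothesis-testing lower bound eps >= ln(TPR/FPR), ignoring delta and sampling error (a rigorous audit would add confidence intervals, e.g. Clopper-Pearson).

```python
import numpy as np

# Hypothetical parameters for the sketch (not taken from the paper).
DIM = 10       # model dimension
STEPS = 100    # optimization steps
CLIP = 1.0     # per-example gradient clipping norm C
SIGMA = 2.0    # noise multiplier (Gaussian noise std = SIGMA * CLIP)
LR = 0.1       # learning rate
TRIALS = 2000  # repetitions of the distinguishing game per world

rng = np.random.default_rng(0)

# Worst-case crafted gradient: a fixed direction of maximal norm C,
# pre-specified before training, with no access to intermediate models.
crafted = np.zeros(DIM)
crafted[0] = CLIP

def dp_sgd_final_stat(include_canary: bool) -> float:
    """Run a toy DP-SGD trajectory and return the test statistic:
    the final parameter's projection onto the crafted direction."""
    theta = np.zeros(DIM)
    for _ in range(STEPS):
        # Honest examples contribute a zero gradient in this toy model,
        # isolating the canary's signal (an idealizing assumption).
        grad = crafted if include_canary else np.zeros(DIM)
        noise = rng.normal(0.0, SIGMA * CLIP, size=DIM)
        theta = theta - LR * (grad + noise)
    return theta[0]

# Hidden-state distinguishing game: the adversary sees only the
# final model and guesses whether the canary was in the dataset.
stats_in = np.array([dp_sgd_final_stat(True) for _ in range(TRIALS)])
stats_out = np.array([dp_sgd_final_stat(False) for _ in range(TRIALS)])

# Threshold attack on the projection; the canary drags theta[0]
# negative, so "member" is guessed when the statistic is below t.
best_eps = 0.0
for t in np.linspace(stats_in.min(), stats_out.max(), 200):
    tpr = np.mean(stats_in < t)
    fpr = np.mean(stats_out < t)
    if 0 < fpr < 1 and tpr > fpr:
        # Hypothesis-testing lower bound on epsilon (delta ignored).
        best_eps = max(best_eps, np.log(tpr / fpr))

print(f"empirical epsilon lower bound ~= {best_eps:.2f}")
```

Because the crafted point participates in every optimization step, this toy game sits exactly in the regime where, per the abstract, hiding intermediate models yields no amplification beyond composition; spacing the canary's occurrences out (say, once every k steps) corresponds to the less frequent usage patterns where hidden-state amplification may appear.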
