Optimizing Differential Privacy: The Role of Model Parallelism and Iteration Subsampling
Submission No.: 199
Abstract
Guaranteeing data privacy in machine learning is a difficult problem, especially in federated and distributed learning settings. Differential privacy (DP) is the standard approach, and it works by injecting noise into the training process; excessive noise, however, degrades model performance. This paper investigates a different strategy: exploiting structured randomness in model parallelism and iteration subsampling to strengthen privacy without sacrificing accuracy. We introduce a coherent framework that systematically combines model partitioning, in which each client updates only a subset of the model parameters, with balanced iteration subsampling, in which every data point participates in a fixed number of training rounds. Our analysis provides privacy amplification guarantees for both mechanisms, showing that these structured randomization methods yield substantially stronger privacy than traditional Poisson subsampling or independent dropout. We also empirically validate the approach on deep learning models, demonstrating better trade-offs between model utility and privacy protection. By reducing the dependence on high noise levels, the proposed solution offers a scalable and efficient approach to privacy-preserving machine learning. The paper contributes to the broader goal of secure AI through its treatment of optimization under differential privacy, balancing privacy, computational efficiency, and model accuracy.
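The abstract itself contains no code, so the following is a minimal illustrative sketch, in Python with NumPy, of the balanced iteration subsampling idea described above: each data point participates in exactly k of the training rounds, in contrast to Poisson subsampling, where participation in each round is an independent coin flip. All function and parameter names here are assumptions made for illustration, not taken from the paper.

```python
import numpy as np

def balanced_subsampling_schedule(n_points, n_rounds, k, rng=None):
    # Each data point picks exactly k of the n_rounds training rounds
    # uniformly at random, so every point's participation count is fixed.
    rng = np.random.default_rng() if rng is None else rng
    rounds = [[] for _ in range(n_rounds)]
    for i in range(n_points):
        for t in rng.choice(n_rounds, size=k, replace=False):
            rounds[t].append(i)
    return [np.array(r, dtype=int) for r in rounds]

def poisson_subsampling_schedule(n_points, n_rounds, q, rng=None):
    # Baseline for comparison: each point joins each round independently
    # with probability q, so its participation count is Binomial(n_rounds, q).
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random((n_rounds, n_points)) < q
    return [np.flatnonzero(row) for row in mask]
```

Setting k = q * n_rounds makes the two schedules comparable in expected batch size; the balanced variant removes all variance in per-point participation, which is the kind of structured randomness the abstract credits for the improved privacy guarantees.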
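Along the same lines, here is a minimal sketch of the model-partitioning side of the framework, in which each client updates only its own disjoint block of parameters; the paper's actual assignment scheme may differ, and the helper names are hypothetical.

```python
import numpy as np

def assign_parameter_blocks(n_params, n_clients, rng=None):
    # Randomly partition the parameter indices into disjoint blocks,
    # one per client; each client then updates only its own slice.
    rng = np.random.default_rng() if rng is None else rng
    perm = rng.permutation(n_params)
    return np.array_split(perm, n_clients)

def apply_partial_update(params, block, grad, lr=0.1):
    # Update only the coordinates in `block`, leaving the rest of the
    # model untouched; this per-client restriction is the structured
    # randomness the framework pairs with iteration subsampling.
    new_params = params.copy()
    new_params[block] -= lr * grad[block]
    return new_params
```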
Keywords
Differential Privacy, Model Parallelism, Federated Learning, Iteration Subsampling, Privacy Amplification, Machine Learning Security
Authors
Kanchan Yadav
GLA University, Mathura
Krishnakant Dixit
GLA University