Dynamic Noise Injection for Gradient Leakage Resistance in Federated Learning
Abstract
Federated learning enables collaborative machine learning on decentralized data, but it faces a critical privacy threat from gradient leakage attacks, which can reconstruct sensitive user data from shared model updates. Differential privacy defenses based on static noise injection are common, yet they often yield a poor privacy-utility trade-off by adding noise indiscriminately and thereby degrading model accuracy. Conventional dynamic methods also fall short, as they typically fail to adapt to the fine-grained, contextual dynamics of local training. To overcome these limitations, we propose FedDynaNoise, a privacy-preserving framework built around a triple-adaptive noise injection mechanism. The noise level is calibrated dynamically from three factors: the training round, the layer-wise gradient sensitivity, and the prediction entropy. This multi-faceted approach ensures that the privacy budget is spent efficiently and where it matters most. We evaluated FedDynaNoise on four image classification benchmarks against gradient inversion attacks. FedDynaNoise provides robust privacy protection, raising the reconstruction mean squared error to approximately 0.65, compared with roughly 0.13 for the static noise baseline and 0.31 for the conventional dynamic noise baseline. This strong defense comes with minimal impact on model utility: FedDynaNoise reaches a test accuracy of 93.9%, only slightly below the 95.1% of a non-private model. FedDynaNoise thus offers a superior privacy-utility balance and a practical, effective path toward more secure, trustworthy, and accurate federated learning systems.
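To make the triple-adaptive idea concrete, the Python sketch below shows one way per-layer Gaussian noise could be scaled by (1) the training round, (2) a layer-wise sensitivity proxy, and (3) the prediction entropy of the current batch. It is a minimal illustration only: the function name triple_adaptive_noise, the gradient-norm sensitivity proxy, and the specific scaling rules are assumptions made here for exposition, not the published FedDynaNoise algorithm.

import math
import torch
import torch.nn.functional as F

def triple_adaptive_noise(gradients, logits, round_idx, total_rounds,
                          base_sigma=0.1, min_sigma=0.01):
    # Hypothetical sketch of a triple-adaptive noise schedule
    # (not the published FedDynaNoise method).
    # (1) Round factor: anneal the noise scale as training converges.
    round_factor = 1.0 - round_idx / max(total_rounds, 1)

    # (3) Entropy factor: normalized prediction entropy of the batch;
    # more uncertain predictions receive proportionally more noise.
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=-1).mean()
    entropy_factor = float(entropy) / math.log(probs.shape[-1])

    noisy = {}
    for name, grad in gradients.items():
        # (2) Sensitivity factor: per-layer gradient norm as a leakage proxy.
        sensitivity = float(grad.norm()) / (grad.numel() ** 0.5)
        sigma = max(min_sigma,
                    base_sigma * round_factor * (1.0 + sensitivity) * entropy_factor)
        noisy[name] = grad + sigma * torch.randn_like(grad)
    return noisy

if __name__ == "__main__":
    # Tiny synthetic example: fake per-layer gradients and batch logits.
    fake_grads = {"fc.weight": torch.randn(10, 32), "fc.bias": torch.randn(10)}
    fake_logits = torch.randn(8, 10)
    out = triple_adaptive_noise(fake_grads, fake_logits, round_idx=5, total_rounds=100)
    print({k: v.shape for k, v in out.items()})

Under these illustrative assumptions, noise decays as training converges, while layers with larger gradients and batches with higher predictive uncertainty receive more noise, mirroring the adaptivity described in the abstract.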
Keywords
Adaptive Noise, Differential Privacy, Edge Computing, Gradient Inversion, Privacy Utility
Presenter
Kundjanasith Thonglek
Instructor, Kasetsart University

Authors
Kundjanasith Thonglek, Kasetsart University
Kanathip Pandee, Kasetsart University
Ittidet Namlao, Kasetsart University
Patcharaphol Toopprasom, Kasetsart University
Important Dates
  • Conference dates: December 29–31, 2025
  • Initial manuscript submission deadline: November 30, 2025
  • Presentation submission deadline: December 30, 2025
  • Registration deadline: December 30, 2025

Hosted by
International Science Federation
Organized by
Zarqa University