Enhancing Edge Computing with Federated Learning: Privacy and Performance Trade-offs
Paper No.: 211
Access: Conference participants only
Updated: 2025-12-24 14:18:41
Extension type 2
Abstract
Federated learning (FL), which enables decentralized model training without compromising data privacy, has emerged as an important enabler of edge computing. This paper examines the application of FL in edge computing systems, with an emphasis on the trade-off between privacy and performance. FL supports compliance with privacy regulations by allowing distributed edge devices to learn collaboratively without exposing sensitive information. The decentralized approach nevertheless faces challenges, including high communication costs, model drift, and limited device resources. The research proposes a privacy-aware federated learning framework built on differential privacy and secure aggregation, designed to maximize data security while minimizing the loss in model accuracy. The framework is evaluated in real-world edge computing setups, showing a significant reduction in communication latency and improved model convergence rates. The findings indicate that incorporating privacy-preserving mechanisms has a negligible adverse effect on model performance, demonstrating that privacy and computational efficiency can be balanced in edge-based FL systems. The paper concludes with remarks on optimizing model aggregation methodologies and mitigating system heterogeneity to improve the scalability and robustness of edge intelligence applications.
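For readers unfamiliar with the mechanism the abstract refers to, the following is a minimal sketch of how differential privacy can be combined with federated averaging on the server side: client updates are clipped to bound their sensitivity, averaged, and perturbed with Gaussian noise. This is an illustrative sketch only; the functions clip_update and dp_aggregate, the parameters clip_norm and noise_multiplier, and the plain-NumPy model representation are assumptions for exposition and do not correspond to the authors' framework or their secure aggregation protocol.

import numpy as np

def clip_update(update, clip_norm=1.0):
    # Clip a client's model update to bound its L2 sensitivity.
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def dp_aggregate(client_updates, clip_norm=1.0, noise_multiplier=0.5, rng=None):
    # Average clipped client updates and add Gaussian noise (Gaussian mechanism).
    # The noise scale is tied to clip_norm / n, the sensitivity of the average
    # to any single client's contribution.
    rng = rng or np.random.default_rng()
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)

# Example: three edge clients send local updates; the server aggregates them
# without ever seeing any client's raw training data.
if __name__ == "__main__":
    updates = [np.random.randn(10) * 0.1 for _ in range(3)]
    new_global_delta = dp_aggregate(updates, clip_norm=1.0, noise_multiplier=0.5)
    print(new_global_delta)

In a full system, the noise_multiplier would be chosen to meet a target (epsilon, delta) privacy budget, and secure aggregation would hide individual clipped updates from the server; both are outside the scope of this sketch.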
Keywords
Federated Learning, Edge Computing, Privacy Preservation, Differential Privacy, Secure Aggregation, Model Convergence
Authors
Mridul Dixit
GLA University
Rakesh Kumar
GLA University