Conference Paper/Proceeding/Abstract

Gradients Stand-in for Defending Deep Leakage in Federated Learning

Freya Hu, Hans Ren, Chen Hu, Yiming Li, Jingjing Deng, Xianghua Xie

International Conference on Computing in Natural Sciences, Biomedicine and Engineering

Swansea University Authors: Freya Hu, Hans Ren, Chen Hu, Yiming Li, Xianghua Xie

  • 66608.pdf (PDF, 1.99MB) | Accepted Manuscript

    Author accepted manuscript released under the terms of a Creative Commons CC-BY licence, in accordance with the Swansea University Research Publications Policy (rights retention).


Published in: International Conference on Computing in Natural Sciences, Biomedicine and Engineering
Published: 2024
URI: https://cronfa.swan.ac.uk/Record/cronfa66608
Abstract: Federated Learning (FL) has become a cornerstone of privacy protection, shifting the paradigm towards localizing sensitive data while only sending model gradients to a central server. This strategy is designed to reinforce privacy protections and minimize the vulnerabilities inherent in centralized data storage systems. Despite its innovative approach, recent empirical studies have highlighted potential weaknesses in FL, notably regarding the exchange of gradients. In response, this study introduces a novel, efficacious method aimed at safeguarding against gradient leakage, namely, “AdaDefense”. Following the idea that model convergence can be achieved by using different types of optimization methods, we suggest using a local stand-in rather than the actual local gradient for global gradient aggregation on the central server. This proposed approach not only effectively prevents gradient leakage, but also ensures that the overall performance of the model remains largely unaffected. Delving into the theoretical dimensions, we explore how gradients may inadvertently leak private information and present a theoretical framework supporting the efficacy of our proposed method. Extensive empirical tests, supported by popular benchmark experiments, validate that our approach maintains model integrity and is robust against gradient leakage, marking an important step in our pursuit of safe and efficient FL.
Item Description: In Press
College: Faculty of Science and Engineering
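
Note: The abstract describes AdaDefense only at a high level — each client shares a gradient "stand-in" rather than its raw gradient, and the server aggregates those stand-ins as usual. The sketch below is a minimal, hypothetical illustration of that idea, assuming the stand-in is an Adam-style moment-normalized update; the names (local_standin, beta1, beta2, eps) are illustrative and not taken from the paper.

    import numpy as np

    def local_standin(grad, m, v, t, beta1=0.9, beta2=0.999, eps=1e-8):
        # Update running first- and second-moment estimates of the local gradient.
        m = beta1 * m + (1.0 - beta1) * grad
        v = beta2 * v + (1.0 - beta2) * grad ** 2
        # Bias-correct the estimates (t is the 1-based local step counter).
        m_hat = m / (1.0 - beta1 ** t)
        v_hat = v / (1.0 - beta2 ** t)
        # The moment-normalized vector is what the client would share with the
        # server instead of the raw gradient.
        return m_hat / (np.sqrt(v_hat) + eps), m, v

    # Server side (unchanged): aggregate received stand-ins exactly as it would
    # aggregate gradients, e.g.
    #   global_update = np.mean(np.stack(client_standins), axis=0)

Because such a stand-in preserves the descent direction while normalizing gradient magnitudes, server-side aggregation can proceed unchanged; whether this matches the paper's exact construction would need to be confirmed against the full text.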