FedPD: Defending federated prototype learning against backdoor attacks

Neural Netw. 2024 Dec 10:184:107016. doi: 10.1016/j.neunet.2024.107016. Online ahead of print.

Abstract

Federated Learning (FL) is an efficient, distributed machine learning paradigm that enables multiple clients to jointly train high-performance deep learning models while keeping their training data local. However, because of its distributed nature, malicious clients can manipulate the predictions of the trained model through backdoor attacks. Existing defense methods incur significant computational and communication overhead during the training or testing phases, which limits their practicality in resource-constrained scenarios, and they are ill-suited to the Non-IID data distributions typical of general FL settings. To address these challenges, we propose the FedPD framework, in which the server and clients exchange prototypes rather than model parameters. This prevents malicious clients from implanting backdoor channels during FL training, eliminating backdoor attacks at the source while also significantly reducing communication overhead. Additionally, the prototypes serve as global knowledge that corrects clients' local training. Experiments and performance analysis show that FedPD achieves superior and consistent defense performance against backdoor attacks compared with existing representative approaches. In specific scenarios, FedPD reduces the attack success rate by 90.73% relative to FedAvg without defense, while keeping main-task accuracy above 90%.

Keywords: Backdoor attacks; Federated learning; Non-IID data; Prototypical networks.
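
The core mechanism described in the abstract can be pictured with a minimal Python sketch: each client summarizes its local data as per-class mean embeddings (prototypes), the server averages those prototypes across clients, and the resulting global prototypes regularize subsequent local training. The function names and the squared-distance regularizer below are illustrative assumptions for exposition, not the paper's actual implementation.

    # Illustrative sketch of one prototype-exchange round (names are
    # assumptions, not the paper's API). Clients send only per-class mean
    # embeddings, never model weights, so a poisoned local model has no
    # channel through which to implant a backdoor in the global aggregate.
    import numpy as np

    def local_prototypes(embeddings, labels, num_classes):
        """Client side: per-class mean of local feature embeddings."""
        protos = {}
        for c in range(num_classes):
            mask = labels == c
            if mask.any():
                protos[c] = embeddings[mask].mean(axis=0)
        return protos

    def aggregate_prototypes(client_protos, num_classes):
        """Server side: average each class prototype over the clients
        that contributed one."""
        global_protos = {}
        for c in range(num_classes):
            contribs = [p[c] for p in client_protos if c in p]
            if contribs:
                global_protos[c] = np.mean(contribs, axis=0)
        return global_protos

    def prototype_loss(embeddings, labels, global_protos):
        """Assumed regularizer: pull local embeddings toward the global
        prototypes, so global knowledge corrects local training."""
        dists = [np.sum((e - global_protos[y]) ** 2)
                 for e, y in zip(embeddings, labels) if y in global_protos]
        return float(np.mean(dists)) if dists else 0.0

Under these assumptions, a prototype is far smaller than a full parameter vector, which is consistent with the abstract's claim of reduced communication overhead; the exact aggregation weighting and loss used by FedPD are specified in the paper itself.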