FedClean: a defense mechanism against parameter poisoning attacks in federated learning
Kumar, Abhishek; Khimani, Vivek; Chatzopoulos, Dimitris; Hui, Pan (2022-04-27)
A. Kumar, V. Khimani, D. Chatzopoulos and P. Hui, "FedClean: A Defense Mechanism against Parameter Poisoning Attacks in Federated Learning," ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022, pp. 4333-4337, doi: 10.1109/ICASSP43922.2022.9747497.
© 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
https://rightsstatements.org/vocab/InC/1.0/
https://urn.fi/URN:NBN:fi-fe2022101161615
Abstract
In Federated Learning (FL) systems, a centralized entity (the server) does not have access to the training data; instead, it receives model parameter updates computed independently by each participant, based solely on that participant's own samples. Unfortunately, FL is susceptible to model poisoning attacks, in which malicious or malfunctioning entities share polluted updates that can compromise the model's accuracy. In this study, we propose FedClean, an FL mechanism that is robust to model poisoning attacks. The accuracy of models trained with the assistance of FedClean is close to that achieved when no malicious entities participate.
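The abstract does not detail FedClean's mechanism, but the failure mode it defends against can be illustrated with a minimal sketch: under plain federated averaging, the server simply averages the updates it receives, so a single poisoned update can pull the aggregate far from the honest consensus. The client counts and update values below are hypothetical, chosen only for illustration.

```python
import numpy as np

def fedavg(updates):
    """Plain federated averaging: the server averages parameter updates."""
    return np.mean(updates, axis=0)

# Hypothetical scenario: nine honest clients report updates near the true
# gradient, while one malicious client submits a large poisoned update.
rng = np.random.default_rng(0)
honest = [np.array([1.0, -2.0]) + rng.normal(scale=0.1, size=2)
          for _ in range(9)]
poisoned = [np.array([100.0, 100.0])]  # polluted update from one attacker

clean_avg = fedavg(honest)
attacked_avg = fedavg(honest + poisoned)

# A single attacker drags the aggregate far from the honest consensus --
# the vulnerability a defense such as FedClean is designed to mitigate.
print(clean_avg)
print(attacked_avg)
```

Running the sketch shows the attacked average shifted by roughly a tenth of the attacker's displacement (one poisoned update among ten), demonstrating why naive averaging is not robust.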