Paper published in a book (Scientific congresses, symposiums and conference proceedings)
An Analysis of Byzantine-Tolerant Aggregation Mechanisms on Model Poisoning in Federated Learning
ROSZEL, Mary; NORVILL, Robert; STATE, Radu
2022. In Torra, Vicenç (Ed.), Modeling Decisions for Artificial Intelligence - 19th International Conference, MDAI 2022, Proceedings
Peer reviewed
 

Files


Full Text
MDAI_Federated_Defenses-3.pdf
Author postprint (15.12 MB)
Details



Keywords :
Aggregation mechanism; Aggregation methods; Attack scenarios; Backdoors; Fixed frequency; Learning settings; Machine learning models; Malicious participant; Model updates; Poisoning attacks; Theoretical Computer Science; Computer Science (all)
Abstract :
[en] Federated learning is a distributed setting in which multiple participants jointly train a machine learning model without exchanging data. Recent work has found that federated learning is vulnerable to backdoor model poisoning attacks, in which an attacker leverages this unique environment to submit malicious model updates. To counter such malicious participants, several Byzantine-Tolerant aggregation methods have been applied to the federated learning setting, including Krum, Multi-Krum, RFA, and Norm-Difference Clipping. In this work, we analyze the effectiveness and limits of each aggregation method and provide a thorough analysis of their success in various fixed-frequency attack settings. Further, we analyze the impact of such aggregation methods on the fairness of the model's performance on its intended tasks. Our results indicate that only one defense can successfully mitigate attacks in all attack scenarios, but a significant fairness issue is observed, highlighting the difficulty of preventing malicious attacks in a federated setting.
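For context on the defenses named in the abstract, a minimal Python/NumPy sketch of the Krum selection rule follows. It is an illustrative reconstruction, not code from the paper; the function name krum, the (n, d) matrix of flattened client updates, and the Byzantine bound f are assumptions introduced here.

    import numpy as np

    def krum(updates, f):
        """Select the single client update with the lowest Krum score.

        updates: array of shape (n, d), one flattened model update per client.
        f: assumed upper bound on the number of Byzantine (malicious) clients.
        """
        n = updates.shape[0]
        k = n - f - 2  # each score sums distances to the k nearest other updates
        if k < 1:
            raise ValueError("Krum requires n >= f + 3")
        # Pairwise squared Euclidean distances between all submitted updates.
        diffs = updates[:, None, :] - updates[None, :, :]
        dists = (diffs ** 2).sum(axis=-1)
        np.fill_diagonal(dists, np.inf)  # ignore self-distances
        # Krum score of update i: sum of its k smallest distances to other updates.
        scores = np.sort(dists, axis=1)[:, :k].sum(axis=1)
        return updates[np.argmin(scores)]

Multi-Krum applies the same scoring but averages the m lowest-scoring updates instead of selecting a single one.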
Disciplines :
Computer science
Author, co-author :
ROSZEL, Mary ; University of Luxembourg
NORVILL, Robert ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > SEDAN
STATE, Radu ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > SEDAN
External co-authors :
no
Language :
English
Title :
An Analysis of Byzantine-Tolerant Aggregation Mechanisms on Model Poisoning in Federated Learning
Publication date :
30 August 2022
Event name :
Modeling Decisions for Artificial Intelligence - 19th International Conference, MDAI 2022
Event place :
Sant Cugat, Spain
Event date :
30 August 2022 to 02 September 2022
Audience :
International
Main work title :
Modeling Decisions for Artificial Intelligence - 19th International Conference, MDAI 2022, Proceedings
Editor :
Torra, Vicenç
Publisher :
Springer Science and Business Media Deutschland GmbH
ISBN/EAN :
978-3-031-13447-0
Peer reviewed :
Peer reviewed
Available on ORBilu :
since 01 March 2024

Statistics

Number of views: 66 (1 by Unilu)
Number of downloads: 1 (1 by Unilu)

Scopus citations®: 4
Scopus citations® (without self-citations): 4
OpenAlex citations: 2
WoS citations: 2
