Article (Scientific journals)
FedPref: Federated Learning Across Heterogeneous Multi-objective Preferences
HARTMANN, Lena Maria; DANOY, Grégoire; BOUVRY, Pascal
2025, in ACM Transactions on Modeling and Performance Evaluation of Computing Systems
Peer reviewed
 

Files


Full Text: ToMPECS_FedPref_final.pdf (Author preprint, 21.13 MB, Creative Commons Attribution License)
Details



Keywords :
Federated Learning; Federated Multi-objective Learning; Multi-objective Learning; Heterogeneous Federated Learning; Personalized Federated Learning
Abstract :
[en] The Federated Learning (FL) paradigm is a distributed machine learning strategy, developed for settings where training data is owned by distributed devices and cannot be shared with others. Federated Learning circumvents this constraint by carrying out model training in a distributed fashion, so that each participant, or client, trains a local model only on its own data. The parameters of these local models are shared intermittently among participants and aggregated to enhance model accuracy. This strategy has shown impressive success and has been rapidly adopted by industry in efforts to overcome confidentiality and resource constraints in model training. However, the application of FL to real-world settings brings additional challenges, many associated with heterogeneity between participants. Research into mitigating these difficulties in Federated Learning has largely focused on only two particular types of heterogeneity: the unbalanced distribution of training data, and differences in client resources. Yet many more types of heterogeneity exist, and some are becoming increasingly relevant as the capability of FL expands to cover ever more complex real-world problems, from the tuning of large language models to enabling machine learning on edge devices. In this work, we discuss a novel type of heterogeneity that is likely to become increasingly relevant in future applications: preference heterogeneity, which emerges when clients learn under multiple objectives, with different importance assigned to each objective on different clients. We discuss the implications of this type of heterogeneity and propose FedPref, a first algorithm designed to facilitate personalised federated learning in this setting. We demonstrate the effectiveness of the algorithm across several different problems, preference distributions and model architectures.
In addition, we introduce a new analytical point of view, based on multi-objective metrics, for evaluating the performance of federated algorithms in this setting beyond the traditional client-focused metrics. We perform a second experimental analysis based on this view, and show that FedPref outperforms the compared algorithms.
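The abstract describes the standard federated loop (local training on private data, intermittent parameter aggregation) and the preference-heterogeneous setting in which each client weights multiple objectives differently. The sketch below is purely illustrative and is not the FedPref algorithm from the article: it combines a preference-weighted multi-objective local loss with plain FedAvg-style parameter averaging on a toy linear model. All function names, the linear model, and the learning-rate settings are assumptions made for this example.

```python
import numpy as np

def local_update(w, X, y_objs, prefs, lr=0.1, steps=50):
    """One client's local training on its private data.

    The client minimizes a preference-weighted sum of per-objective
    squared losses: L(w) = sum_k prefs[k] * ||X w - y_objs[k]||^2 / n.
    """
    n = X.shape[0]
    for _ in range(steps):
        grad = np.zeros_like(w)
        for pk, yk in zip(prefs, y_objs):
            grad += pk * 2.0 * X.T @ (X @ w - yk) / n  # gradient of one weighted objective
        w = w - lr * grad
    return w

def federated_round_loop(global_w, clients, rounds=10):
    """FedAvg-style aggregation: each round, every client trains locally
    from the current global model, then the server averages the results."""
    for _ in range(rounds):
        local_ws = [local_update(global_w.copy(), X, y_objs, prefs)
                    for (X, y_objs, prefs) in clients]
        global_w = np.mean(local_ws, axis=0)  # simple unweighted average
    return global_w

# Toy setup: two clients share the same two objectives but weight them
# oppositely (0.9/0.1 vs. 0.1/0.9) -- the preference heterogeneity the
# abstract describes.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
w_a, w_b = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
y_objs = [X @ w_a, X @ w_b]
clients = [(X, y_objs, np.array([0.9, 0.1])),
           (X, y_objs, np.array([0.1, 0.9]))]
global_w = federated_round_loop(np.zeros(3), clients, rounds=5)
```

Note the tension this toy makes visible: each client's preferred solution is a different convex combination of `w_a` and `w_b`, so a single averaged global model sits between them and satisfies neither preference exactly, which is precisely why personalised approaches such as the one proposed in the article are needed.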
Disciplines :
Computer science
Author, co-author :
HARTMANN, Lena Maria  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > PCOG
DANOY, Grégoire  ;  University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
BOUVRY, Pascal ;  University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
External co-authors :
no
Language :
English
Title :
FedPref: Federated Learning Across Heterogeneous Multi-objective Preferences
Publication date :
2025
Journal title :
ACM Transactions on Modeling and Performance Evaluation of Computing Systems
Special issue title :
Special Issue on Performance Evaluation of Federated Learning Systems
Peer reviewed :
Peer reviewed
Focus Area :
Security, Reliability and Trust
Name of the research project :
U-AGR-8025 - ILNAS PC2 - BOUVRY Pascal
Available on ORBilu :
since 18 December 2024

Statistics

Number of views: 161 (17 by Unilu)
Number of downloads: 92 (5 by Unilu)
Scopus citations®: 2 (2 without self-citations)
OpenCitations: 0
OpenAlex citations: 1
