Abstract:
Federated Learning (FL) is a distributed machine learning paradigm developed for settings where training data is held by distributed devices and cannot be shared with others. FL circumvents this constraint by distributing model training: each participant, or client, trains a local model only on its own data. The parameters of these local models are shared intermittently among participants and aggregated to improve model accuracy. This strategy has proven highly successful and has been rapidly adopted by industry to overcome confidentiality and resource constraints in model training.
However, applying FL to real-world settings brings additional challenges, many associated with heterogeneity between participants. Research into mitigating these difficulties has largely focused on two particular types of heterogeneity: the unbalanced distribution of training data and differences in client resources. Yet many more types of heterogeneity exist, and some are becoming increasingly relevant as FL expands to ever more complex real-world problems, from tuning large language models to enabling machine learning on edge devices. In this work, we discuss a novel type of heterogeneity that is likely to become increasingly relevant in future applications: preference heterogeneity, which emerges when clients learn under multiple objectives, with different importance assigned to each objective on different clients.
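As an illustrative formalisation (the notation here is ours, not necessarily that of the paper): each client $i$ can be viewed as minimising a preference-weighted scalarisation $L_i(\theta) = \sum_{k=1}^{K} w_{i,k}\, \ell_k(\theta)$ of $K$ shared objectives $\ell_1, \dots, \ell_K$, where the preference vector $w_i$ lies on the probability simplex and differs from client to client.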
We discuss the implications of this type of heterogeneity and propose FedPref, a first algorithm designed to facilitate personalised federated learning in this setting. We demonstrate the effectiveness of the algorithm across several different problems, preference distributions and model architectures. In addition, we introduce a new analytical perspective, based on multi-objective metrics, for evaluating the performance of federated algorithms in this setting beyond the traditional client-focused metrics. We perform a second experimental analysis from this perspective and show that FedPref outperforms the compared algorithms.