Posterior Estimation in Federated Learning

Thesis Project created by Martin Gullbrandson
Thesis written at Chalmers in collaboration with Scaleout Systems and AI Sweden.


Federated learning is an emerging field within machine learning that enables learning on distributed data. The idea is to train models locally on edge nodes and to aggregate these local models globally across a graph of edge nodes. Doing so both increases end-user privacy and decreases communication costs, since raw data is never transmitted. With the data kept local, the learning task is considerably harder than in the centralized case. It is further complicated because data is usually not independent and identically distributed (non-IID) across edge nodes.
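The local-training-then-global-aggregation loop described above can be sketched with the standard FedAvg aggregation step, where client models are combined by a sample-size weighted average. This is a minimal illustration, not code from the thesis; the function name and the flat parameter vectors are simplifying assumptions (real systems aggregate per-layer tensors over many communication rounds).

```python
import numpy as np

def fedavg(local_params, num_samples):
    """Aggregate local model parameters by a sample-size weighted average
    (the FedAvg aggregation step). Each client's model is flattened into
    one parameter vector for simplicity."""
    weights = np.asarray(num_samples, dtype=float)
    weights /= weights.sum()                 # normalize client weights
    stacked = np.stack(local_params)         # shape: (num_clients, num_params)
    return weights @ stacked                 # weighted average over clients

# Example: three edge nodes with differing amounts of local data
local_models = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([2.0, 2.0])]
global_model = fedavg(local_models, num_samples=[10, 30, 60])
```

Clients holding more data pull the global model more strongly, which is exactly where the non-IID difficulty enters: a heavily weighted client with skewed data skews the average.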

This thesis explores how a Bayesian perspective can account for the inherent model variance when aggregating locally trained models, specifically in non-IID environments. We do this by evaluating algorithms that exploit a probabilistic perspective, such as FedPA, and comparing them to a standard algorithm, FedAvg. Furthermore, we propose a novel lightweight approach that uses kernel density estimation to estimate a posterior distribution and mean-shift for model inference. We call the algorithm federated kernel posterior (FedKP) and show that it outperforms the other algorithms on benchmark examples, while requiring no extra information and no hyperparameter optimization.
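The FedKP idea, as described above, combines two classical ingredients: a kernel density estimate over the clients' parameter vectors as a posterior surrogate, and mean-shift to locate a mode of that density as the global model. The sketch below is a hypothetical illustration of that combination, not the thesis implementation; the function name, the Gaussian kernel, the fixed bandwidth, and starting mean-shift from the FedAvg average are all assumptions made for the example.

```python
import numpy as np

def fedkp_aggregate(client_params, bandwidth=1.0, n_iter=100, tol=1e-8):
    """Sketch of a federated-kernel-posterior style aggregation:
    treat each client's parameter vector as a sample from the posterior,
    form a Gaussian kernel density estimate over those samples, and run
    mean-shift from their average to find a posterior mode."""
    X = np.stack(client_params)              # (num_clients, num_params)
    x = X.mean(axis=0)                       # start at the FedAvg solution
    for _ in range(n_iter):
        # Gaussian kernel weight of each client sample w.r.t. current point
        d2 = ((X - x) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))
        x_new = (w[:, None] * X).sum(axis=0) / w.sum()   # mean-shift update
        if np.linalg.norm(x_new - x) < tol:              # converged to a mode
            break
        x = x_new
    return x

# Example: three clients agree closely, one is an outlier; mean-shift
# settles near the dense cluster instead of being dragged by the outlier.
clients = [np.array([0.0, 0.0]), np.array([0.1, 0.0]),
           np.array([0.05, 0.05]), np.array([5.0, 5.0])]
mode = fedkp_aggregate(clients, bandwidth=0.5)
```

Compared with a plain average, the mode of the kernel density downweights outlying client models, which suggests why a posterior-mode view can help under non-IID data.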

