Personalized Neural Architecture Search for Federated Learning
Federated Learning (FL) is a recently proposed learning paradigm that enables decentralized devices to collaboratively train a predictive model without exchanging private data. Existing FL frameworks, however, assume a one-size-fits-all model architecture that is collectively trained by local devices and fixed before any of their data is observed. Even with good engineering acumen, this often breaks down when local tasks differ and call for diverging architectural choices to learn effectively. This motivates us to develop a novel personalized neural architecture search (NAS) algorithm for FL. Our algorithm, FedPNAS, learns a base architecture that can be structurally personalized for quick adaptation to each local task. We empirically show that FedPNAS significantly outperforms other NAS and FL benchmarks on several real-world datasets.
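The split between a collectively trained base and per-client structural personalization can be sketched roughly as follows. This is a minimal illustration, not the paper's actual method: the `Client`, `mix`, and `fedavg` names are hypothetical, and a softmax mixture over candidate operations (as in differentiable NAS) stands in for whatever search mechanism FedPNAS uses. The key idea shown is that operation weights are averaged across clients while architecture parameters stay local.

```python
import numpy as np

rng = np.random.default_rng(0)

def mix(op_outputs, alpha):
    # Softmax over architecture logits -> weighted sum of candidate-op outputs.
    w = np.exp(alpha - alpha.max())
    w /= w.sum()
    return sum(wi * out for wi, out in zip(w, op_outputs))

class Client:
    """Hypothetical FL client: shared op weights + personalized arch logits."""
    def __init__(self, n_ops, dim):
        self.W = rng.normal(size=(n_ops, dim))  # candidate-op weights, federated
        self.alpha = np.zeros(n_ops)            # architecture logits, kept local

    def forward(self, x):
        # Each candidate op here is just elementwise scaling, for illustration.
        return mix([w * x for w in self.W], self.alpha)

def fedavg(clients):
    # Server step: average ONLY the shared operation weights.
    # Architecture logits (alpha) are never aggregated, so each client
    # keeps its own structural personalization.
    W_avg = np.mean([c.W for c in clients], axis=0)
    for c in clients:
        c.W = W_avg.copy()

clients = [Client(n_ops=3, dim=4) for _ in range(2)]
clients[0].alpha[0] = 1.0  # client 0 personalizes its architecture locally
fedavg(clients)
```

After the averaging step, both clients hold identical operation weights, while client 0's architecture logits remain distinct from client 1's — a toy analogue of a base architecture shared across devices but structurally adapted per task.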