Federated Inference through Aligning Local Representations and Learning a Consensus Graph

29 Sep 2021  ·  Tengfei Ma, Trong Nghia Hoang, Jie Chen

Machine learning faces many data challenges in practice. A notable barrier is that data are distributed and sharing them is unrealistic for volume and privacy reasons. Federated learning is a recent formalism that tackles this challenge by letting data owners jointly develop a common model while using it separately. In this work, we consider a less addressed scenario in which a datum consists of multiple parts, each belonging to a separate owner. In this scenario, joint efforts are required not only in learning but also in inference. We study federated inference, where each data owner learns its own model that captures local data characteristics and copes with data heterogeneity. On top of the local models sits a federation of the local data representations, which performs global inference by incorporating all distributed parts collectively. To strengthen this local-global framework, we propose aligning the data representations, which are ambiguous because of the arbitrary arrangement of neurons in the local neural network models, and learning a consensus graph among data owners in the global model to improve performance. We demonstrate the effectiveness of the proposed framework on four real-life data sets, including power grid systems and traffic networks.
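The local-global structure described in the abstract can be illustrated with a short sketch. The code below, assuming a PyTorch implementation, shows how per-owner encoders, a per-owner alignment map, and a learnable consensus graph over owners could fit together. The module names, dimensions, and the specific parameterizations (a learned linear alignment and a softmax-normalized adjacency among owners) are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch of a local-global federated inference setup (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalModel(nn.Module):
    """Each data owner encodes its own part of a datum into a representation."""
    def __init__(self, in_dim, rep_dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, rep_dim)
        )
        # Hypothetical alignment map: a learned linear transform meant to
        # compensate for the arbitrary arrangement of neurons across
        # independently trained local models.
        self.align = nn.Linear(rep_dim, rep_dim, bias=False)

    def forward(self, x_part):
        return self.align(self.encoder(x_part))


class GlobalModel(nn.Module):
    """Federates aligned local representations via a learned consensus graph."""
    def __init__(self, num_owners, rep_dim, out_dim):
        super().__init__()
        # Learnable consensus graph among data owners (softmax-normalized rows).
        self.adj_logits = nn.Parameter(torch.zeros(num_owners, num_owners))
        self.readout = nn.Linear(rep_dim, out_dim)

    def forward(self, reps):                              # reps: (batch, owners, rep_dim)
        adj = F.softmax(self.adj_logits, dim=-1)          # consensus graph weights
        mixed = torch.einsum('ij,bjd->bid', adj, reps)    # propagate over the graph
        return self.readout(mixed.mean(dim=1))            # pooled global prediction


# Usage: three owners, each holding a different 10-dimensional part of every datum.
owners = nn.ModuleList([LocalModel(10, 16) for _ in range(3)])
global_model = GlobalModel(num_owners=3, rep_dim=16, out_dim=1)

parts = [torch.randn(8, 10) for _ in range(3)]            # one batch, split across owners
reps = torch.stack([m(p) for m, p in zip(owners, parts)], dim=1)
pred = global_model(reps)                                 # federated inference output
```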
