Consensus Driven Learning

20 May 2020 · Kyle Crandall, Dustin Webb

As the complexity of our neural network models grows, so too do the data and computation requirements for successful training. One proposed solution to this problem is to train on a distributed network of computational devices, thereby spreading the computational and data storage loads. This strategy has already seen some adoption by Google and other companies. In this paper we propose a new method of distributed, decentralized learning that allows a network of computation nodes to coordinate their training using asynchronous updates over an unreliable network, with each node having access only to its local dataset. This is achieved by taking inspiration from Distributed Averaging Consensus algorithms to coordinate the various nodes. Sharing the internal model instead of the training data allows the original raw data to remain with the computation node. The asynchronous nature and lack of centralized coordination allow this paradigm to function with limited communication requirements. We demonstrate our method on the MNIST, Fashion MNIST, and CIFAR10 datasets. We show that our coordination method allows models to be learned on highly biased datasets and in the presence of intermittent communication failures.
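To make the general idea concrete, the following is a minimal sketch of consensus-style decentralized training, not the authors' exact algorithm. It assumes a hypothetical Node class where each node runs ordinary gradient steps on its private data and, whenever messages happen to arrive, nudges its parameters toward the average of its neighbors' most recently received parameters (a standard distributed-averaging consensus update). Only parameter vectors are exchanged; raw data never leaves the node.

```python
import numpy as np

# Hypothetical sketch (not the paper's exact method): asynchronous,
# gossip-style consensus over model parameters.

class Node:
    def __init__(self, weights, local_data, step_size=0.1, consensus_rate=0.5):
        self.w = weights.copy()        # local copy of the model parameters
        self.data = local_data         # node-private dataset, never shared
        self.step_size = step_size
        self.consensus_rate = consensus_rate
        self.neighbor_w = {}           # latest parameters received from peers

    def local_gradient(self):
        # Placeholder loss: linear least squares on the local data (x, y).
        x, y = self.data
        return 2 * x.T @ (x @ self.w - y) / len(y)

    def train_step(self):
        # Ordinary gradient step using only the node's own data.
        self.w -= self.step_size * self.local_gradient()

    def receive(self, sender_id, weights):
        # Asynchronous message handler: store whatever arrives, whenever.
        self.neighbor_w[sender_id] = weights

    def consensus_step(self):
        # Distributed-averaging consensus: move toward neighbors' parameters.
        if not self.neighbor_w:
            return                     # tolerate dropped or delayed messages
        avg = np.mean(list(self.neighbor_w.values()), axis=0)
        self.w += self.consensus_rate * (avg - self.w)
```

Because a node simply skips the consensus step when no messages have arrived, a scheme of this shape degrades gracefully under intermittent communication failure, which matches the behavior the abstract highlights.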
