Search Results for author: Jianmin Chen

Found 3 papers, 2 papers with code

Revisiting Distributed Synchronous SGD

no code implementations • 19 Feb 2017 • Xinghao Pan, Jianmin Chen, Rajat Monga, Samy Bengio, Rafal Jozefowicz

Distributed training of deep learning models on large-scale training data is typically conducted with asynchronous stochastic optimization to maximize the rate of updates, at the cost of additional noise introduced from asynchrony.

Stochastic Optimization

Revisiting Distributed Synchronous SGD

4 code implementations • 4 Apr 2016 • Jianmin Chen, Xinghao Pan, Rajat Monga, Samy Bengio, Rafal Jozefowicz

Distributed training of deep learning models on large-scale training data is typically conducted with asynchronous stochastic optimization to maximize the rate of updates, at the cost of additional noise introduced from asynchrony.

Stochastic Optimization
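The abstract above contrasts asynchronous updates, which introduce staleness noise, with a synchronous alternative. As a rough illustration only (not the authors' TensorFlow implementation), the sketch below simulates synchronous data-parallel SGD on a toy linear-regression problem: every worker computes a gradient from the same parameter snapshot, the gradients are averaged, and a single update is applied per step. The worker count, toy data, and function names are assumptions for illustration.

```python
# Minimal sketch (not the paper's code): synchronous data-parallel SGD.
# Each simulated worker computes a gradient on its own data shard; the
# gradients are averaged and one update is applied, so every step uses
# gradients taken from the same parameter snapshot.
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression data split across workers (illustrative only).
num_workers, n_per_worker, dim = 4, 256, 10
true_w = rng.normal(size=dim)
shards = []
for _ in range(num_workers):
    X = rng.normal(size=(n_per_worker, dim))
    y = X @ true_w + 0.1 * rng.normal(size=n_per_worker)
    shards.append((X, y))

def worker_gradient(w, shard):
    """Gradient of mean squared error on one worker's shard."""
    X, y = shard
    residual = X @ w - y
    return X.T @ residual / len(y)

w = np.zeros(dim)
lr = 0.1
for step in range(100):
    # Synchronous step: collect a gradient from every worker, then
    # average and apply a single update.
    grads = [worker_gradient(w, shard) for shard in shards]
    w -= lr * np.mean(grads, axis=0)

print("parameter error:", np.linalg.norm(w - true_w))
```

Averaging gradients in lockstep avoids the noise from stale, asynchronous updates mentioned in the abstract, at the cost of waiting for the slowest worker in each step.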
