Stochastic Optimization

81 papers with code · Methodology

Greatest papers with code

Revisiting Distributed Synchronous SGD

4 Apr 2016 · tensorflow/models

Distributed training of deep learning models on large-scale training data is typically conducted with asynchronous stochastic optimization to maximize the rate of updates, at the cost of additional noise introduced from asynchrony.

STOCHASTIC OPTIMIZATION
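
A rough single-process sketch of the synchronous alternative the paper revisits: every worker's gradient is aggregated before a single update is applied, so no stale gradients from asynchrony enter the model. The toy objective, data, and function names below are illustrative assumptions, not the paper's setup.

import numpy as np

def worker_gradient(params, batch):
    # toy least-squares gradient computed on one worker's mini-batch
    X, y = batch
    return 2.0 * X.T @ (X @ params - y) / len(y)

def synchronous_sgd_step(params, batches, lr=0.1):
    grads = [worker_gradient(params, b) for b in batches]  # in practice: an all-reduce across workers
    return params - lr * np.mean(grads, axis=0)             # one update from the averaged gradient

rng = np.random.default_rng(0)
params = np.zeros(3)
batches = [(rng.normal(size=(8, 3)), rng.normal(size=8)) for _ in range(4)]
for _ in range(100):
    params = synchronous_sgd_step(params, batches)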

Adaptive Gradient Methods with Dynamic Bound of Learning Rate

ICLR 2019 · Luolc/AdaBound

Adaptive optimization methods such as AdaGrad, RMSProp and Adam have been proposed to achieve a rapid training process with an element-wise scaling term on learning rates.

STOCHASTIC OPTIMIZATION
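
To illustrate the dynamic bound idea, the NumPy sketch below takes an Adam-style step but clips the element-wise step size into bounds that tighten toward a fixed SGD-like rate as training proceeds. The bound schedule and hyperparameters are simplified assumptions rather than the authors' implementation.

import numpy as np

def adabound_step(w, g, m, v, t, lr=1e-3, final_lr=0.1,
                  beta1=0.9, beta2=0.999, gamma=1e-3, eps=1e-8):
    # Adam-style moment estimates with bias correction
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    step = lr / (np.sqrt(v_hat) + eps)             # element-wise adaptive step size
    lower = final_lr * (1 - 1 / (gamma * t + 1))   # bounds converge toward final_lr
    upper = final_lr * (1 + 1 / (gamma * t))
    w = w - np.clip(step, lower, upper) * m_hat    # dynamic bound on the learning rate
    return w, m, v

w, m, v = np.zeros(2), np.zeros(2), np.zeros(2)
for t in range(1, 501):                            # t starts at 1
    g = 2 * (w - np.array([3.0, -1.0]))            # toy quadratic gradient
    w, m, v = adabound_step(w, g, m, v, t)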

Lookahead Optimizer: k steps forward, 1 step back

19 Jul 2019 · rwightman/pytorch-image-models

The vast majority of successful deep neural networks are trained using variants of stochastic gradient descent (SGD) algorithms.

IMAGE CLASSIFICATION · MACHINE TRANSLATION · STOCHASTIC OPTIMIZATION
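
The "k steps forward, 1 step back" rule itself fits in a few lines: an inner optimizer takes k fast steps, then the slow weights move a fraction alpha toward the fast weights, which are reset to the result. The plain-SGD inner loop and toy quadratic below are illustrative assumptions.

import numpy as np

def lookahead(grad_fn, w0, k=5, alpha=0.5, inner_lr=0.1, outer_steps=200):
    slow = np.array(w0, dtype=float)
    for _ in range(outer_steps):
        fast = slow.copy()
        for _ in range(k):                    # k steps forward with the inner optimizer
            fast -= inner_lr * grad_fn(fast)
        slow += alpha * (fast - slow)         # 1 step back: interpolate slow toward fast
    return slow

grad_fn = lambda w: 2 * (w - np.array([1.0, -2.0]))   # toy quadratic, minimum at [1, -2]
print(lookahead(grad_fn, w0=[5.0, 5.0]))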

SGDR: Stochastic Gradient Descent with Warm Restarts

13 Aug 2016 · rwightman/pytorch-image-models

Partial warm restarts are also gaining popularity in gradient-based optimization as a way to improve the rate of convergence of accelerated gradient schemes on ill-conditioned functions.

EEG · STOCHASTIC OPTIMIZATION
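
For reference, a compact version of the schedule the title refers to, assuming the usual cosine annealing between lr_max and lr_min with restart periods that grow by a factor t_mult (all names and constants are illustrative):

import math

def sgdr_lr(step, lr_min=1e-5, lr_max=0.1, t_0=10, t_mult=2):
    t_cur, t_i = step, t_0
    while t_cur >= t_i:           # locate the position inside the current restart cycle
        t_cur -= t_i
        t_i *= t_mult
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t_cur / t_i))

schedule = [sgdr_lr(s) for s in range(70)]   # warm restarts at steps 10, 30, ...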

On the Variance of the Adaptive Learning Rate and Beyond

8 Aug 2019 · LiyuanLucasLiu/RAdam

The learning rate warmup heuristic achieves remarkable success in stabilizing training, accelerating convergence and improving generalization for adaptive stochastic optimization algorithms like RMSprop and Adam.

IMAGE CLASSIFICATION · LANGUAGE MODELLING · MACHINE TRANSLATION · STOCHASTIC OPTIMIZATION
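
The warmup heuristic mentioned here is simply a ramp applied to the base learning rate for the first few thousand updates; the sketch below shows that heuristic (RAdam, the paper's method, replaces it with a variance rectification term). The constants are illustrative.

def warmup_lr(step, base_lr=1e-3, warmup_steps=2000):
    # Linear warmup: small steps early on, while Adam/RMSprop variance estimates are still noisy
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr                      # afterwards, hand over to the usual decay schedule

lrs = [warmup_lr(s) for s in range(5000)]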

Stochastic Gradient Methods with Layer-wise Adaptive Moments for Training of Deep Networks

27 May 2019 · NVIDIA/OpenSeq2Seq

We propose NovoGrad, an adaptive stochastic gradient descent method with layer-wise gradient normalization and decoupled weight decay.

STOCHASTIC OPTIMIZATION
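
A rough NumPy sketch of the two ingredients named here: a per-layer (scalar) second moment that normalizes each layer's gradient, and weight decay that is decoupled from the gradient statistics. It is simplified relative to the paper, and the hyperparameters are illustrative.

import numpy as np

def novograd_layer_step(w, g, m, v, lr=0.01, beta1=0.95, beta2=0.98,
                        wd=1e-3, eps=1e-8):
    v = beta2 * v + (1 - beta2) * float(np.sum(g * g))   # one scalar second moment per layer
    g_hat = g / (np.sqrt(v) + eps) + wd * w              # layer-wise normalization + decoupled weight decay
    m = beta1 * m + g_hat                                # momentum on the normalized gradient
    return w - lr * m, m, v

w, m, v = np.ones(4), np.zeros(4), 0.0
for _ in range(300):
    g = 2 * (w - 0.5)                                    # toy gradient for one "layer"
    w, m, v = novograd_layer_step(w, g, m, v)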

Greedy Step Averaging: A parameter-free stochastic optimization method

11 Nov 2016 · TalkingData/Fregata

In this paper we present the greedy step averaging (GSA) method, a parameter-free stochastic optimization algorithm for a variety of machine learning problems.

STOCHASTIC OPTIMIZATION

Deep learning with Elastic Averaging SGD

NeurIPS 2015 · cerndb/dist-keras

We empirically demonstrate that in the deep learning setting, due to the existence of many local optima, allowing more exploration can lead to improved performance.

IMAGE CLASSIFICATION · STOCHASTIC OPTIMIZATION
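
The elastic-averaging rule behind that extra exploration can be written down directly: each worker is pulled toward a shared center variable by an elastic force, and the center drifts toward the workers. The single-process toy below, with an illustrative 1-D objective, only mimics what the paper runs in a distributed setting.

def easgd_round(workers, center, grad_fn, lr=0.05, rho=0.1):
    new_workers = []
    for x in workers:
        x = x - lr * (grad_fn(x) + rho * (x - center))   # local step plus elastic pull toward the center
        center = center + lr * rho * (x - center)        # center moves toward the workers
        new_workers.append(x)
    return new_workers, center

grad_fn = lambda x: 2 * (x - 3.0)                        # toy 1-D objective, minimum at 3
workers, center = [-2.0, 0.0, 5.0], 0.0
for _ in range(200):
    workers, center = easgd_round(workers, center, grad_fn)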

Averaging Weights Leads to Wider Optima and Better Generalization

14 Mar 2018 · timgaripov/swa

Deep neural networks are typically trained by optimizing a loss function with an SGD variant, in conjunction with a decaying learning rate, until convergence.

IMAGE CLASSIFICATION · STOCHASTIC OPTIMIZATION
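
The averaging itself is a one-line running mean: keep training with SGD and, late in training, average the weights visited along the trajectory, then use the averaged weights at test time. The loop and constants below are illustrative assumptions, not the authors' schedule.

import numpy as np

def train_with_swa(grad_fn, w0, steps=1000, lr=0.05, swa_start=500):
    w = np.array(w0, dtype=float)
    w_swa, n_avg = None, 0
    for t in range(steps):
        w -= lr * grad_fn(w)                    # ordinary SGD step
        if t >= swa_start:                      # start averaging late in training
            n_avg += 1
            w_swa = w.copy() if w_swa is None else w_swa + (w - w_swa) / n_avg
    return w_swa                                # running mean of the visited weights

grad_fn = lambda w: 2 * (w - np.array([1.0, -1.0]))   # toy quadratic
print(train_with_swa(grad_fn, w0=[4.0, 4.0]))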

DeepType: Multilingual Entity Linking by Neural Type System Evolution

3 Feb 2018 · openai/deeptype

The wealth of structured (e.g. Wikidata) and unstructured data about the world available today presents an incredible opportunity for tomorrow's Artificial Intelligence.

ENTITY EMBEDDINGS · ENTITY LINKING · STOCHASTIC OPTIMIZATION