Search Results for author: Shahrzad Mahboubi

Found 6 papers, 0 papers with code

A modified limited memory Nesterov's accelerated quasi-Newton

no code implementations1 Dec 2021 S. Indrapriyadarsini, Shahrzad Mahboubi, Hiroshi Ninomiya, Takeshi Kamio, Hideki Asai

The Nesterov's accelerated quasi-Newton (L)NAQ method has been shown to accelerate the conventional (L)BFGS quasi-Newton method by using Nesterov's accelerated gradient in several neural network (NN) applications.
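
As a rough sketch of the idea (not the authors' (L)NAQ implementation): the gradient is evaluated at a Nesterov lookahead point and scaled by a BFGS-style inverse-Hessian approximation. The fixed step size alpha, momentum mu, and the curvature safeguard below are illustrative assumptions.

```python
import numpy as np

def naq_sketch(grad, w0, mu=0.9, alpha=0.1, iters=100):
    """NAQ-style loop: gradient at the lookahead point w + mu*v,
    scaled by a BFGS-style inverse-Hessian approximation H."""
    w = w0.astype(float).copy()
    n = w.size
    v, H, I = np.zeros(n), np.eye(n), np.eye(n)
    for _ in range(iters):
        w_look = w + mu * v               # Nesterov lookahead point
        g_look = grad(w_look)             # gradient evaluated there
        v = mu * v - alpha * H @ g_look   # momentum + quasi-Newton direction
        w_new = w + v
        s = w_new - w_look                # step measured from the lookahead point
        y = grad(w_new) - g_look          # corresponding gradient change
        if s @ y > 1e-12:                 # curvature safeguard before the BFGS update
            rho = 1.0 / (s @ y)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        w = w_new
    return w

# Toy usage: minimize the quadratic f(w) = 0.5 * w @ A @ w
A = np.diag([1.0, 10.0])
print(naq_sketch(lambda w: A @ w, np.array([3.0, -2.0])))
```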

A Nesterov's Accelerated quasi-Newton method for Global Routing using Deep Reinforcement Learning

no code implementations15 Oct 2020 S. Indrapriyadarsini, Shahrzad Mahboubi, Hiroshi Ninomiya, Takeshi Kamio, Hideki Asai

Deep Q-learning is one of the most widely used deep reinforcement learning algorithms; it uses deep neural networks to approximate the action-value function.
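
For context, a deep Q-network is regressed onto Bellman targets of the form r + gamma * max_a' Q(s', a'); the snippet below computes only those targets (the paper's contribution, training the Q-network with a Nesterov's accelerated quasi-Newton optimizer, is not shown).

```python
import numpy as np

def td_targets(q_next, rewards, dones, gamma=0.99):
    """Bellman targets r + gamma * max_a' Q(s', a') for a batch of transitions.

    q_next  : (batch, n_actions) action values at the next states
    rewards : (batch,) immediate rewards
    dones   : (batch,) 1.0 where the episode terminated, else 0.0
    """
    return rewards + gamma * (1.0 - dones) * q_next.max(axis=1)

# Example batch of two transitions; the second one is terminal.
q_next = np.array([[1.0, 2.0], [0.5, 0.0]])
print(td_targets(q_next, rewards=np.array([0.0, 1.0]), dones=np.array([0.0, 1.0])))
# -> [1.98 1.  ]
```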

Q-Learning reinforcement-learning +1

Implementation of a modified Nesterov's Accelerated quasi-Newton Method on Tensorflow

no code implementations21 Oct 2019 S. Indrapriyadarsini, Shahrzad Mahboubi, Hiroshi Ninomiya, Hideki Asai

The Nesterov's Accelerated Quasi-Newton (NAQ) method has been shown to drastically improve the convergence speed compared to the conventional quasi-Newton method.

Second-order methods

A Stochastic Variance Reduced Nesterov's Accelerated Quasi-Newton Method

no code implementations17 Oct 2019 Sota Yasuda, Shahrzad Mahboubi, S. Indrapriyadarsini, Hiroshi Ninomiya, Hideki Asai

This paper proposes a stochastic variance reduced Nesterov's Accelerated Quasi-Newton method in full-memory (SVR-NAQ) and limited-memory (SVR-LNAQ) forms.
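
As a hedged sketch of the variance-reduction ingredient only: an SVRG-style estimator corrects a per-sample gradient with snapshot terms that cancel the sampling noise in expectation, and in SVR-NAQ an estimate of this kind would feed the quasi-Newton update. Function and variable names below are illustrative, not the paper's notation.

```python
import numpy as np

def svrg_gradient(grad_i, w, w_snap, mu_snap, i):
    """SVRG-style variance-reduced gradient estimate at w for sample i.

    grad_i(w, i) : gradient of the i-th sample's loss at w
    w_snap       : snapshot iterate
    mu_snap      : full-batch gradient at the snapshot
    """
    return grad_i(w, i) - grad_i(w_snap, i) + mu_snap

# Toy usage on a two-sample least-squares problem, loss_i = 0.5 * (x_i @ w - y_i)^2
X, y = np.array([[1.0, 0.0], [0.0, 2.0]]), np.array([1.0, -1.0])
g_i = lambda w, i: (X[i] @ w - y[i]) * X[i]
w_snap = np.zeros(2)
mu_snap = np.mean([g_i(w_snap, i) for i in range(2)], axis=0)
print(svrg_gradient(g_i, np.array([0.5, 0.5]), w_snap, mu_snap, i=0))
```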

regression

An Adaptive Stochastic Nesterov Accelerated Quasi Newton Method for Training RNNs

no code implementations9 Sep 2019 S. Indrapriyadarsini, Shahrzad Mahboubi, Hiroshi Ninomiya, Hideki Asai

A common problem in training neural networks is the vanishing and/or exploding gradient problem, which is seen more prominently when training Recurrent Neural Networks (RNNs).
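
A toy illustration of why that happens (unrelated to the paper's proposed optimizer): gradients backpropagated through t recurrent steps pick up t factors of the recurrent Jacobian, so their norm shrinks or grows geometrically with its spectral radius.

```python
import numpy as np

g = np.ones(4)                                    # gradient arriving at the final time step
for rho, label in [(0.5, "vanishing"), (1.5, "exploding")]:
    W = rho * np.eye(4)                           # recurrent Jacobian with spectral radius rho
    norms = [np.linalg.norm(np.linalg.matrix_power(W.T, t) @ g)
             for t in (1, 10, 20, 40)]            # gradient norm after t backprop-through-time steps
    print(label, [f"{n:.1e}" for n in norms])
```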

A Stochastic Quasi-Newton Method with Nesterov's Accelerated Gradient

no code implementations9 Sep 2019 S. Indrapriyadarsini, Shahrzad Mahboubi, Hiroshi Ninomiya, Hideki Asai

Incorporating second-order curvature information into gradient-based methods has been shown to improve convergence drastically, despite its computational cost.
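
Concretely, quasi-Newton methods build that curvature information from gradient differences alone: the inverse-Hessian approximation is constrained by the secant condition and then rescales the gradient to form the search direction. These are the standard relations, not anything specific to this paper.

```latex
% Standard quasi-Newton relations: curvature from gradient differences.
% s_k = w_{k+1} - w_k,  y_k = \nabla f(w_{k+1}) - \nabla f(w_k)
\[
  H_{k+1} y_k = s_k
  \qquad\text{(secant condition)}, \qquad
  d_k = -H_k \nabla f(w_k), \qquad
  w_{k+1} = w_k + \alpha_k d_k .
\]
```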

regression
