Search Results for author: Tuan Hang Nguyen

Found 3 papers, 2 papers with code

Asymptotic behaviour of learning rates in Armijo's condition

no code implementations • 7 Jul 2020 • Tuyen Trung Truong, Tuan Hang Nguyen

This complements the first author's results on Unbounded Backtracking GD, and shows that, in the case of convergence to a non-degenerate critical point, the behaviour of Unbounded Backtracking GD is not too different from that of the usual Backtracking GD.
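For reference, the Armijo's condition in the title is the standard sufficient-decrease rule behind Backtracking GD. A minimal statement, in the notation commonly used in these papers (with hyperparameters $\alpha, \beta \in (0, 1)$ and $\delta_0 > 0$):

$f(x_n - \delta_n \nabla f(x_n)) - f(x_n) \leq -\alpha \, \delta_n \, \|\nabla f(x_n)\|^2,$

where $\delta_n$ is the largest number of the form $\beta^m \delta_0$, $m = 0, 1, 2, \ldots$, for which the inequality holds.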

A fast and simple modification of Newton's method helping to avoid saddle points

1 code implementation • 2 Jun 2020 • Tuyen Trung Truong, Tat Dat To, Tuan Hang Nguyen, Thu Hang Nguyen, Hoang Phuong Nguyen, Maged Helmy

The main result of this paper roughly says that if $f$ is $C^3$ (can be unbounded from below) and a sequence $\{x_n\}$, constructed by the New Q-Newton's method from a random initial point $x_0$, converges, then the limit point is a critical point and is not a saddle point, and the convergence rate is the same as that of Newton's method.

Protein Folding • Stochastic Optimization
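Below is a minimal NumPy sketch of one step of a New Q-Newton-type update as described in the abstract above: the Hessian is perturbed by a multiple of $\|\nabla f\|^{1+\alpha}$ so that it becomes invertible, and its eigenvalues are used in absolute value so the iteration also moves downhill along negative-curvature directions. The function names (new_q_newton_step, grad, hess) and the specific constants are illustrative assumptions, not the authors' implementation.

import numpy as np

def new_q_newton_step(x, grad, hess, delta=1.0, alpha=0.5):
    # One step of a New Q-Newton-type update (illustrative sketch).
    g = grad(x)
    H = hess(x)
    # Perturb the Hessian so that it is (generically) invertible;
    # the perturbation vanishes as the gradient goes to zero.
    A = H + delta * np.linalg.norm(g) ** (1 + alpha) * np.eye(len(x))
    # Spectral decomposition of the symmetric matrix A.
    eigvals, eigvecs = np.linalg.eigh(A)
    # Newton-like step built from |eigenvalues|: components along
    # negative-curvature directions have their sign flipped, which is
    # what pushes the iterates away from saddle points.
    w = eigvecs @ ((eigvecs.T @ g) / np.abs(eigvals))
    return x - w

# Tiny demo on f(x, y) = x^2 - y^2, which has a saddle at the origin
# (classical Newton's method would jump straight to the saddle).
f_grad = lambda p: np.array([2.0 * p[0], -2.0 * p[1]])
f_hess = lambda p: np.array([[2.0, 0.0], [0.0, -2.0]])
p = np.array([0.5, 1e-3])
for _ in range(10):
    p = new_q_newton_step(p, f_grad, f_hess)
print(p)  # the y-coordinate is pushed away from the saddle instead of toward it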

Backtracking gradient descent method for general $C^1$ functions, with applications to Deep Learning

1 code implementation • 15 Aug 2018 • Tuyen Trung Truong, Tuan Hang Nguyen

Then either $\lim_{n\rightarrow\infty} \|z_n\| = \infty$ or $\{z_n\}$ converges to a critical point of $f$.
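Below is a minimal sketch of the Backtracking GD iteration the theorem refers to, with the learning rate chosen at every step by the Armijo backtracking rule stated earlier; the function names and hyperparameters are illustrative, not the paper's accompanying code.

import numpy as np

def backtracking_gd(f, grad, x0, alpha=0.5, beta=0.5, delta0=1.0, n_iter=500):
    # Backtracking gradient descent (illustrative sketch): at every iterate
    # the learning rate delta is shrunk by the factor beta until Armijo's
    # condition f(x - delta*g) - f(x) <= -alpha*delta*||g||^2 holds.
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        g = grad(x)
        if np.dot(g, g) == 0.0:  # already at a critical point
            break
        delta = delta0
        while f(x - delta * g) - f(x) > -alpha * delta * np.dot(g, g):
            delta *= beta
        x = x - delta * g
    return x

# Demo on the smooth function f(x, y) = (x^2 - 1)^2 + y^2; starting from
# (2, 1.5), the iterates stay bounded and converge to the critical point (1, 0).
f = lambda p: (p[0] ** 2 - 1) ** 2 + p[1] ** 2
grad = lambda p: np.array([4.0 * p[0] * (p[0] ** 2 - 1), 2.0 * p[1]])
print(backtracking_gd(f, grad, [2.0, 1.5]))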
