Search Results for author: Tuyen Trung Truong

Found 16 papers, 4 papers with code

Backtracking New Q-Newton's method, Newton's flow, Voronoi's diagram and Stochastic root finding

no code implementations · 2 Jan 2024 · John Erik Fornaess, Mi Hu, Tuyen Trung Truong, Takayuki Watanabe

A new variant of Newton's method - named Backtracking New Q-Newton's method (BNQN) - which has strong theoretical guarantees, is easy to implement, and has good experimental performance, was recently introduced by the third author.

Creating walls to avoid unwanted points in root finding and optimization

no code implementations · 20 Sep 2023 · Tuyen Trung Truong

Assume that one already has a method IM for unconstrained optimization (and root finding).
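A rough illustration of the general idea, not the paper's actual construction: wrap the objective with a penalty term that blows up near a known unwanted point, then hand the wrapped function to whatever method IM one already has. The function names and the inverse-square "wall" below are assumptions for illustration only.

```python
import numpy as np

def walled_objective(f, grad_f, bad_point, strength=1.0):
    """Wrap f with an illustrative 'wall' that blows up near bad_point.

    This is a generic penalty construction, not necessarily the one
    used in the paper.
    """
    def g(x):
        return f(x) + strength / np.sum((x - bad_point) ** 2)

    def grad_g(x):
        d = x - bad_point
        return grad_f(x) - 2.0 * strength * d / np.sum(d ** 2) ** 2

    return g, grad_g

# Usage with any optimizer IM(g, grad_g, x0) one already has:
# g, grad_g = walled_objective(f, grad_f, bad_point=np.array([0.0, 0.0]))
# x_star = IM(g, grad_g, x0)
```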

CapillaryX: A Software Design Pattern for Analyzing Medical Images in Real-time using Deep Learning

no code implementations · 13 Apr 2022 · Maged Abdalla Helmy Abdou, Paulo Ferreira, Eric Jul, Tuyen Trung Truong

This paper provides a computing architecture that can analyze medical images locally, in parallel, and in real time using deep learning, thus avoiding the legal and privacy challenges stemming from uploading data to a third-party cloud provider.
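A minimal sketch of the kind of local, parallel processing the paper advocates, using Python's standard multiprocessing pool; the worker function and its toy "analysis" are placeholders, not CapillaryX's actual API.

```python
from multiprocessing import Pool

def analyze_frame(frame):
    """Placeholder for deep-learning analysis of one image frame.

    In a real system this would run model inference locally,
    so no patient data ever leaves the machine.
    """
    return {"frame_id": frame["id"], "score": sum(frame["pixels"]) % 255}

if __name__ == "__main__":
    frames = [{"id": i, "pixels": list(range(100))} for i in range(8)]
    # Analyze frames in parallel on local CPU cores: no cloud upload needed.
    with Pool(processes=4) as pool:
        results = pool.map(analyze_frame, frames)
    print(results[0])
```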

Generalisations and improvements of New Q-Newton's method Backtracking

no code implementations · 23 Sep 2021 · Tuyen Trung Truong

In New Q-Newton's method Backtracking, the choices are $\tau =1+\alpha >1$, and the $e_1(x),\ldots , e_m(x)$ are eigenvectors of $\nabla ^2f(x)$.
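A hedged numpy sketch of the step these choices describe, under the assumption that the perturbed matrix is $\nabla ^2f(x)+\delta ||\nabla f(x)||^{\tau}\,\mathrm{Id}$ (which shares eigenvectors with $\nabla ^2f(x)$), with the Newton step reflected along eigenvectors having negative eigenvalues. The constants and tolerance are illustrative, not the paper's exact prescription.

```python
import numpy as np

def new_q_newton_direction(grad, hess, alpha=0.5, deltas=(0.0, 1.0, 2.0)):
    """Illustrative New Q-Newton's-style direction (simplified sketch).

    Perturbs the Hessian by delta * ||grad||^(1 + alpha) * I until the
    result is invertible, then reflects the Newton step along eigenvectors
    of the perturbed matrix that have negative eigenvalues.
    """
    tau = 1.0 + alpha
    g_norm = np.linalg.norm(grad)
    for delta in deltas:
        A = hess + delta * g_norm ** tau * np.eye(len(grad))
        eigvals, eigvecs = np.linalg.eigh(A)
        if np.all(np.abs(eigvals) > 1e-12):  # A is invertible
            # Newton step in the eigenbasis, with negative-eigenvalue
            # components flipped so the result is a descent direction.
            coeffs = eigvecs.T @ grad / eigvals
            coeffs *= np.sign(eigvals)
            return eigvecs @ coeffs
    raise RuntimeError("no delta in the list made the matrix invertible")
```

The update is then $x \mapsto x - v$ for the returned direction $v$; the Backtracking variant additionally runs an Armijo line search along $v$.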

New Q-Newton's method meets Backtracking line search: good convergence guarantee, saddle points avoidance, quadratic rate of convergence, and easy implementation

1 code implementation · 23 Aug 2021 · Tuyen Trung Truong

While a good theoretical convergence guarantee has not been established for this method, experiments on small-scale problems show that the method works very competitively against other well-known modifications of Newton's method, such as Adaptive Cubic Regularization and BFGS, as well as against first-order methods such as Unbounded Two-way Backtracking Gradient Descent.

CapillaryNet: An Automated System to Quantify Skin Capillary Density and Red Blood Cell Velocity from Handheld Vital Microscopy

1 code implementation · 23 Apr 2021 · Maged Helmy, Anastasiya Dykyy, Tuyen Trung Truong, Paulo Ferreira, Eric Jul

Thus, manual analysis has been reported to hinder the application of microvascular microscopy in a clinical environment.

A dynamical approach to generalized Weil's Riemann hypothesis and semisimplicity

no code implementations · 8 Feb 2021 · Fei Hu, Tuyen Trung Truong

As an application, we obtain new results on the DDC conjecture for abelian varieties and Kummer surfaces, and the generalized semisimplicity conjecture for Kummer surfaces.

Algebraic Geometry · Dynamical Systems · Number Theory · MSC: 14G17, 37P25, 14K05, 14J28, 14C25, 14F20

Unconstrained optimisation on Riemannian manifolds

no code implementations · 25 Aug 2020 · Tuyen Trung Truong

In this paper, we give explicit descriptions of versions of (Local-) Backtracking Gradient Descent and New Q-Newton's method in the Riemannian setting. Here are some easy-to-state consequences of results in this paper, where $X$ is a general Riemannian manifold of finite dimension and $f:X\rightarrow \mathbb{R}$ is a $C^2$ function which is Morse (that is, all its critical points are non-degenerate).
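A minimal sketch of backtracking gradient descent in a Riemannian setting, specialized to the unit sphere with the projection retraction; the retraction choice, constants, and tolerances are assumptions for illustration, not the paper's general construction.

```python
import numpy as np

def retract(x, v):
    """Projection retraction on the sphere: step in the tangent direction, renormalize."""
    y = x + v
    return y / np.linalg.norm(y)

def sphere_backtracking_gd(f, grad_f, x0, delta0=1.0, alpha=1e-4,
                           beta=0.5, n_steps=100):
    """Backtracking gradient descent on the unit sphere (illustrative sketch)."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(n_steps):
        g = grad_f(x)
        rg = g - np.dot(g, x) * x        # Riemannian gradient: project onto T_x
        if np.linalg.norm(rg) < 1e-12:   # approximate critical point
            break
        delta = delta0
        # Armijo backtracking along the retracted direction -delta * rg.
        while f(retract(x, -delta * rg)) - f(x) > -alpha * delta * np.dot(rg, rg):
            delta *= beta
        x = retract(x, -delta * rg)
    return x
```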

Asymptotic behaviour of learning rates in Armijo's condition

no code implementations · 7 Jul 2020 · Tuyen Trung Truong, Tuan Hang Nguyen

This complements the first author's results on Unbounded Backtracking GD, and shows that in case of convergence to a non-degenerate critical point the behaviour of Unbounded Backtracking GD is not too different from that of usual Backtracking GD.

A fast and simple modification of Newton's method helping to avoid saddle points

1 code implementation · 2 Jun 2020 · Tuyen Trung Truong, Tat Dat To, Tuan Hang Nguyen, Thu Hang Nguyen, Hoang Phuong Nguyen, Maged Helmy

The main result of this paper roughly says that if $f$ is $C^3$ (can be unbounded from below) and a sequence $\{x_n\}$, constructed by the New Q-Newton's method from a random initial point $x_0$, {\bf converges}, then the limit point is a critical point and is not a saddle point, and the convergence rate is the same as that of Newton's method.

Protein Folding · Stochastic Optimization

Coordinate-wise Armijo's condition: General case

no code implementations · 11 Mar 2020 · Tuyen Trung Truong

For a point $(x, y) \in \mathbb{R}^{m_1}\times \mathbb{R}^{m_2}$, a number $\delta >0$ satisfies Armijo's condition at $(x, y)$ if the following inequality holds: $f(x-\delta \partial _xf, y-\delta \partial _yf)-f(x, y)\leq -\alpha \delta (||\partial _xf||^2+||\partial _yf||^2)$.
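A direct numpy transcription of this inequality, checking whether a given $\delta$ satisfies the coordinate-wise Armijo condition at $(x, y)$; the helper name and the example function are assumptions for illustration.

```python
import numpy as np

def satisfies_coordinatewise_armijo(f, grad_x, grad_y, x, y, delta, alpha=0.5):
    """Check f(x - delta*df/dx, y - delta*df/dy) - f(x, y)
       <= -alpha * delta * (||df/dx||^2 + ||df/dy||^2)."""
    gx, gy = grad_x(x, y), grad_y(x, y)
    lhs = f(x - delta * gx, y - delta * gy) - f(x, y)
    rhs = -alpha * delta * (np.dot(gx, gx) + np.dot(gy, gy))
    return lhs <= rhs

# Example: f(x, y) = ||x||^2 + ||y||^2, with partial gradients 2x and 2y.
f = lambda x, y: np.dot(x, x) + np.dot(y, y)
print(satisfies_coordinatewise_armijo(f, lambda x, y: 2 * x,
                                      lambda x, y: 2 * y,
                                      np.ones(2), np.ones(3), delta=0.1))
```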

Some convergent results for Backtracking Gradient Descent method on Banach spaces

no code implementations · 16 Jan 2020 · Tuyen Trung Truong

Let $X$ be a reflexive, complete Banach space and $f:X\rightarrow \mathbb{R}$ be a $C^2$ function which satisfies Condition C. Moreover, we assume that $\sup _{x\in S}||\nabla ^2f(x)||<\infty$ for every bounded set $S\subset X$.

Backtracking Gradient Descent allowing unbounded learning rates

no code implementations · 7 Jan 2020 · Tuyen Trung Truong

In this paper, we allow the learning rates $\delta _n$ to be unbounded, in the sense that there is a function $h:(0,\infty)\rightarrow (0,\infty )$ such that $\lim _{t\rightarrow 0}th(t)=0$ and $\delta _n\lesssim \max \{h(x_n),\delta \}$ satisfies Armijo's condition for all $n$, and prove convergence under the same assumptions as in the mentioned paper.
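A hedged sketch of the idea: start the backtracking search from $\max \{h(t),\delta \}$ rather than from a fixed rate, for some $h:(0,\infty)\rightarrow (0,\infty )$ with $\lim _{t\rightarrow 0}th(t)=0$. Here $h(t)=1/\sqrt{t}$ applied to the gradient norm is purely an illustrative choice, not necessarily the function used in the paper.

```python
import numpy as np

def unbounded_backtracking_step(f, grad_f, x, delta=1.0, alpha=1e-4, beta=0.5):
    """One GD step whose starting learning rate may grow unboundedly.

    Illustrative choice: h(t) = 1/sqrt(t), so t*h(t) = sqrt(t) -> 0 as t -> 0,
    and the trial rate max(h(||grad||), delta) blows up near critical points.
    """
    g = grad_f(x)
    g_norm = np.linalg.norm(g)
    if g_norm < 1e-12:
        return x
    lr = max(1.0 / np.sqrt(g_norm), delta)  # may be much larger than delta
    # Backtrack until Armijo's condition holds.
    while f(x - lr * g) - f(x) > -alpha * lr * g_norm ** 2:
        lr *= beta
    return x - lr * g
```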

Coordinate-wise Armijo's condition

no code implementations · 18 Nov 2019 · Tuyen Trung Truong

For a point $(x, y) \in \mathbb{R}^{m_1}\times \mathbb{R}^{m_2}$, a number $\delta >0$ satisfies Armijo's condition at $(x, y)$ if the following inequality holds: $f(x-\delta \partial _xf, y-\delta \partial _yf)-f(x, y)\leq -\alpha \delta (||\partial _xf||^2+||\partial _yf||^2)$.

Convergence to minima for the continuous version of Backtracking Gradient Descent

no code implementations · 11 Nov 2019 · Tuyen Trung Truong

(iii) There is a set $\mathcal{E}_1\subset \mathbb{R}^k$ of Lebesgue measure $0$ so that for all $x_0\in \mathbb{R}^k\backslash \mathcal{E}_1$, the sequence $x_{n+1}=H(x_n)$, {\bf if it converges}, cannot converge to a {\bf generalised} saddle point.

Backtracking gradient descent method for general $C^1$ functions, with applications to Deep Learning

1 code implementation · 15 Aug 2018 · Tuyen Trung Truong, Tuan Hang Nguyen

Then either $\lim _{n\rightarrow\infty}||z_n||=\infty$ or $\{z_n\}$ converges to a critical point of $f$.
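For context, a minimal sketch of the plain backtracking gradient descent scheme this dichotomy is about, with standard Armijo parameters; the constants are conventional defaults, not values mandated by the paper.

```python
import numpy as np

def backtracking_gd(f, grad_f, z0, delta0=1.0, alpha=1e-4, beta=0.5,
                    n_steps=1000, tol=1e-10):
    """Plain backtracking gradient descent for a C^1 function (sketch)."""
    z = z0
    for _ in range(n_steps):
        g = grad_f(z)
        if np.linalg.norm(g) < tol:   # reached an approximate critical point
            break
        delta = delta0
        # Shrink delta until Armijo's condition holds.
        while f(z - delta * g) - f(z) > -alpha * delta * np.dot(g, g):
            delta *= beta
        z = z - delta * g
    return z

# Example: minimizing f(z) = ||z||^2 from a fixed start.
print(backtracking_gd(lambda z: np.dot(z, z), lambda z: 2 * z,
                      np.array([3.0, -4.0])))
```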
