Search Results for author: Xiaopeng Luo

Found 8 papers, 4 papers with code

A New Accelerated Stochastic Gradient Method with Momentum

no code implementations • 31 May 2020 • Liang Liu, Xiaopeng Luo

In this paper, we propose a novel accelerated stochastic gradient method with momentum, in which the momentum term is a weighted average of previous gradients.
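A momentum update of this kind can be sketched as follows. This is a generic illustration, not the authors' exact scheme; the function name `sgd_weighted_momentum`, the learning rate, and the averaging weight `beta` are all assumptions made for this sketch.

```python
import numpy as np

def sgd_weighted_momentum(grad, x0, lr=0.1, beta=0.9, steps=100):
    """Minimal momentum-SGD sketch: the momentum buffer is an
    exponentially weighted average of past (stochastic) gradients."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)
    for _ in range(steps):
        g = grad(x)
        m = beta * m + (1.0 - beta) * g  # weighted average of gradients
        x = x - lr * m                   # step along the averaged direction
    return x

# Usage: minimize f(x) = ||x||^2 / 2, whose gradient is simply x.
x_star = sgd_weighted_momentum(lambda x: x, x0=[5.0, -3.0])
```

Averaging the gradients damps the step-to-step noise of individual stochastic gradients, which is the intuition behind momentum-style acceleration.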

Asymptotic proximal point methods: finding the global minima with linear convergence for a class of multiple minima problems

1 code implementation • 5 Apr 2020 • Xiaopeng Luo, Xin Xu

We propose and analyze asymptotic proximal point (APP) methods to find the global minimizer for a class of nonconvex, nonsmooth, or even discontinuous multiple minima functions.

Optimization and Control; Numerical Analysis; MSC: 65K05, 68Q25, 90C26, 90C56
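The classical proximal point iteration that APP methods build on can be sketched in one dimension. This is an illustration of the underlying idea only, not the authors' asymptotic scheme; the function `prox_point_1d`, the grid-search subproblem solver, and the parameter `lam` are assumptions for this sketch.

```python
import numpy as np

def prox_point_1d(f, x0, lam=0.5, steps=20):
    """Classical proximal point iteration in 1-D, with each subproblem
    solved by dense grid search, so it works even for nonsmooth or
    discontinuous f:
        x_{k+1} = argmin_x  f(x) + (x - x_k)^2 / (2 * lam)
    """
    grid = np.linspace(-5.0, 5.0, 20001)
    x = float(x0)
    for _ in range(steps):
        vals = f(grid) + (grid - x) ** 2 / (2.0 * lam)
        x = float(grid[np.argmin(vals)])
    return x

# A multiple-minima example: f(x) = x^2 + sin^2(3x) has several local
# minima but a unique global minimizer at x = 0.
f = lambda x: x ** 2 + np.sin(3 * x) ** 2
x_min = prox_point_1d(f, x0=3.0)
```

Because each proximal subproblem is solved globally here, the iterates are not trapped by the spurious local minima of f.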

Can speed up the convergence rate of stochastic gradient methods to $\mathcal{O}(1/k^2)$ by a gradient averaging strategy?

no code implementations • 25 Feb 2020 • Xin Xu, Xiaopeng Luo

In this paper, we consider whether a gradient averaging strategy can improve on the sublinear convergence rates without any increase in storage.

Stochastic gradient-free descents

no code implementations • 31 Dec 2019 • Xiaopeng Luo, Xin Xu

In this paper we propose stochastic gradient-free methods and accelerated methods with momentum for solving stochastic optimization problems.

Stochastic Optimization

Contraction methods for continuous optimization

1 code implementation • 3 Sep 2019 • Xiaopeng Luo, Xin Xu

Motivated by the grid search method and Bayesian optimization, we introduce the concept of contractibility and its applications in model-based optimization.

Optimization and Control; Computational Complexity; Numerical Analysis; MSC: 65K05, 68Q15, 90C26, 90C56

Sparse residual tree and forest

no code implementations • 18 Feb 2019 • Xin Xu, Xiaopeng Luo

The hierarchical parallel SRT algorithm combines tree decomposition with adaptive radial basis function (RBF) exploration: for each child node, a sparse RBF refinement is added to the approximation by minimizing the norm of the residual inherited from its parent.

Tree Decomposition
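The residual-driven refinement idea behind SRT can be sketched with a two-level RBF fit. This is a hypothetical 1-D illustration, not the authors' SRT algorithm; the function `rbf_fit`, the Gaussian kernel, and the shape parameters are assumptions for this sketch.

```python
import numpy as np

def rbf_fit(centers, x, y, eps=1.0):
    """Least-squares fit of Gaussian RBF weights on the given centers."""
    Phi = np.exp(-eps * (x[:, None] - centers[None, :]) ** 2)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return lambda t: np.exp(
        -eps * (np.asarray(t)[:, None] - centers[None, :]) ** 2
    ) @ w

# Parent fit on a few wide basis functions, then a child refinement
# fitted to the residual inherited from the parent.
x = np.linspace(0.0, 1.0, 200)
y = np.sin(6 * np.pi * x)
parent = rbf_fit(np.linspace(0.0, 1.0, 5), x, y, eps=20.0)
residual = y - parent(x)
child = rbf_fit(np.linspace(0.0, 1.0, 17), x, residual, eps=200.0)
approx = parent(x) + child(x)
```

Each level fits only what its parent left behind, so the refinement stays sparse while the combined approximation improves.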

Minima distribution for global optimization

1 code implementation • 9 Dec 2018 • Xiaopeng Luo

This paper establishes a strict mathematical relationship between an arbitrary continuous function on a compact set and its global minima, analogous to the well-known first-order optimality condition for convex differentiable functions.
