Search Results for author: Xunpeng Huang

Found 10 papers, 2 papers with code

An Improved Analysis of Langevin Algorithms with Prior Diffusion for Non-Log-Concave Sampling

no code implementations • 10 Mar 2024 • Xunpeng Huang, Hanze Dong, Difan Zou, Tong Zhang

Along this line, Freund et al. (2022) suggest that the modified Langevin algorithm with prior diffusion converges at a dimension-independent rate for strongly log-concave target distributions.
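As context, a minimal sketch of the plain unadjusted Langevin algorithm (ULA) that such analyses start from; the prior-diffusion modification studied in the paper is not reproduced here, and the target, step size `eta`, and step count are illustrative placeholders.

```python
import numpy as np

def ula_sample(grad_log_target, x0, eta=1e-3, n_steps=10_000, rng=None):
    """Unadjusted Langevin algorithm (ULA): the baseline sampler that
    prior-diffusion variants modify.  Each iteration discretizes the
    Langevin SDE:  x_{k+1} = x_k + eta * grad log pi(x_k) + sqrt(2*eta) * noise."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    samples = []
    for _ in range(n_steps):
        x = x + eta * grad_log_target(x) + np.sqrt(2 * eta) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples)

# Example: sampling a standard Gaussian target, where grad log pi(x) = -x.
chain = ula_sample(lambda x: -x, x0=np.zeros(2))
```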

Faster Sampling without Isoperimetry via Diffusion-based Monte Carlo

no code implementations • 12 Jan 2024 • Xunpeng Huang, Difan Zou, Hanze Dong, Yian Ma, Tong Zhang

Specifically, DMC follows the reverse SDE of a diffusion process that transforms the target distribution into the standard Gaussian, utilizing non-parametric score estimation.
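A hedged sketch of simulating such a reverse SDE with Euler-Maruyama, assuming an Ornstein-Uhlenbeck forward process; `score_estimate` stands in for the paper's non-parametric score estimator, which is not reproduced here.

```python
import numpy as np

def reverse_sde_sample(score_estimate, dim, T=5.0, n_steps=500, rng=None):
    """Euler-Maruyama discretization of a reverse diffusion SDE.

    Forward OU process: dX_t = -X_t dt + sqrt(2) dW_t, which transports the
    target toward N(0, I) as t grows.  Its time reversal (run in forward time s) is
        dY_s = (Y_s + 2 * score(T - s, Y_s)) ds + sqrt(2) dW_s,
    where score(t, x) = grad_x log p_t(x).  Simulating the reversal from a
    Gaussian draw yields an approximate sample from the target."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    y = rng.standard_normal(dim)          # start from the standard Gaussian
    for k in range(n_steps):
        t = T - k * dt                    # time index of the forward process
        drift = y + 2.0 * score_estimate(t, y)
        y = y + drift * dt + np.sqrt(2 * dt) * rng.standard_normal(dim)
    return y
```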

Reverse Diffusion Monte Carlo

no code implementations • 5 Jul 2023 • Xunpeng Huang, Hanze Dong, Yifan Hao, Yi-An Ma, Tong Zhang

We propose a Monte Carlo sampler based on the reverse diffusion process.

Mean-Field Analysis of Two-Layer Neural Networks: Global Optimality with Linear Convergence Rates

no code implementations • 19 May 2022 • Jingwei Zhang, Xunpeng Huang

We consider optimizing two-layer neural networks in the mean-field regime where the learning dynamics of network weights can be approximated by the evolution in the space of probability measures over the weight parameters associated with the neurons.
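An illustrative particle-level sketch of this regime, assuming a 1/m output scaling and a tanh activation (both arbitrary choices, not the paper's setup): training the finite-width network by gradient descent moves the neurons' empirical distribution, which the mean-field analysis tracks in the infinite-width limit.

```python
import numpy as np

def train_mean_field_net(X, y, width=1024, lr=0.5, n_steps=200, rng=None):
    """Two-layer net f(x) = (1/m) * sum_i a_i * tanh(w_i . x) trained by
    full-batch gradient descent.  As the width m grows, the empirical
    distribution of the particles (a_i, w_i) approximates a gradient flow
    in the space of probability measures over neuron parameters."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    w = rng.standard_normal((width, d))
    a = rng.standard_normal(width)
    for _ in range(n_steps):
        h = np.tanh(X @ w.T)                 # (n, m) hidden activations
        pred = h @ a / width                 # (n,) network output with 1/m scaling
        resid = pred - y                     # squared-loss residual
        # Per-particle (mean-field) velocities: m times the raw gradients.
        vel_a = h.T @ resid / n
        vel_w = ((resid[:, None] * (1 - h ** 2)) * a).T @ X / n
        a -= lr * vel_a
        w -= lr * vel_w
    return a, w
```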

Adaptive Gradient Methods Can Be Provably Faster than SGD with Random Shuffling

no code implementations • 1 Jan 2021 • Xunpeng Huang, Vicky Jiaqi Zhang, Hao Zhou, Lei Li

Adaptive gradient methods have been shown to outperform SGD on many neural network training tasks.

ACMo: Angle-Calibrated Moment Methods for Stochastic Optimization

1 code implementation • 12 Jun 2020 • Xunpeng Huang, Runxin Xu, Hao Zhou, Zhe Wang, Zhengyang Liu, Lei Li

Due to its simplicity and outstanding ability to generalize, stochastic gradient descent (SGD) is still the most widely used optimization method despite its slow convergence.

Tasks: BIG-bench Machine Learning, Stochastic Optimization

Adaptive Gradient Methods Can Be Provably Faster than SGD after Finite Epochs

no code implementations • 12 Jun 2020 • Xunpeng Huang, Hao Zhou, Runxin Xu, Zhe Wang, Lei Li

Adaptive gradient methods have attracted much attention from the machine learning community due to their high efficiency.

SPAN: A Stochastic Projected Approximate Newton Method

no code implementations • 10 Feb 2020 • Xunpeng Huang, Xianfeng Liang, Zhengyang Liu, Yitan Li, Linyun Yu, Yue Yu, Lei Li

SPAN computes the inverse of the Hessian matrix via low-rank approximation and stochastic Hessian-vector products.
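A hedged sketch of that general recipe (a randomized low-rank Newton step, not the exact SPAN update): probe the Hessian with stochastic Hessian-vector products, invert it on the captured subspace, and fall back to the gradient on the complement; `hvp`, `rank`, and `damping` are illustrative placeholders.

```python
import numpy as np

def low_rank_newton_step(grad, hvp, dim, rank=10, damping=1e-3, rng=None):
    """One approximate Newton step using only Hessian-vector products.

    1. Probe the Hessian with random Gaussian vectors to capture its
       dominant range (a randomized range finder).
    2. Project the Hessian onto that subspace and eigendecompose the
       small rank x rank matrix.
    3. Apply the inverse on the subspace; take a plain gradient step on
       the orthogonal complement."""
    rng = np.random.default_rng() if rng is None else rng
    probes = rng.standard_normal((dim, rank))
    Y = np.column_stack([hvp(probes[:, i]) for i in range(rank)])   # H @ probes
    Q, _ = np.linalg.qr(Y)                        # orthonormal basis of the dominant range
    B = np.column_stack([hvp(Q[:, i]) for i in range(rank)])        # H @ Q
    H_small = (Q.T @ B + B.T @ Q) / 2             # symmetrized projected Hessian
    evals, evecs = np.linalg.eigh(H_small)
    inv_small = evecs @ np.diag(1.0 / (np.abs(evals) + damping)) @ evecs.T
    g_par = Q @ (inv_small @ (Q.T @ grad))        # Newton step inside the subspace
    g_perp = grad - Q @ (Q.T @ grad)              # gradient step on the complement
    return -(g_par + g_perp)
```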

Acutum: When Generalization Meets Adaptability

no code implementations • 25 Sep 2019 • Xunpeng Huang, Zhengyang Liu, Zhe Wang, Yue Yu, Lei Li

To the best of our knowledge, Acutum is the first adaptive gradient method without second moments.

Tasks: BIG-bench Machine Learning

Enhancing Network Embedding with Auxiliary Information: An Explicit Matrix Factorization Perspective

2 code implementations • 11 Nov 2017 • Junliang Guo, Linli Xu, Xunpeng Huang, Enhong Chen

In this paper, we take a matrix factorization perspective of network embedding, and incorporate structure, content and label information of the network simultaneously.

Tasks: Link Prediction, Network Embedding (+1)
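A toy sketch of a joint matrix factorization objective in this spirit, assuming a structural proximity matrix S, a content matrix C, and a label matrix Y; the weights alpha and beta and the plain gradient-descent solver are illustrative choices, not the paper's formulation.

```python
import numpy as np

def joint_embedding(S, C, Y, dim=64, alpha=1.0, beta=1.0, lr=0.01, n_steps=500, rng=None):
    """Gradient descent on
        || S - U V^T ||^2 + alpha * || C - U W^T ||^2 + beta * || Y - U P^T ||^2
    where S is an n x n structural proximity matrix, C an n x d_c content matrix,
    Y an n x c label indicator matrix, and the rows of U are the node embeddings."""
    rng = np.random.default_rng() if rng is None else rng
    n = S.shape[0]
    U = rng.standard_normal((n, dim)) * 0.01
    V = rng.standard_normal((n, dim)) * 0.01
    W = rng.standard_normal((C.shape[1], dim)) * 0.01
    P = rng.standard_normal((Y.shape[1], dim)) * 0.01
    for _ in range(n_steps):
        Rs, Rc, Ry = S - U @ V.T, C - U @ W.T, Y - U @ P.T   # residuals of each term
        U -= lr * (-2 * (Rs @ V + alpha * Rc @ W + beta * Ry @ P))
        V -= lr * (-2 * Rs.T @ U)
        W -= lr * (-2 * alpha * Rc.T @ U)
        P -= lr * (-2 * beta * Ry.T @ U)
    return U
```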
