Search Results for author: Jian-Guo Liu

Found 7 papers, 1 paper with code

Existence and incompressible limit of a tissue growth model with autophagy

no code implementations 7 Feb 2021 Jian-Guo Liu, Xiangsheng Xu

In this paper we study a cross-diffusion system whose coefficient matrix is non-symmetric and degenerate.

Analysis of PDEs

Data-driven Efficient Solvers for Langevin Dynamics on Manifold in High Dimensions

no code implementations 22 May 2020 Yuan Gao, Jian-Guo Liu, Nan Wu

To construct an efficient and stable approximation for the Langevin dynamics on $\mathcal{N}$, we leverage the corresponding Fokker-Planck equation on the manifold $\mathcal{N}$ in terms of the reaction coordinates $\mathsf{y}$.
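For context, overdamped Langevin dynamics in a set of coordinates $\mathsf{y}$ and its associated Fokker-Planck equation take the following generic form (a standard flat-space sketch with potential $V$ and inverse temperature $\beta$, not the paper's manifold-specific operators on $\mathcal{N}$):

```latex
d\mathsf{y}_t = -\nabla V(\mathsf{y}_t)\,dt + \sqrt{2\beta^{-1}}\,dW_t,
\qquad
\partial_t \rho = \nabla\cdot\big(\rho\,\nabla V\big) + \beta^{-1}\Delta\rho .
```

On a manifold, the gradient, divergence, and Laplacian above are replaced by their Riemannian counterparts in the reaction coordinates.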

Breather wave and lump-type solutions of new (3+1)-dimensional Boiti-Leon-Manna-Pempinelli equation in incompressible fluid

no code implementations 14 Feb 2020 Jian-Guo Liu, Abdul-Majid Wazwaz

Under investigation is a new (3+1)-dimensional Boiti-Leon-Manna-Pempinelli equation.

Pattern Formation and Solitons Mathematical Physics Exactly Solvable and Integrable Systems

A stochastic version of Stein Variational Gradient Descent for efficient sampling

no code implementations 9 Feb 2019 Lei Li, Yingzhou Li, Jian-Guo Liu, Zibu Liu, Jianfeng Lu

In this work we propose RBM-SVGD, a stochastic version of the Stein Variational Gradient Descent (SVGD) method for efficiently sampling from a given probability measure, which is thus useful for Bayesian inference.

Bayesian Inference
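The random-batch idea can be sketched as follows: the SVGD kernel averages, normally taken over all particles, are taken over a random mini-batch instead. This is a hypothetical NumPy illustration of that idea (fixed RBF bandwidth, standard-Gaussian target), not the authors' exact scheme.

```python
import numpy as np

def svgd_step(x, grad_logp, rng, eps=0.05, h=1.0, batch_size=None):
    """One (RBM-)SVGD update. If batch_size is given, the kernel averages
    run over a random mini-batch of particles instead of all n of them."""
    n, d = x.shape
    idx = np.arange(n) if batch_size is None else rng.choice(n, batch_size, replace=False)
    xb = x[idx]                                      # particles used in the average
    g = grad_logp(xb)                                # (b, d) scores at batch particles
    diff = xb[:, None, :] - x[None, :, :]            # (b, n, d): x_j - x_i
    k = np.exp(-np.sum(diff**2, axis=-1) / (2 * h))  # (b, n) RBF kernel k(x_j, x_i)
    # phi(x_i) = mean_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]
    phi = (k[..., None] * g[:, None, :] - diff * k[..., None] / h).mean(axis=0)
    return x + eps * phi

# Sample from a standard 2-D Gaussian: grad log p(x) = -x.
rng = np.random.default_rng(0)
x = rng.normal(size=(50, 2)) * 3.0     # deliberately over-dispersed start
spread_before = np.abs(x).mean()
for _ in range(200):
    x = svgd_step(x, lambda z: -z, rng, batch_size=10)
spread_after = np.abs(x).mean()
```

The attractive term `k * grad_logp` pulls particles toward high-density regions, while the kernel-gradient term keeps them spread apart; sub-sampling the average trades per-step cost for extra stochasticity.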

Uniform-in-Time Weak Error Analysis for Stochastic Gradient Descent Algorithms via Diffusion Approximation

no code implementations 2 Feb 2019 Yuanyuan Feng, Tingran Gao, Lei Li, Jian-Guo Liu, Yulong Lu

Diffusion approximation provides a weak approximation of stochastic gradient descent algorithms over a finite time horizon.

Stochastic Optimization
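Schematically, the diffusion approximation replaces the discrete SGD iteration with an SDE (a generic form with learning rate $\eta$, objective $f$, and minibatch-gradient covariance $\Sigma$; the paper's precise scaling and error bounds may differ):

```latex
x_{k+1} = x_k - \eta\,\nabla f(x_k) + \eta\,\xi_k
\quad\longrightarrow\quad
dX_t = -\nabla f(X_t)\,dt + \sqrt{\eta}\,\Sigma(X_t)^{1/2}\,dW_t .
```

Weak error here means closeness of expectations of test functions, $\mathbb{E}\,\varphi(x_k)$ versus $\mathbb{E}\,\varphi(X_{k\eta})$, rather than pathwise closeness.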

On the diffusion approximation of nonconvex stochastic gradient descent

no code implementations 22 May 2017 Wenqing Hu, Chris Junchi Li, Lei Li, Jian-Guo Liu

In addition, we discuss the effect of batch size for deep neural networks, and we find that a small batch size helps SGD algorithms escape unstable stationary points and sharp minimizers.
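The mechanism behind this rests on the fact that minibatch-gradient noise variance scales like $1/B$ in the batch size $B$, so smaller batches inject more noise. A quick numerical check on a toy quadratic objective (hypothetical setup, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=100_000)  # toy samples; per-sample gradient of f(w) = E[(w - z)^2]/2 at w = 0 is -z

def minibatch_grad_samples(batch_size, trials=2000):
    """Draw `trials` independent minibatch gradients at w = 0."""
    idx = rng.integers(0, data.size, size=(trials, batch_size))
    return (-data[idx]).mean(axis=1)

var_small = minibatch_grad_samples(8).var()    # noisy gradients
var_large = minibatch_grad_samples(512).var()  # much smoother gradients
# Larger gradient noise makes it easier to escape sharp minimizers.
```

With unit per-sample variance, `var_small` should sit near 1/8 and `var_large` near 1/512, a roughly 64-fold gap.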

Superpixel Segmentation Using Gaussian Mixture Model

1 code implementation 28 Dec 2016 Zhihua Ban, Jian-Guo Liu, Li Cao

Under this assumption, each pixel is drawn from a mixture of Gaussian distributions with unknown parameters (GMM).
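The modeling assumption can be illustrated with a minimal EM fit of a two-component Gaussian mixture to 1-D "pixel intensities" (a toy sketch on synthetic data; the paper's actual model operates on pixel features that include spatial coordinates):

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic pixel intensities from two well-separated Gaussians.
x = np.concatenate([rng.normal(0.2, 0.05, 500), rng.normal(0.8, 0.05, 500)])

# EM for a two-component 1-D Gaussian mixture with unknown parameters.
w = np.array([0.5, 0.5])     # mixture weights
mu = np.array([0.3, 0.7])    # component means (rough initial guess)
var = np.array([0.1, 0.1])   # component variances
for _ in range(50):
    # E-step: responsibilities r[i, k] proportional to w_k * N(x_i | mu_k, var_k)
    pdf = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = w * pdf
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances from responsibilities
    nk = r.sum(axis=0)
    w = nk / x.size
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
```

For superpixels, each pixel would then be assigned to the component with the largest responsibility, and connected regions of same-label pixels form the segments.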
