no code implementations • ICML 2020 • Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying
Specifically, we propose a new continuum limit of deep residual networks, which enjoys a good landscape in the sense that every local minimizer is global.
no code implementations • ECCV 2020 • Chi Xu, Yasushi Makihara, Xiang Li, Yasushi Yagi, Jianfeng Lu
Specifically, a phase estimation network is introduced for the input single image, and the gait cycle reconstruction network exploits the estimated phase to mitigate the dependence of an encoded feature on the phase of that single image.
no code implementations • 31 Jan 2022 • Mo Zhou, Jianfeng Lu
We propose a single time-scale actor-critic algorithm to solve the linear quadratic regulator (LQR) problem.
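For orientation, the LQR objective such a method optimizes can be estimated by Monte Carlo rollouts of a linear policy $u = -Kx$; the sketch below is illustrative only and is not the paper's single time-scale actor-critic algorithm (the noiseless dynamics, discount factor, and horizon are assumptions).

```python
import numpy as np

def lqr_policy_cost(A, B, Q, R, K, gamma=0.99, horizon=200,
                    n_rollouts=100, rng=np.random.default_rng(0)):
    """Monte Carlo estimate of the discounted cost of the linear policy
    u = -K x under x_{t+1} = A x_t + B u_t (noiseless here for simplicity).
    Illustrates the LQR objective only; not the paper's actor-critic method.
    """
    total = 0.0
    for _ in range(n_rollouts):
        x = rng.standard_normal(A.shape[0])           # random initial state
        for t in range(horizon):
            u = -K @ x
            total += gamma ** t * (x @ Q @ x + u @ R @ u)
            x = A @ x + B @ u
    return total / n_rollouts
```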
no code implementations • 25 Jan 2022 • Ziang Chen, Jianfeng Lu, Yulong Lu, Shengxuan Zhou
Spectral Barron spaces have received considerable interest recently, as they are the natural function spaces for the approximation theory of two-layer neural networks with a dimension-free convergence rate.
no code implementations • ICLR 2022 • Yiping Lu, Haoxuan Chen, Jianfeng Lu, Lexing Ying, Jose Blanchet
In this paper, we study the statistical limits of deep learning techniques for solving elliptic partial differential equations (PDEs) from random samples using the Deep Ritz Method (DRM) and Physics-Informed Neural Networks (PINNs).
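As a concrete reference point, a PINN-style loss for the 1D Poisson problem $-u''(x) = f(x)$ on $(0,1)$ with zero boundary conditions looks like the following minimal PyTorch sketch; the network width, sampling scheme, and forcing term are illustrative assumptions, not the paper's setup.

```python
import torch

# Minimal PINN-style loss for -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0.
# Here f(x) = pi^2 sin(pi x), so the exact solution is u*(x) = sin(pi x).
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))

def pinn_loss(n=128):
    x = torch.rand(n, 1, requires_grad=True)          # random collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = torch.pi ** 2 * torch.sin(torch.pi * x)
    residual = (-d2u - f).pow(2).mean()               # PDE residual term
    boundary = net(torch.zeros(1, 1)).pow(2).sum() + net(torch.ones(1, 1)).pow(2).sum()
    return residual + boundary                        # penalized boundary term
```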
no code implementations • NeurIPS Workshop DLDE 2021 • Yiping Lu, Haoxuan Chen, Jianfeng Lu, Lexing Ying, Jose Blanchet
In this paper, we study the statistical limits of deep learning techniques for solving elliptic partial differential equations (PDEs) from random samples using the Deep Ritz Method (DRM) and Physics-Informed Neural Networks (PINNs).
no code implementations • NeurIPS 2021 • Ziang Chen, Jianfeng Lu, Yulong Lu
Numerical solutions to high-dimensional partial differential equations (PDEs) based on neural networks have seen exciting developments.
no code implementations • 4 May 2021 • Jianfeng Lu, Yulong Lu
We prove that the convergence rate of the generalization error is independent of the dimension $d$, under the a priori assumption that the ground state lies in a spectral Barron space.
no code implementations • 10 Mar 2021 • Peng Wan, Zhenbo Song, Jianfeng Lu
In this paper, we present a novel end-to-end deep neural network model for autonomous driving that takes a monocular image sequence as input and directly generates the steering control angle.
1 code implementation • 9 Feb 2021 • Alec J. Coffman, Jianfeng Lu, Joseph E. Subotnik
We present a new computational approach to simulate linear sweep and cyclic voltammetry experiments that does not require a discretized grid in space to quantify diffusion.
Chemical Physics
no code implementations • 7 Jan 2021 • Jianfeng Lu, Kevin D. Stubbs
In two and three spatial dimensions, it is well understood for periodic insulators that exponentially-localized Wannier functions exist if and only if there exists an orthonormal basis for the Fermi projection with finite second moment (i.e., all basis elements satisfy $\int |\boldsymbol{x}|^2 |w(\boldsymbol{x})|^2 \,\text{d}{\boldsymbol{x}} < \infty$).
Mathematical Physics Mesoscale and Nanoscale Physics
no code implementations • 5 Jan 2021 • Jianfeng Lu, Yulong Lu, Min Wang
This paper concerns the a priori generalization analysis of the Deep Ritz Method (DRM) [W. E and B. Yu, 2017], a popular neural-network-based method for solving high dimensional partial differential equations.
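For reference, the Deep Ritz Method replaces the PDE by its variational form; e.g., for the Poisson problem $-\Delta u = f$ with zero Dirichlet data, one trains a network $u_\theta$ to minimize the Ritz energy (the standard formulation from E and Yu; the boundary penalty with weight $\beta$ is a common practical addition):

$$\min_{\theta}\ \int_{\Omega}\Big(\tfrac{1}{2}\,|\nabla u_{\theta}(x)|^{2}-f(x)\,u_{\theta}(x)\Big)\,dx\;+\;\beta\int_{\partial\Omega}u_{\theta}(x)^{2}\,ds,$$

with both integrals estimated by Monte Carlo sampling.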
no code implementations • 21 Dec 2020 • Jianfeng Lu, Lihan Wang
We study the computational complexity of the zigzag sampling algorithm for strongly log-concave distributions.
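To fix ideas, the zigzag process moves a particle at constant velocity and flips the velocity at event times of an inhomogeneous Poisson process with rate $(v \cdot \nabla U(x))_+$; the crude time-discretized sketch below targets a standard 1D Gaussian and is for illustration only (the algorithm analyzed in the paper simulates the continuous-time event times exactly).

```python
import numpy as np

def zigzag_1d_gaussian(T=50.0, dt=1e-3, rng=np.random.default_rng(0)):
    """Crude time-discretized zigzag sampler for a standard 1D Gaussian,
    i.e. U(x) = x^2 / 2, so the velocity-flip rate is (v * U'(x))_+ = (v x)_+.
    Illustration only; not the exact continuous-time algorithm.
    """
    x, v = 0.0, 1.0
    xs = []
    for _ in range(int(T / dt)):
        if rng.random() < max(v * x, 0.0) * dt:       # flip with prob ~ rate * dt
            v = -v
        x += v * dt                                   # deterministic drift
        xs.append(x)
    return np.array(xs)
```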
no code implementations • 15 Dec 2020 • Jianfeng Lu, Stefan Steinerberger
We consider the variational problem of cross-entropy loss with $n$ feature vectors on a unit hypersphere in $\mathbb{R}^d$.
no code implementations • 3 Dec 2020 • Fengchao Xiong, Shuyin Tao, Jun Zhou, Jianfeng Lu, Jiantao Zhou, Yuntao Qian
This model first projects the observed HSIs into a low-dimensional orthogonal subspace, and then represents the projected image with a multidimensional dictionary.
no code implementations • ICLR 2021 • Andrea Agazzi, Jianfeng Lu
We study the problem of policy optimization for infinite-horizon discounted Markov Decision Processes with softmax policy and nonlinear function approximation trained with policy gradient algorithms.
no code implementations • 22 Oct 2020 • Zhiyan Ding, Qin Li, Jianfeng Lu, Stephen J. Wright
We investigate the computational complexity of RC-ULMC and compare it with the classical ULMC for strongly log-concave probability distributions.
no code implementations • 3 Oct 2020 • Zhiyan Ding, Qin Li, Jianfeng Lu, Stephen J. Wright
We investigate the total complexity of RC-LMC and compare it with the classical LMC for log-concave probability distributions.
no code implementations • 30 Sep 2020 • Rong Ge, Holden Lee, Jianfeng Lu, Andrej Risteski
We give an algorithm for exact sampling from the Bingham distribution $p(x)\propto \exp(x^\top A x)$ on the sphere $\mathcal S^{d-1}$ with expected runtime of $\operatorname{poly}(d, \lambda_{\max}(A)-\lambda_{\min}(A))$.
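For contrast with the paper's polynomial-time guarantee, below is the naive exact rejection sampler, whose acceptance rate can degrade exponentially in the spectral gap $\lambda_{\max}(A)-\lambda_{\min}(A)$; a minimal sketch, not the paper's algorithm.

```python
import numpy as np

def naive_bingham_sample(A, rng=np.random.default_rng(0)):
    """Exact rejection sampler for p(x) ∝ exp(x^T A x) on the unit sphere,
    using uniform proposals. Valid since x^T A x <= lambda_max(A) for unit x,
    but the acceptance rate can be exponentially small in the spectral gap.
    """
    lam_max = np.linalg.eigvalsh(A)[-1]
    while True:
        x = rng.standard_normal(A.shape[0])
        x /= np.linalg.norm(x)                        # uniform on the sphere
        if rng.random() < np.exp(x @ A @ x - lam_max):
            return x
```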
1 code implementation • 21 Jul 2020 • Kaitao Song, Xu Tan, Jianfeng Lu
During training, neural machine translation (NMT) generates the next target token conditioned on the previous ground-truth target tokens, but during inference it conditions on its own previously generated tokens; this discrepancy between training and inference causes error propagation and degrades translation accuracy.
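The train/inference discrepancy (often called exposure bias) is easiest to see side by side; in the sketch below, `model` is a hypothetical callable returning the next token given the source and a target prefix, and the paper's actual remedy is not shown.

```python
def decode(model, src, max_len, bos="<bos>", teacher_tokens=None):
    """Contrast teacher forcing (training) with free running (inference).
    `model` is a hypothetical callable: model(src, prefix) -> next token.
    During training, `teacher_tokens` (the gold target, starting with BOS)
    supplies the prefix; during inference the model conditions on its own
    previous outputs, so an early mistake propagates to later steps.
    """
    out = [bos]
    for t in range(max_len):
        prefix = teacher_tokens[: t + 1] if teacher_tokens is not None else out
        out.append(model(src, prefix))
    return out[1:]
```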
1 code implementation • 7 Jun 2020 • Zhenbo Song, Jianfeng Lu, Tong Zhang, Hongdong Li
In this paper, we propose a monocular camera-based inter-vehicle distance and relative velocity estimation method based on end-to-end training of a deep neural network.
no code implementations • 27 Apr 2020 • Kaitao Song, Hao Sun, Xu Tan, Tao Qin, Jianfeng Lu, Hongzhi Liu, Tie-Yan Liu
While pre-training and fine-tuning, e.g., BERT (Devlin et al., 2018) and GPT-2 (Radford et al., 2019), have achieved great success in language understanding and generation tasks, the pre-trained models are usually too big for online deployment in terms of both memory cost and inference speed, which hinders their practical online usage.
6 code implementations • NeurIPS 2020 • Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu
Since BERT neglects dependency among predicted tokens, XLNet introduces permuted language modeling (PLM) for pre-training to address this problem.
no code implementations • NeurIPS 2020 • Yulong Lu, Jianfeng Lu
In particular, the size of the neural network can grow exponentially in $d$ when the $1$-Wasserstein distance is used as the discrepancy, whereas for both MMD and KSD the size of the neural network depends on $d$ at most polynomially.
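For reference, the (squared) kernel MMD mentioned here has the standard closed form, for a kernel $k$:

$$\mathrm{MMD}^{2}(\mu,\nu)=\mathbb{E}_{x,x'\sim\mu}[k(x,x')]-2\,\mathbb{E}_{x\sim\mu,\,y\sim\nu}[k(x,y)]+\mathbb{E}_{y,y'\sim\nu}[k(y,y')].$$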
no code implementations • 11 Mar 2020 • Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying
Specifically, we propose a new continuum limit of deep residual networks, which enjoys a good landscape in the sense that every local minimizer is global.
no code implementations • ICLR Workshop DeepDiffEq 2019 • Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying
Specifically, we propose a new continuum limit of deep residual networks, which enjoys a good landscape in the sense that every local minimizer is global.
no code implementations • 7 Feb 2020 • Jiequn Han, Jianfeng Lu, Mo Zhou
We propose a new method to solve eigenvalue problems for linear and semilinear second order differential operators in high dimensions based on deep neural networks.
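The variational principle behind such approaches, for a self-adjoint second-order operator $\mathcal{L}$, is the Rayleigh quotient characterization of the smallest eigenvalue (standard, stated here for orientation; the paper's method also handles semilinear problems):

$$\lambda_{1}=\min_{u\neq 0}\frac{\langle u,\mathcal{L}u\rangle}{\langle u,u\rangle},$$

which one can minimize over a neural-network ansatz $u_\theta$.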
no code implementations • 9 Jan 2020 • Jianfeng Lu, Zuowei Shen, Haizhao Yang, Shijun Zhang
This paper establishes the (nearly) optimal approximation error characterization of deep rectified linear unit (ReLU) networks for smooth functions in terms of both width and depth simultaneously.
no code implementations • 11 Nov 2019 • Ya Sun, Minxian Li, Jianfeng Lu
We can easily measure the similarity of two vehicle images by computing the Euclidean distance between their features from the FC layer.
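Concretely, this amounts to nothing more than the following (the function name is illustrative):

```python
import numpy as np

def vehicle_similarity(feat_a, feat_b):
    """Euclidean distance between two FC-layer feature vectors;
    a smaller value means the two vehicle images are more similar."""
    return np.linalg.norm(np.asarray(feat_a) - np.asarray(feat_b))
```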
no code implementations • 8 Nov 2019 • Rong Ge, Holden Lee, Jianfeng Lu
Estimating the normalizing constant of an unnormalized probability distribution has important applications in computer science, statistical physics, machine learning, and statistics.
no code implementations • 25 Sep 2019 • Andrea Agazzi, Jianfeng Lu
We then give examples of such convergence results in the case of models that diverge if trained with non-lazy TD learning, and in the case of neural networks.
no code implementations • 27 May 2019 • Andrea Agazzi, Jianfeng Lu
We finally give examples of our convergence results in the case of models that diverge if trained with non-lazy TD learning, and in the case of neural networks.
no code implementations • 23 May 2019 • Yulong Lu, Jianfeng Lu, James Nolen
A fundamental problem in Bayesian inference and statistical machine learning is to efficiently sample from multimodal distributions.
no code implementations • 7 May 2019 • Yingzhou Li, Jianfeng Lu, Anqi Mao
A novel solve-training framework is proposed to train neural networks to represent low-dimensional solution maps of physical models.
5 code implementations • 7 May 2019 • Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu
Pre-training and fine-tuning, e.g., BERT, have achieved great success in language understanding by transferring knowledge from rich-resource pre-training tasks to low/zero-resource downstream tasks.
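The pre-training task of MASS masks a contiguous span on the encoder side and asks the decoder to reconstruct exactly that span; below is a simplified sketch of the masking step (span-length and position sampling are simplified relative to the paper).

```python
import random

def mass_mask(tokens, mask_ratio=0.5, mask_token="[MASK]"):
    """Sketch of MASS-style span masking: the encoder sees the sentence with
    one contiguous span replaced by mask tokens, and the decoder is trained
    to reconstruct exactly that span.
    """
    k = max(1, int(len(tokens) * mask_ratio))         # masked span length
    start = random.randrange(len(tokens) - k + 1)     # random span position
    enc_input = tokens[:start] + [mask_token] * k + tokens[start + k:]
    dec_target = tokens[start:start + k]
    return enc_input, dec_target
```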
no code implementations • 18 Mar 2019 • Ping Yu, Kaitao Song, Jianfeng Lu
Recently, deep neural networks have made significant progress and found successful application in various fields, but they have been found vulnerable to attack instances, e.g., adversarial examples.
1 code implementation • 12 Feb 2019 • Zhe Wang, Yingzhou Li, Jianfeng Lu
We develop an efficient algorithm, coordinate descent FCI (CDFCI), for the electronic structure ground state calculation in the configuration interaction framework.
Chemical Physics Computational Physics
no code implementations • 9 Feb 2019 • Lei Li, Yingzhou Li, Jian-Guo Liu, Zibu Liu, Jianfeng Lu
In this work we propose RBM-SVGD, a stochastic version of the Stein Variational Gradient Descent (SVGD) method, for efficiently sampling from a given probability measure; it is thus useful for Bayesian inference.
no code implementations • journal 2019 • Zar Nawab Khan Swati, Qinghua Zhao, Muhammad Kabir, Farman Ali, Ali Zakir, Saeed Ahmad, Jianfeng Lu
It is necessary to design a feature extraction framework that reduces this gap without handcrafted features, by encoding and combining low-level and high-level features.
no code implementations • 15 Dec 2018 • Lian-Tao Wang, Qingwu Li, Jianfeng Lu
In this paper, we propose a voting scheme involving not only the definite negative instances but also the ambiguous positive instances to make use of the extra useful information in the weakly labelled positive bags.
no code implementations • 1 Nov 2018 • Kaitao Song, Xu Tan, Furong Peng, Jianfeng Lu
The encoder-decoder is the typical framework for Neural Machine Translation (NMT), and different structures have been developed for improving the translation performance.
no code implementations • ECCV 2018 • Jun-Jie Zhang, Qi Wu, Chunhua Shen, Jian Zhang, Jianfeng Lu, Anton Van Den Hengel
Despite significant progress in a variety of vision-and-language problems, developing a method capable of asking intelligent, goal-oriented questions about images has proven to be a formidable challenge.
1 code implementation • ECCV 2018 • Xiaofeng Han, Chuong Nguyen, ShaoDi You, Jianfeng Lu
Water bodies, such as puddles and flooded areas, on and off road pose significant risks to autonomous cars.
1 code implementation • COLING 2018 • Kaitao Song, Xu Tan, Di He, Jianfeng Lu, Tao Qin, Tie-Yan Liu
In this work we propose Double Path Networks for Sequence to Sequence learning (DPN-S2S), which leverage the advantages of both models by using double path information fusion.
no code implementations • 21 May 2018 • Jing An, Jianfeng Lu, Lexing Ying
The resulting SME of Langevin type extracts more information about the ASGD dynamics and elucidates the relationship between different types of stochastic gradient algorithms.
1 code implementation • 18 May 2018 • Yingzhou Li, Xiuyuan Cheng, Jianfeng Lu
Theoretical analysis of the approximation power of Butterfly-Net to the Fourier representation of input data shows that the error decays exponentially as the depth increases.
no code implementations • ICLR 2019 • Wei Zhu, Qiang Qiu, Bao Wang, Jianfeng Lu, Guillermo Sapiro, Ingrid Daubechies
Deep neural networks (DNNs) typically have enough capacity to fit random data by brute force even when conventional data-dependent regularizations focusing on the geometry of the features are imposed.
no code implementations • 10 May 2018 • Jianfeng Lu, Yulong Lu, James Nolen
We study an interacting particle system in $\mathbb{R}^d$ motivated by Stein variational gradient descent [Q. Liu and D. Wang, NIPS 2016], a deterministic algorithm for sampling from a given probability density with unknown normalization.
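For orientation, the deterministic particle update in question is the standard SVGD step of Liu and Wang; a minimal NumPy sketch with an RBF kernel follows (the step size and bandwidth are illustrative choices).

```python
import numpy as np

def svgd_step(X, grad_logp, eps=0.1, h=1.0):
    """One SVGD update (Liu & Wang, NIPS 2016) with RBF kernel
    k(x, y) = exp(-||x - y||^2 / (2h)). X: (n, d) particles;
    grad_logp: function mapping (n, d) positions to (n, d) scores.
    """
    diff = X[:, None, :] - X[None, :, :]                # diff[i, j] = x_i - x_j
    K = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * h))   # kernel matrix (n, n)
    # sum_j grad_{x_j} k(x_j, x_i) = sum_j K[i, j] * (x_i - x_j) / h
    repulsion = (K[:, :, None] * diff).sum(axis=1) / h
    phi = (K @ grad_logp(X) + repulsion) / X.shape[0]   # driving + repulsive terms
    return X + eps * phi
```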
no code implementations • 28 Feb 2018 • Yuehaw Khoo, Jianfeng Lu, Lexing Ying
In this note we propose a method based on artificial neural networks to study the transition between states governed by stochastic processes.
no code implementations • 21 Nov 2017 • Jun-Jie Zhang, Qi Wu, Chunhua Shen, Jian Zhang, Jianfeng Lu, Anton Van Den Hengel
Despite significant progress in a variety of vision-and-language problems, developing a method capable of asking intelligent, goal-oriented questions about images has proven to be a formidable challenge.
no code implementations • 19 Nov 2017 • Jun-Jie Zhang, Qi Wu, Jian Zhang, Chunhua Shen, Jianfeng Lu
These comments can be a description of the image, or of some objects, attributes, or scenes in it, and are normally used as user-provided tags.
1 code implementation • 11 Jul 2017 • Yuehaw Khoo, Jianfeng Lu, Lexing Ying
The representability of such a quantity using a neural network can be justified by viewing the neural network as performing time evolution to find the solutions to the PDE.
Numerical Analysis 65Nxx
1 code implementation • 23 May 2017 • Akihiko Nishimura, David Dunson, Jianfeng Lu
Hamiltonian Monte Carlo has emerged as a standard tool for posterior computation.
Computation
no code implementations • 4 Dec 2016 • Jun-Jie Zhang, Qi Wu, Chunhua Shen, Jian Zhang, Jianfeng Lu
Recent state-of-the-art approaches to multi-label image classification exploit the label dependencies in an image, at global level, largely improving the labeling capacity.
1 code implementation • 8 Aug 2014 • Jianfeng Lu, Christian B. Mendl
We develop an efficient algorithm for a spatially inhomogeneous matrix-valued quantum Boltzmann equation derived from the Hubbard model.
Computational Physics Mesoscale and Nanoscale Physics