no code implementations • ECCV 2020 • Chi Xu, Yasushi Makihara, Xiang Li, Yasushi Yagi, Jianfeng Lu
Specifically, a phase estimation network is introduced for the input single image, and the gait cycle reconstruction network exploits the estimated phase to mitigate the dependence of an encoded feature on the phase of that single image.
no code implementations • ICML 2020 • Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying
Specifically, we propose a new continuum limit of deep residual networks, which enjoys a good landscape in the sense that every local minimizer is global.
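For orientation, the conventional scaling behind such limits identifies an $L$-layer residual update with a forward-Euler step of an ODE; the sketch below shows that standard picture (the paper's new limit refines it, so this is background, not the authors' construction):

$$X_{l+1} = X_l + \frac{1}{L}\, f(X_l, \theta_l) \quad \xrightarrow{\ L \to \infty\ } \quad \frac{dX(t)}{dt} = f\big(X(t), \theta(t)\big), \qquad t \in [0, 1].$$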
no code implementations • NeurIPS 2023 • Tanya Marwah, Ashwini Pokle, J. Zico Kolter, Zachary C. Lipton, Jianfeng Lu, Andrej Risteski
Motivated by this observation, we propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE as the infinite-depth fixed point of an implicit operator layer, using a black-box root solver and differentiating analytically through this fixed point, which results in $\mathcal{O}(1)$ training memory.
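As a hedged illustration of the deep-equilibrium mechanics described here (not the authors' FNO-DEQ code; `f` stands in for the implicit operator layer), a generic PyTorch fixed-point layer with implicit differentiation looks like:

```python
import torch

def deq_layer(f, x, z0, fwd_iters=50, bwd_iters=50):
    """Generic deep-equilibrium layer: find z* = f(z*, x), then
    differentiate through the fixed point implicitly."""
    # Forward: plain fixed-point iteration, no autograd graph stored.
    z = z0
    with torch.no_grad():
        for _ in range(fwd_iters):
            z = f(z, x)
    # Re-apply f once so the parameters of f and x enter the graph.
    z = f(z, x)
    # Backward: replace the incoming gradient g with the solution of
    # v = J_f(z*)^T v + g (implicit function theorem), again by iteration.
    z_star = z.detach().requires_grad_()
    f_star = f(z_star, x)

    def backward_hook(g):
        v = g
        for _ in range(bwd_iters):
            v = torch.autograd.grad(f_star, z_star, v, retain_graph=True)[0] + g
        return v

    if z.requires_grad:
        z.register_hook(backward_hook)
    return z
```

Because gradients come from the fixed point itself rather than from backpropagating through the forward iterations, activation memory stays $\mathcal{O}(1)$ in the effective depth.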
no code implementations • 26 Oct 2023 • Xiuyuan Cheng, Jianfeng Lu, Yixin Tan, Yao Xie
Flow-based generative models enjoy certain advantages in computing the data generation and the likelihood, and have recently shown competitive empirical performance.
no code implementations • 10 Oct 2023 • Xiangyu Wu, Yi Gao, Hailiang Zhang, Yang Yang, Weili Guo, Jianfeng Lu
In this paper, we present our solution to the New frontiers for Zero-shot Image Captioning Challenge.
no code implementations • 10 Oct 2023 • Xiangyu Wu, Yang Yang, Shengdong Xu, Yifeng Wu, QingGuo Chen, Jianfeng Lu
At the data level, inspired by the challenge paper, we categorized all questions into eight types and used the llama-2-chat model to directly generate the type for each question in a zero-shot manner.
no code implementations • 19 Sep 2023 • Luke Triplett, Jianfeng Lu
In this work, we seek to simulate rare transitions between metastable states using score-based generative models.
no code implementations • 8 Sep 2023 • Tianmin Yu, Shixin Zheng, Jianfeng Lu, Govind Menon, Xiangxiong Zhang
This paper introduces two explicit schemes to sample matrices from Gibbs distributions on $\mathcal S^{n, p}_+$, the manifold of real positive semi-definite (PSD) matrices of size $n\times n$ and rank $p$.
no code implementations • 5 Sep 2023 • TaeHoon Kim, Pyunghwan Ahn, Sangyun Kim, Sihaeng Lee, Mark Marsden, Alessandra Sala, Seung Hwan Kim, Bohyung Han, Kyoung Mu Lee, Honglak Lee, Kyounghoon Bae, Xiangyu Wu, Yi Gao, Hailiang Zhang, Yang Yang, Weili Guo, Jianfeng Lu, Youngtaek Oh, Jae Won Cho, Dong-Jin Kim, In So Kweon, Junmo Kim, Wooyoung Kang, Won Young Jhoo, Byungseok Roh, Jonghwan Mun, Solgil Oh, Kenan Emir Ak, Gwang-Gook Lee, Yan Xu, Mingwei Shen, Kyomin Hwang, Wonsik Shin, Kamin Lee, Wonhark Park, Dongkwan Lee, Nojun Kwak, Yujin Wang, Yimu Wang, Tiancheng Gu, Xingchang Lv, Mingmao Sun
In this report, we introduce the NICE (New frontiers for zero-shot Image Captioning Evaluation) project and share the results and outcomes of the 2023 challenge.
1 code implementation • 1 Aug 2023 • Zhenyuan Zhang, Zhenbo Song, Kaihao Zhang, Wenhan Luo, Zhaoxin Fan, Jianfeng Lu
To the best of our knowledge, these two datasets are the first large-scale UHD datasets for SIRR.
no code implementations • 13 Jul 2023 • Shijun Zhang, Jianfeng Lu, Hongkai Zhao
This paper explores the expressive power of deep neural networks for a diverse range of activation functions.
no code implementations • 23 May 2023 • Dong Wei, Xiaoning Sun, Huaijiang Sun, Bin Li, Shengxiang Hu, Weiqing Li, Jianfeng Lu
The emergence of text-driven motion synthesis techniques provides animators with great potential to create content efficiently.
no code implementations • 6 May 2023 • Lezhi Tan, Jianfeng Lu
Aiming at multimodality, we propose a new sampling method that takes advantage of both a birth-death process and an exploration component.
no code implementations • 21 Apr 2023 • Jianfeng Lu, Yue Wu, Yang Xiang
We use the score-based transport modeling method to solve the mean-field Fokker-Planck equations, an approach we call MSBTM.
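In standard notation (confinement potential $V$, interaction kernel $W$; these symbols are assumptions, not the paper's exact setup), the mean-field Fokker-Planck equation and its score-based transport reformulation read:

$$\partial_t \rho_t = \nabla \cdot \big(\rho_t \, \nabla (V + W * \rho_t)\big) + \Delta \rho_t, \qquad \dot X_t = -\nabla V(X_t) - (\nabla W * \rho_t)(X_t) - s_t(X_t),$$

where a learned score model $s_t \approx \nabla \log \rho_t$ turns the diffusion term into deterministic transport of samples.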
no code implementations • 18 Apr 2023 • Jing An, Jianfeng Lu
We extend the global convergence result of Chatterjee \cite{chatterjee2022convergence} by considering stochastic gradient descent (SGD) for non-convex objective functions.
no code implementations • 13 Apr 2023 • Qiongjie Cui, Huaijiang Sun, Jianfeng Lu, Bin Li, Weiqing Li
Predicting high-fidelity future human poses, from a historically observed sequence, is decisive for intelligent robots to interact with humans.
no code implementations • CVPR 2023 • Xiaoning Sun, Huaijiang Sun, Bin Li, Dong Wei, Weiqing Li, Jianfeng Lu
Let us rethink the real-world scenarios that require human motion prediction techniques, such as human-robot collaboration.
no code implementations • 12 Mar 2023 • Andrea Agazzi, Jianfeng Lu, Sayan Mukherjee
We analyze Elman-type Recurrent Neural Networks (RNNs) and their training in the mean-field regime.
no code implementations • 11 Feb 2023 • Mo Zhou, Jianfeng Lu
We consider policy gradient methods for stochastic optimal control problems in continuous time.
no code implementations • 29 Jan 2023 • Shijun Zhang, Jianfeng Lu, Hongkai Zhao
This paper explores the expressive power of deep neural networks through the framework of function compositions.
no code implementations • ICCV 2023 • Qiongjie Cui, Huaijiang Sun, Jianfeng Lu, Weiqing Li, Bin Li, Hongwei Yi, Haofan Wang
Current motion forecasting approaches typically train a deep end-to-end model from the source domain data, and then apply it directly to target subjects.
1 code implementation • CVPR 2023 • Zhenbo Song, Zhenyuan Zhang, Kaihao Zhang, Wenhan Luo, Zhaoxin Fan, Wenqi Ren, Jianfeng Lu
This paper addresses the problem of robust deep single-image reflection removal (SIRR) against adversarial attacks.
Ranked #1 on Reflection Removal on Real20
no code implementations • 1 Dec 2022 • Zhengwang Xia, Tao Zhou, Saqib Mamoon, Amani Alfakih, Jianfeng Lu
Brain networks provide important insights for the diagnosis of many brain disorders, and how to effectively model the brain structure has become one of the core issues in the domain of brain imaging analysis.
no code implementations • 15 Nov 2022 • Ye He, Krishnakumar Balasubramanian, Bharath K. Sriperumbudur, Jianfeng Lu
In this work, we propose the Regularized Stein Variational Gradient Flow which interpolates between the Stein Variational Gradient Flow and the Wasserstein Gradient Flow.
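For reference, the classic SVGD particle update underlying these flows (the standard form, not anything specific to this paper) is:

$$x_i \leftarrow x_i + \epsilon\, \hat\phi(x_i), \qquad \hat\phi(x) = \frac{1}{n} \sum_{j=1}^{n} \Big[ k(x_j, x)\, \nabla_{x_j} \log p(x_j) + \nabla_{x_j} k(x_j, x) \Big],$$

with a kernel-weighted drift toward high density plus a repulsive term that keeps the particles spread out.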
no code implementations • 3 Nov 2022 • Hongrui Chen, Holden Lee, Jianfeng Lu
We give an improved theoretical analysis of score-based generative modeling.
no code implementations • 21 Oct 2022 • Tanya Marwah, Zachary C. Lipton, Jianfeng Lu, Andrej Risteski
We show that if composing a function with Barron norm $b$ with partial derivatives of $L$ produces a function of Barron norm at most $B_L b^p$, then the solution to the PDE can be $\epsilon$-approximated in the $L^2$ sense by a function with Barron norm $O\left(\left(dB_L\right)^{\max\{p \log(1/ \epsilon), p^{\log(1/\epsilon)}\}}\right)$.
1 code implementation • 19 Oct 2022 • Ziang Chen, Jialin Liu, Xinshang Wang, Jianfeng Lu, Wotao Yin
While mixed-integer linear programming (MILP) is NP-hard in general, practical MILP solvers have achieved roughly a 100-fold speedup over the past twenty years.
no code implementations • 12 Oct 2022 • Dong Wei, Huaijiang Sun, Bin Li, Jianfeng Lu, Weiqing Li, Xiaoning Sun, Shengxiang Hu
This process offers a natural way to obtain "whitened" latents without any trainable parameters; human motion prediction can then be regarded as the reverse diffusion process that converts the noise distribution into realistic future motions conditioned on the observed sequence.
no code implementations • 26 Sep 2022 • Holden Lee, Jianfeng Lu, Yixin Tan
Score-based generative modeling (SGM) has grown to be a hugely successful method for learning to generate samples from complex data distributions such as those of images and audio.
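In the Ornstein-Uhlenbeck (variance-preserving) instance commonly used in such analyses (the particular noising schedule here is an assumption), the forward noising and reverse generative SDEs are:

$$dX_t = -X_t\,dt + \sqrt{2}\,dW_t, \qquad d\bar X_t = \big(\bar X_t + 2\,\nabla \log p_{T-t}(\bar X_t)\big)\,dt + \sqrt{2}\,d\bar W_t,$$

with the true score $\nabla \log p_t$ replaced in practice by a learned estimate $s_\theta(\cdot, t)$.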
1 code implementation • 25 Sep 2022 • Ziang Chen, Jialin Liu, Xinshang Wang, Jianfeng Lu, Wotao Yin
In particular, the graph neural network (GNN) is considered a suitable ML model for optimization problems whose variables and constraints are permutation-invariant, for example, the linear program (LP).
no code implementations • 25 Aug 2022 • Yang Jing, Jiaheng Chen, Lei Li, Jianfeng Lu
In this paper, we develop a deep learning framework to compute the geodesics under the spherical WFR metric, and the learned geodesics can be adopted to generate weighted samples.
1 code implementation • 19 Aug 2022 • Han Sun, Zhaoxin Fan, Zhenbo Song, Zhicheng Wang, Kejian Wu, Jianfeng Lu
The insight behind introducing MonoSIM is to simulate the feature learning behaviors of a point-cloud-based detector for the monocular detector during training.
no code implementations • 2 Aug 2022 • Xiaoning Sun, Qiongjie Cui, Huaijiang Sun, Bin Li, Weiqing Li, Jianfeng Lu
Previous works on human motion prediction follow the pattern of building a mapping relation between the observed sequence and the one to be predicted.
no code implementations • 13 Jun 2022 • Holden Lee, Jianfeng Lu, Yixin Tan
Using our guarantee, we give a theoretical analysis of score-based generative modeling, which transforms white-noise input into samples from a learned data distribution given score estimates at different noise scales.
no code implementations • 31 Jan 2022 • Mo Zhou, Jianfeng Lu
We propose a single time-scale actor-critic algorithm to solve the linear quadratic regulator (LQR) problem.
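One standard continuous-time LQR formulation for orientation (the horizon and discounting details are assumptions; the paper's exact setting may differ):

$$dx_t = (A x_t + B u_t)\,dt + \sigma\,dW_t, \qquad \min_{u}\ \mathbb{E} \int_0^\infty e^{-\beta t} \big( x_t^\top Q x_t + u_t^\top R u_t \big)\,dt,$$

where the actor updates a linear policy $u = Kx$ by policy gradient while the critic fits the (quadratic) value function, both on the same time scale.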
no code implementations • 25 Jan 2022 • Ziang Chen, Jianfeng Lu, Yulong Lu, Shengxuan Zhou
Spectral Barron spaces have received considerable interest recently, as they form the natural function space for the approximation theory of two-layer neural networks with a dimension-free convergence rate.
no code implementations • ICLR 2022 • Yiping Lu, Haoxuan Chen, Jianfeng Lu, Lexing Ying, Jose Blanchet
In this paper, we study the statistical limits of deep learning techniques for solving elliptic partial differential equations (PDEs) from random samples using the Deep Ritz Method (DRM) and Physics-Informed Neural Networks (PINNs).
no code implementations • NeurIPS Workshop DLDE 2021 • Yiping Lu, Haoxuan Chen, Jianfeng Lu, Lexing Ying, Jose Blanchet
In this paper, we study the statistical limits of deep learning techniques for solving elliptic partial differential equations (PDEs) from random samples using the Deep Ritz Method (DRM) and Physics-Informed Neural Networks (PINNs).
no code implementations • NeurIPS 2021 • Ziang Chen, Jianfeng Lu, Yulong Lu
Numerical solutions to high-dimensional partial differential equations (PDEs) based on neural networks have seen exciting developments.
no code implementations • 4 May 2021 • Jianfeng Lu, Yulong Lu
We prove that the convergence rate of the generalization error is independent of the dimension $d$, under the a priori assumption that the ground state lies in a spectral Barron space.
no code implementations • 10 Mar 2021 • Peng Wan, Zhenbo Song, Jianfeng Lu
In this paper, we present a novel end-to-end deep neural network model for autonomous driving that takes a monocular image sequence as input and directly generates the steering control angle.
1 code implementation • 9 Feb 2021 • Alec J. Coffman, Jianfeng Lu, Joseph E. Subotnik
We present a new computational approach to simulate linear sweep and cyclic voltammetry experiments that does not require a discretized grid in space to quantify diffusion.
Chemical Physics
no code implementations • 7 Jan 2021 • Jianfeng Lu, Kevin D. Stubbs
In two and three spatial dimensions, it is well understood for periodic insulators that exponentially-localized Wannier functions exist if and only if there exists an orthonormal basis for the Fermi projection with finite second moment (i.e., all basis elements satisfy $\int |\boldsymbol{x}|^2 |w(\boldsymbol{x})|^2 \,\text{d}{\boldsymbol{x}} < \infty$).
Mathematical Physics • Mesoscale and Nanoscale Physics
no code implementations • 5 Jan 2021 • Jianfeng Lu, Yulong Lu, Min Wang
This paper concerns the a priori generalization analysis of the Deep Ritz Method (DRM) [W. E and B. Yu, 2017], a popular neural-network-based method for solving high dimensional partial differential equations.
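As a hedged sketch of the Deep Ritz mechanics, the snippet below minimizes the Ritz energy of a toy Poisson problem $-\Delta u = f$ on $(0,1)^d$ with a soft boundary penalty; the problem, penalty weight, and architecture are illustrative assumptions, not the paper's setup:

```python
import torch
import torch.nn as nn

d, beta = 2, 500.0
net = nn.Sequential(nn.Linear(d, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
f = lambda x: torch.ones(x.shape[0], 1)           # source term (assumed)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    x = torch.rand(1024, d, requires_grad=True)    # interior samples
    u = net(x)
    grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    # Monte Carlo estimate of the Ritz energy  ∫ ½|∇u|² - f u
    ritz = (0.5 * (grad_u ** 2).sum(dim=1, keepdim=True) - f(x) * u).mean()
    xb = torch.rand(256, d)                        # boundary samples:
    k = torch.randint(0, d, (256,))                # pick a face per point
    xb[torch.arange(256), k] = torch.randint(0, 2, (256,)).float()
    bc = (net(xb) ** 2).mean()                     # soft u = 0 penalty
    loss = ritz + beta * bc
    opt.zero_grad(); loss.backward(); opt.step()
```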
no code implementations • 21 Dec 2020 • Jianfeng Lu, Lihan Wang
We study the computational complexity of the zigzag sampling algorithm for strongly log-concave distributions.
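For reference, the zigzag process targeting $\pi \propto e^{-U}$ moves at unit speed, $\dot x_t = v_t$ with $v_t \in \{\pm 1\}^d$, and flips the $i$-th velocity component at rate

$$\lambda_i(x, v) = \max\big(0,\ v_i\, \partial_i U(x)\big)$$

(the standard construction; the discretization details analyzed in the paper are not shown here).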
no code implementations • 15 Dec 2020 • Jianfeng Lu, Stefan Steinerberger
We consider the variational problem of cross-entropy loss with $n$ feature vectors on a unit hypersphere in $\mathbb{R}^d$.
no code implementations • 3 Dec 2020 • Fengchao Xiong, Shuyin Tao, Jun Zhou, Jianfeng Lu, Jiantao Zhou, Yuntao Qian
This model first projects the observed HSIs into a low-dimensional orthogonal subspace, and then represents the projected image with a multidimensional dictionary.
no code implementations • ICLR 2021 • Andrea Agazzi, Jianfeng Lu
We study the problem of policy optimization for infinite-horizon discounted Markov Decision Processes with softmax policy and nonlinear function approximation trained with policy gradient algorithms.
no code implementations • 22 Oct 2020 • Zhiyan Ding, Qin Li, Jianfeng Lu, Stephen J. Wright
We investigate the computational complexity of RC-ULMC and compare it with the classical ULMC for strongly log-concave probability distributions.
no code implementations • 3 Oct 2020 • Zhiyan Ding, Qin Li, Jianfeng Lu, Stephen J. Wright
We investigate the total complexity of RC-LMC and compare it with the classical LMC for log-concave probability distributions.
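For comparison, classical LMC updates all coordinates at once, while the randomized-coordinate variant touches one coordinate per step (our reading of the scheme; per-coordinate step-size weightings in the paper may differ):

$$x_{k+1} = x_k - \eta\, \nabla U(x_k) + \sqrt{2\eta}\, \xi_k, \qquad x_{k+1}^{(i_k)} = x_k^{(i_k)} - \eta\, \partial_{i_k} U(x_k) + \sqrt{2\eta}\, \xi_k, \quad i_k \sim \mathrm{Unif}\{1, \dots, d\}.$$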
no code implementations • 30 Sep 2020 • Rong Ge, Holden Lee, Jianfeng Lu, Andrej Risteski
We give an algorithm for exact sampling from the Bingham distribution $p(x)\propto \exp(x^\top A x)$ on the sphere $\mathcal S^{d-1}$ with expected runtime of $\operatorname{poly}(d, \lambda_{\max}(A)-\lambda_{\min}(A))$.
1 code implementation • 21 Jul 2020 • Kaitao Song, Xu Tan, Jianfeng Lu
Neural machine translation (NMT) generates the next target token conditioned on the previous ground-truth target tokens during training but on the previously generated tokens during inference; this discrepancy between training and inference causes error propagation and hurts translation accuracy.
1 code implementation • 7 Jun 2020 • Zhenbo Song, Jianfeng Lu, Tong Zhang, Hongdong Li
In this paper, we propose a monocular camera-based inter-vehicle distance and relative velocity estimation method based on end-to-end training of a deep neural network.
no code implementations • 27 Apr 2020 • Kaitao Song, Hao Sun, Xu Tan, Tao Qin, Jianfeng Lu, Hongzhi Liu, Tie-Yan Liu
While pre-training and fine-tuning, e.g., BERT~\citep{devlin2018bert} and GPT-2~\citep{radford2019language}, have achieved great success in language understanding and generation tasks, the pre-trained models are usually too big for online deployment in terms of both memory cost and inference speed, which hinders their practical online usage.
6 code implementations • NeurIPS 2020 • Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu
Since BERT neglects dependency among predicted tokens, XLNet introduces permuted language modeling (PLM) for pre-training to address this problem.
no code implementations • NeurIPS 2020 • Yulong Lu, Jianfeng Lu
In particular, the size of neural network can grow exponentially in $d$ when $1$-Wasserstein distance is used as the discrepancy, whereas for both MMD and KSD the size of neural network only depends on $d$ at most polynomially.
no code implementations • 11 Mar 2020 • Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying
Specifically, we propose a new continuum limit of deep residual networks, which enjoys a good landscape in the sense that every local minimizer is global.
no code implementations • ICLR Workshop DeepDiffEq 2019 • Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying
Specifically, we propose a new continuum limit of deep residual networks, which enjoys a good landscape in the sense that every local minimizer is global.
no code implementations • 7 Feb 2020 • Jiequn Han, Jianfeng Lu, Mo Zhou
We propose a new method to solve eigenvalue problems for linear and semilinear second order differential operators in high dimensions based on deep neural networks.
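The paper's method is based on a diffusion/semigroup reformulation; as a hedged, generic alternative that conveys the idea of a neural eigensolver, one can minimize the Rayleigh quotient of $H = -\tfrac{1}{2}\Delta + V$ over a network ansatz (the potential, domain truncation, and architecture below are assumptions):

```python
import torch
import torch.nn as nn

d, L = 2, 6.0
V = lambda x: 0.5 * (x ** 2).sum(dim=1, keepdim=True)   # harmonic potential
net = nn.Sequential(nn.Linear(d, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(3000):
    x = (2 * torch.rand(4096, d) - 1) * L      # uniform box samples
    x.requires_grad_(True)
    u = net(x)
    gu = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    # Rayleigh quotient  (½∫|∇u|² + ∫V u²) / ∫u²  via Monte Carlo
    num = (0.5 * (gu ** 2).sum(1, keepdim=True) + V(x) * u ** 2).mean()
    den = (u ** 2).mean()
    loss = num / den
    opt.zero_grad(); loss.backward(); opt.step()
```

For this harmonic potential the quotient should approach the ground energy $d/2$.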
no code implementations • 9 Jan 2020 • Jianfeng Lu, Zuowei Shen, Haizhao Yang, Shijun Zhang
This paper establishes the (nearly) optimal approximation error characterization of deep rectified linear unit (ReLU) networks for smooth functions in terms of both width and depth simultaneously.
no code implementations • 11 Nov 2019 • Ya Sun, Minxian Li, Jianfeng Lu
We can easily measure the similarity of two vehicle images by computing the Euclidean distance of the features from the FC layer.
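Concretely, ranking a gallery by feature distance is a few lines (a hedged sketch; the feature extractor and any normalization are assumed, not taken from the paper):

```python
import numpy as np

def rank_gallery(query_feat, gallery_feats):
    # query_feat: (D,) FC-layer embedding; gallery_feats: (N, D)
    dists = np.linalg.norm(gallery_feats - query_feat[None, :], axis=1)
    return np.argsort(dists)   # gallery indices, most similar first
```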
no code implementations • 8 Nov 2019 • Rong Ge, Holden Lee, Jianfeng Lu
Estimating the normalizing constant of an unnormalized probability distribution has important applications in computer science, statistical physics, machine learning, and statistics.
no code implementations • 25 Sep 2019 • Andrea Agazzi, Jianfeng Lu
We then give examples of such convergence results in the case of models that diverge if trained with non-lazy TD learning, and in the case of neural networks.
no code implementations • 27 May 2019 • Andrea Agazzi, Jianfeng Lu
We finally give examples of our convergence results in the case of models that diverge if trained with non-lazy TD learning, and in the case of neural networks.
no code implementations • 23 May 2019 • Yulong Lu, Jianfeng Lu, James Nolen
A fundamental problem in Bayesian inference and statistical machine learning is to efficiently sample from multimodal distributions.
6 code implementations • 7 May 2019 • Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu
Pre-training and fine-tuning, e.g., BERT, have achieved great success in language understanding by transferring knowledge from a rich-resource pre-training task to low/zero-resource downstream tasks.
no code implementations • 7 May 2019 • Yingzhou Li, Jianfeng Lu, Anqi Mao
A novel solve-training framework is proposed to train neural networks to represent low-dimensional solution maps of physical models.
no code implementations • 18 Mar 2019 • Ping Yu, Kaitao Song, Jianfeng Lu
Recently, deep neural networks have made significant progress and found successful application in various fields, but they have been shown to be vulnerable to attack instances, e.g., adversarial examples.
1 code implementation • 12 Feb 2019 • Zhe Wang, Yingzhou Li, Jianfeng Lu
We develop an efficient algorithm, coordinate descent FCI (CDFCI), for the electronic structure ground state calculation in the configuration interaction framework.
Chemical Physics • Computational Physics
no code implementations • 9 Feb 2019 • Lei Li, Yingzhou Li, Jian-Guo Liu, Zibu Liu, Jianfeng Lu
We propose in this work RBM-SVGD, a stochastic version of the Stein Variational Gradient Descent (SVGD) method for efficiently sampling from a given probability measure, which is thus useful for Bayesian inference.
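A hedged sketch of the random-batch idea applied to the SVGD update follows; the RBF kernel, batch size, and step size are assumptions, not the paper's exact scheme:

```python
import numpy as np

def rbm_svgd_step(x, grad_logp, batch=16, eps=0.05, h=1.0):
    """One mini-batch SVGD step: the particle interaction sum is
    estimated on a random batch of particles."""
    n, d = x.shape
    idx = np.random.choice(n, size=batch, replace=False)
    xb = x[idx]                                    # random batch of particles
    diff = x[:, None, :] - xb[None, :, :]          # (n, batch, d)
    k = np.exp(-(diff ** 2).sum(-1) / (2 * h))     # RBF kernel, (n, batch)
    # Attractive term: kernel-weighted scores; repulsive term: kernel gradients.
    phi = (k[:, :, None] * grad_logp(xb)[None, :, :]).mean(1) \
        + (k[:, :, None] * diff / h).mean(1)
    return x + eps * phi
```

For example, `grad_logp = lambda x: -x` targets a standard Gaussian; the batch estimate cuts the $O(n^2)$ interaction cost to $O(n \cdot \text{batch})$ per step.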
no code implementations • journal 2019 • Zar Nawab Khan Swati, Qinghua Zhao, Muhammad Kabir, Farman Ali, Ali Zakir, Saeed Ahmad, Jianfeng Lu
It is necessary to design a feature extraction framework to reduce this gap without using handcrafted features by encoding/combining low-level and high-level features.
no code implementations • 15 Dec 2018 • Lian-Tao Wang, Qingwu Li, Jianfeng Lu
In this paper, we propose a voting scheme involving not only the definite negative instances but also the ambiguous positive instances to make use of the extra useful information in the weakly labelled positive bags.
no code implementations • 1 Nov 2018 • Kaitao Song, Xu Tan, Furong Peng, Jianfeng Lu
The encoder-decoder is the typical framework for Neural Machine Translation (NMT), and different structures have been developed for improving the translation performance.
no code implementations • ECCV 2018 • Jun-Jie Zhang, Qi Wu, Chunhua Shen, Jian Zhang, Jianfeng Lu, Anton Van Den Hengel
Despite significant progress in a variety of vision-and-language problems, developing a method capable of asking intelligent, goal-oriented questions about images has proven to be a formidable challenge.
1 code implementation • ECCV 2018 • Xiaofeng Han, Chuong Nguyen, ShaoDi You, Jianfeng Lu
Water bodies, such as puddles and flooded areas, on and off road pose significant risks to autonomous cars.
1 code implementation • COLING 2018 • Kaitao Song, Xu Tan, Di He, Jianfeng Lu, Tao Qin, Tie-Yan Liu
In this work we propose Double Path Networks for Sequence to Sequence learning (DPN-S2S), which leverage the advantages of both models by using double path information fusion.
no code implementations • 21 May 2018 • Jing An, Jianfeng Lu, Lexing Ying
The resulting SME of Langevin type extracts more information about the ASGD dynamics and elucidates the relationship between different types of stochastic gradient algorithms.
1 code implementation • 18 May 2018 • Yingzhou Li, Xiuyuan Cheng, Jianfeng Lu
Theoretical analysis of the approximation power of Butterfly-Net to the Fourier representation of input data shows that the error decays exponentially as the depth increases.
no code implementations • ICLR 2019 • Wei Zhu, Qiang Qiu, Bao Wang, Jianfeng Lu, Guillermo Sapiro, Ingrid Daubechies
Deep neural networks (DNNs) typically have enough capacity to fit random data by brute force even when conventional data-dependent regularizations focusing on the geometry of the features are imposed.
no code implementations • 10 May 2018 • Jianfeng Lu, Yulong Lu, James Nolen
We study an interacting particle system in $\mathbf{R}^d$ motivated by Stein variational gradient descent [Q. Liu and D. Wang, NIPS 2016], a deterministic algorithm for sampling from a given probability density with unknown normalization.
no code implementations • 28 Feb 2018 • Yuehaw Khoo, Jianfeng Lu, Lexing Ying
In this note we propose a method based on artificial neural networks to study the transition between states governed by stochastic processes.
no code implementations • 21 Nov 2017 • Jun-Jie Zhang, Qi Wu, Chunhua Shen, Jian Zhang, Jianfeng Lu, Anton Van Den Hengel
Despite significant progress in a variety of vision-and-language problems, developing a method capable of asking intelligent, goal-oriented questions about images has proven to be a formidable challenge.
no code implementations • 19 Nov 2017 • Jun-Jie Zhang, Qi Wu, Jian Zhang, Chunhua Shen, Jianfeng Lu
These comments can be a description of the image, or of some objects, attributes, or scenes in it, and are normally used as user-provided tags.
1 code implementation • 11 Jul 2017 • Yuehaw Khoo, Jianfeng Lu, Lexing Ying
The representability of such a quantity using a neural network can be justified by viewing the neural network as performing time evolution to find the solutions to the PDE.
Numerical Analysis (65Nxx)
1 code implementation • 23 May 2017 • Akihiko Nishimura, David Dunson, Jianfeng Lu
Hamiltonian Monte Carlo has emerged as a standard tool for posterior computation.
Computation
no code implementations • 4 Dec 2016 • Jun-Jie Zhang, Qi Wu, Chunhua Shen, Jian Zhang, Jianfeng Lu
Recent state-of-the-art approaches to multi-label image classification exploit the label dependencies in an image, at global level, largely improving the labeling capacity.
1 code implementation • 8 Aug 2014 • Jianfeng Lu, Christian B. Mendl
We develop an efficient algorithm for a spatially inhomogeneous matrix-valued quantum Boltzmann equation derived from the Hubbard model.
Computational Physics • Mesoscale and Nanoscale Physics