Search Results for author: Jianfeng Lu

Found 92 papers, 17 papers with code

Numerical scheme for a spatially inhomogeneous matrix-valued quantum Boltzmann equation

1 code implementation 8 Aug 2014 Jianfeng Lu, Christian B. Mendl

We develop an efficient algorithm for a spatially inhomogeneous matrix-valued quantum Boltzmann equation derived from the Hubbard model.

Computational Physics Mesoscale and Nanoscale Physics

Multi-Label Image Classification with Regional Latent Semantic Dependencies

no code implementations 4 Dec 2016 Jun-Jie Zhang, Qi Wu, Chunhua Shen, Jian Zhang, Jianfeng Lu

Recent state-of-the-art approaches to multi-label image classification exploit label dependencies in an image at the global level, largely improving the labeling capacity.

Classification General Classification +1

Discontinuous Hamiltonian Monte Carlo for discrete parameters and discontinuous likelihoods

1 code implementation 23 May 2017 Akihiko Nishimura, David Dunson, Jianfeng Lu

Hamiltonian Monte Carlo has emerged as a standard tool for posterior computation.

Computation
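For context on the baseline this paper extends, a minimal standard HMC sketch (leapfrog integrator plus Metropolis correction) for a smooth Gaussian target might look as follows; this is not the paper's discontinuous variant, and the step size and trajectory length are illustrative choices:

```python
import numpy as np

def leapfrog(q, p, grad_log_p, step, n_steps):
    """Leapfrog integration of Hamiltonian dynamics (volume-preserving, reversible)."""
    p = p + 0.5 * step * grad_log_p(q)
    for _ in range(n_steps - 1):
        q = q + step * p
        p = p + step * grad_log_p(q)
    q = q + step * p
    p = p + 0.5 * step * grad_log_p(q)
    return q, p

def hmc(log_p, grad_log_p, q0, n_samples=2000, step=0.2, n_steps=10, seed=0):
    rng = np.random.default_rng(seed)
    q = np.asarray(q0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(q.shape)          # resample momentum each iteration
        q_new, p_new = leapfrog(q, p, grad_log_p, step, n_steps)
        # Metropolis correction on the Hamiltonian H = -log p(q) + |p|^2 / 2
        h_old = -log_p(q) + 0.5 * p @ p
        h_new = -log_p(q_new) + 0.5 * p_new @ p_new
        if rng.random() < np.exp(min(0.0, h_old - h_new)):
            q = q_new
        samples.append(q)
    return np.array(samples)

# Standard normal target: log p(q) = -|q|^2 / 2 up to a constant
samples = hmc(lambda q: -0.5 * q @ q, lambda q: -q, q0=np.zeros(1))
```

The discontinuous HMC of the paper replaces the leapfrog step for discrete/discontinuous coordinates; the smooth-target skeleton above is only the starting point.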

Solving parametric PDE problems with artificial neural networks

1 code implementation 11 Jul 2017 Yuehaw Khoo, Jianfeng Lu, Lexing Ying

The representability of such a quantity using a neural network can be justified by viewing the neural network as performing time evolution to find the solutions to the PDE.

Numerical Analysis 65Nxx

Kill Two Birds with One Stone: Weakly-Supervised Neural Network for Image Annotation and Tag Refinement

no code implementations 19 Nov 2017 Jun-Jie Zhang, Qi Wu, Jian Zhang, Chunhua Shen, Jianfeng Lu

These comments can be a description of the image, or of objects, attributes, and scenes in it, and are normally used as user-provided tags.

Retrieval

Asking the Difficult Questions: Goal-Oriented Visual Question Generation via Intermediate Rewards

no code implementations 21 Nov 2017 Jun-Jie Zhang, Qi Wu, Chunhua Shen, Jian Zhang, Jianfeng Lu, Anton Van Den Hengel

Despite significant progress in a variety of vision-and-language problems, developing a method capable of asking intelligent, goal-oriented questions about images has proven to be a formidable challenge.

Informativeness Question Generation +2

Solving for high dimensional committor functions using artificial neural networks

no code implementations 28 Feb 2018 Yuehaw Khoo, Jianfeng Lu, Lexing Ying

In this note we propose a method based on artificial neural networks to study the transition between states governed by stochastic processes.

Scaling limit of the Stein variational gradient descent: the mean field regime

no code implementations 10 May 2018 Jianfeng Lu, Yulong Lu, James Nolen

We study an interacting particle system in $\mathbf{R}^d$ motivated by Stein variational gradient descent [Q. Liu and D. Wang, NIPS 2016], a deterministic algorithm for sampling from a given probability density with unknown normalization.
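The SVGD update that drives this interacting particle system can be sketched as follows, for an RBF kernel and a standard-normal target; the bandwidth, step size, and particle count are illustrative choices, not values from the paper:

```python
import numpy as np

def svgd_update(x, grad_log_p, bandwidth=1.0, step=0.1):
    """One Stein variational gradient descent step with an RBF kernel:
    phi(x_i) = (1/n) sum_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]."""
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]                  # diff[i, j] = x_i - x_j
    k = np.exp(-(diff ** 2).sum(-1) / (2 * bandwidth ** 2))
    drive = k @ grad_log_p(x) / n                         # kernel-weighted drift toward high density
    repulse = (k[:, :, None] * diff).sum(axis=1) / (n * bandwidth ** 2)  # keeps particles spread out
    return x + step * (drive + repulse)

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=0.5, size=(50, 1))          # particles start far from the target
for _ in range(500):
    x = svgd_update(x, lambda z: -z)                      # target N(0, 1): grad log p(z) = -z
```

The mean-field limit studied in the paper describes the behavior of exactly this update as the number of particles grows.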

Stop memorizing: A data-dependent regularization framework for intrinsic pattern learning

no code implementations ICLR 2019 Wei Zhu, Qiang Qiu, Bao Wang, Jianfeng Lu, Guillermo Sapiro, Ingrid Daubechies

Deep neural networks (DNNs) typically have enough capacity to fit random data by brute force even when conventional data-dependent regularizations focusing on the geometry of the features are imposed.

Butterfly-Net: Optimal Function Representation Based on Convolutional Neural Networks

1 code implementation 18 May 2018 Yingzhou Li, Xiuyuan Cheng, Jianfeng Lu

Theoretical analysis of the approximation power of Butterfly-Net to the Fourier representation of input data shows that the error decays exponentially as the depth increases.

Stochastic modified equations for the asynchronous stochastic gradient descent

no code implementations 21 May 2018 Jing An, Jianfeng Lu, Lexing Ying

The resulting SME of Langevin type extracts more information about the ASGD dynamics and elucidates the relationship between different types of stochastic gradient algorithms.

Double Path Networks for Sequence to Sequence Learning

1 code implementation COLING 2018 Kaitao Song, Xu Tan, Di He, Jianfeng Lu, Tao Qin, Tie-Yan Liu

In this work we propose Double Path Networks for Sequence to Sequence learning (DPN-S2S), which leverage the advantages of both models by using double path information fusion.

Single Image Water Hazard Detection using FCN with Reflection Attention Units

1 code implementation ECCV 2018 Xiaofeng Han, Chuong Nguyen, ShaoDi You, Jianfeng Lu

Water bodies, such as puddles and flooded areas, on and off road pose significant risks to autonomous cars.

Goal-Oriented Visual Question Generation via Intermediate Rewards

no code implementations ECCV 2018 Jun-Jie Zhang, Qi Wu, Chunhua Shen, Jian Zhang, Jianfeng Lu, Anton Van Den Hengel

Despite significant progress in a variety of vision-and-language problems, developing a method capable of asking intelligent, goal-oriented questions about images has proven to be a formidable challenge.

Informativeness Question Generation +2

Hybrid Self-Attention Network for Machine Translation

no code implementations 1 Nov 2018 Kaitao Song, Xu Tan, Furong Peng, Jianfeng Lu

The encoder-decoder is the typical framework for Neural Machine Translation (NMT), and different structures have been developed for improving the translation performance.

Machine Translation NMT +1

Weakly supervised segment annotation via expectation kernel density estimation

no code implementations 15 Dec 2018 Lian-Tao Wang, Qingwu Li, Jianfeng Lu

In this paper, we propose a voting scheme involving not only the definite negative instances but also the ambiguous positive instances to make use of the extra useful information in the weakly labelled positive bags.

Density Estimation

Content-Based Brain Tumor Retrieval for MR Images Using Transfer Learning

no code implementations journal 2019 Zar Nawab Khan Swati, Qinghua Zhao, Muhammad Kabir, Farman Ali, Ali Zakir, Saeed Ahmad, Jianfeng Lu

It is necessary to design a feature extraction framework to reduce this gap without using handcrafted features by encoding/combining low-level and high-level features.

Content-Based Image Retrieval Metric Learning +3

A stochastic version of Stein Variational Gradient Descent for efficient sampling

no code implementations 9 Feb 2019 Lei Li, Yingzhou Li, Jian-Guo Liu, Zibu Liu, Jianfeng Lu

We propose in this work RBM-SVGD, a stochastic version of Stein Variational Gradient Descent (SVGD) method for efficiently sampling from a given probability measure and thus useful for Bayesian inference.

Bayesian Inference

Coordinate descent full configuration interaction

1 code implementation 12 Feb 2019 Zhe Wang, Yingzhou Li, Jianfeng Lu

We develop an efficient algorithm, coordinate descent FCI (CDFCI), for the electronic structure ground state calculation in the configuration interaction framework.

Chemical Physics Computational Physics

Generating Adversarial Examples With Conditional Generative Adversarial Net

no code implementations 18 Mar 2019 Ping Yu, Kaitao Song, Jianfeng Lu

Recently, deep neural networks have made significant progress and been successfully applied in various fields, but they have been found vulnerable to attack instances, e.g., adversarial examples.

Generative Adversarial Network

Variational training of neural network approximations of solution maps for physical models

no code implementations 7 May 2019 Yingzhou Li, Jianfeng Lu, Anqi Mao

A novel solve-training framework is proposed to train neural networks to represent low-dimensional solution maps of physical models.

MASS: Masked Sequence to Sequence Pre-training for Language Generation

7 code implementations 7 May 2019 Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu

Pre-training and fine-tuning, e.g., BERT, have achieved great success in language understanding by transferring knowledge from a rich-resource pre-training task to low/zero-resource downstream tasks.

Conversational Response Generation Response Generation +5

Accelerating Langevin Sampling with Birth-death

no code implementations 23 May 2019 Yulong Lu, Jianfeng Lu, James Nolen

A fundamental problem in Bayesian inference and statistical machine learning is to efficiently sample from multimodal distributions.

Bayesian Inference
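As a point of comparison for the birth-death-accelerated dynamics, plain (unaccelerated) unadjusted Langevin sampling can be sketched as below; the target, step size, and chain length are illustrative, and this baseline is exactly what suffers on multimodal targets:

```python
import numpy as np

def ula(grad_log_p, x0, step=0.05, n_steps=5000, seed=0):
    """Unadjusted Langevin algorithm: x <- x + step * grad log p(x) + sqrt(2*step) * noise."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    traj = np.empty((n_steps,) + x.shape)
    for t in range(n_steps):
        x = x + step * grad_log_p(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
        traj[t] = x
    return traj

# Standard normal target: grad log p(x) = -x
traj = ula(lambda x: -x, x0=np.zeros(1))
```

The paper augments such Langevin dynamics with a birth-death mechanism that teleports mass between modes, which pure diffusion explores only slowly.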

Temporal-difference learning with nonlinear function approximation: lazy training and mean field regimes

no code implementations 27 May 2019 Andrea Agazzi, Jianfeng Lu

We finally give examples of our convergence results in the case of models that diverge if trained with non-lazy TD learning, and in the case of neural networks.

Temporal-difference learning for nonlinear value function approximation in the lazy training regime

no code implementations 25 Sep 2019 Andrea Agazzi, Jianfeng Lu

We then give examples of such convergence results in the case of models that diverge if trained with non-lazy TD learning, and in the case of neural networks.

Estimating Normalizing Constants for Log-Concave Distributions: Algorithms and Lower Bounds

no code implementations 8 Nov 2019 Rong Ge, Holden Lee, Jianfeng Lu

Estimating the normalizing constant of an unnormalized probability distribution has important applications in computer science, statistical physics, machine learning, and statistics.

Part-based Multi-stream Model for Vehicle Searching

no code implementations 11 Nov 2019 Ya Sun, Minxian Li, Jianfeng Lu

We can easily measure the similarity of two vehicle images by computing the Euclidean distance of the features from the FC layer.

Metric Learning Retrieval
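The Euclidean-distance comparison described above can be vectorized with the usual norm expansion; the feature vectors below are made-up stand-ins for FC-layer embeddings, not data from the paper:

```python
import numpy as np

def pairwise_dist(feats):
    """All-pairs Euclidean distances between rows of an (n, d) feature matrix,
    via the expansion ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b."""
    sq = (feats ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * feats @ feats.T
    return np.sqrt(np.maximum(d2, 0.0))   # clip tiny negatives from rounding

feats = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]])
D = pairwise_dist(feats)                  # D[0, 1] == 5.0, D[0, 2] == 10.0
```

Ranking retrieval candidates then amounts to sorting each row of `D`.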

Deep Network Approximation for Smooth Functions

no code implementations 9 Jan 2020 Jianfeng Lu, Zuowei Shen, Haizhao Yang, Shijun Zhang

This paper establishes the (nearly) optimal approximation error characterization of deep rectified linear unit (ReLU) networks for smooth functions in terms of both width and depth simultaneously.

Solving high-dimensional eigenvalue problems using deep neural networks: A diffusion Monte Carlo like approach

no code implementations 7 Feb 2020 Jiequn Han, Jianfeng Lu, Mo Zhou

We propose a new method to solve eigenvalue problems for linear and semilinear second order differential operators in high dimensions based on deep neural networks.

A Mean-field Analysis of Deep ResNet and Beyond: Towards Provable Optimization Via Overparameterization From Depth

no code implementations ICLR Workshop DeepDiffEq 2019 Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying

Specifically, we propose a new continuum limit of deep residual networks, which enjoys a good landscape in the sense that every local minimizer is global.

A Mean-field Analysis of Deep ResNet and Beyond: Towards Provable Optimization Via Overparameterization From Depth

no code implementations 11 Mar 2020 Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying

Specifically, we propose a new continuum limit of deep residual networks, which enjoys a good landscape in the sense that every local minimizer is global.

A Universal Approximation Theorem of Deep Neural Networks for Expressing Probability Distributions

no code implementations NeurIPS 2020 Yulong Lu, Jianfeng Lu

In particular, the size of neural network can grow exponentially in $d$ when $1$-Wasserstein distance is used as the discrepancy, whereas for both MMD and KSD the size of neural network only depends on $d$ at most polynomially.

MPNet: Masked and Permuted Pre-training for Language Understanding

6 code implementations NeurIPS 2020 Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu

Since BERT neglects dependency among predicted tokens, XLNet introduces permuted language modeling (PLM) for pre-training to address this problem.

Ranked #16 on Only Connect Walls Dataset Task 1 (Grouping) on OCW (using extra training data)

Language Modelling Masked Language Modeling +3

LightPAFF: A Two-Stage Distillation Framework for Pre-training and Fine-tuning

no code implementations 27 Apr 2020 Kaitao Song, Hao Sun, Xu Tan, Tao Qin, Jianfeng Lu, Hongzhi Liu, Tie-Yan Liu

While pre-training and fine-tuning, e.g., BERT and GPT-2, have achieved great success in language understanding and generation tasks, the pre-trained models are usually too big for online deployment in terms of both memory cost and inference speed, which hinders their practical online usage.

Knowledge Distillation Language Modelling

End-to-end Learning for Inter-Vehicle Distance and Relative Velocity Estimation in ADAS with a Monocular Camera

1 code implementation 7 Jun 2020 Zhenbo Song, Jianfeng Lu, Tong Zhang, Hongdong Li

In this paper, we propose a monocular camera-based inter-vehicle distance and relative velocity estimation method based on end-to-end training of a deep neural network.

Optical Flow Estimation

Neural Machine Translation with Error Correction

1 code implementation 21 Jul 2020 Kaitao Song, Xu Tan, Jianfeng Lu

Neural machine translation (NMT) generates the next target token conditioned on the previous ground-truth target tokens during training but on the previously generated target tokens during inference, which causes a discrepancy between training and inference as well as error propagation, and affects the translation accuracy.

Machine Translation NMT +1

Efficient sampling from the Bingham distribution

no code implementations 30 Sep 2020 Rong Ge, Holden Lee, Jianfeng Lu, Andrej Risteski

We give an algorithm for exact sampling from the Bingham distribution $p(x)\propto \exp(x^\top A x)$ on the sphere $\mathcal S^{d-1}$ with expected runtime of $\operatorname{poly}(d, \lambda_{\max}(A)-\lambda_{\min}(A))$.
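The paper's exact sampler is more involved; purely as an illustrative baseline, a Metropolis sampler for the same density with uniform-sphere independence proposals could be sketched as below (the matrix `A` is a made-up example, and this chain has no runtime guarantee):

```python
import numpy as np

def bingham_metropolis(A, n_samples=20000, seed=0):
    """Metropolis sampler for p(x) ∝ exp(x^T A x) on the unit sphere S^{d-1},
    using uniform-sphere independence proposals (a baseline sketch only)."""
    rng = np.random.default_rng(seed)
    d = A.shape[0]
    x = rng.standard_normal(d)
    x /= np.linalg.norm(x)
    out = np.empty((n_samples, d))
    for t in range(n_samples):
        y = rng.standard_normal(d)
        y /= np.linalg.norm(y)                    # uniform proposal on the sphere
        # independence proposal is uniform, so the ratio reduces to p(y) / p(x)
        if np.log(rng.random()) < y @ A @ y - x @ A @ x:
            x = y
        out[t] = x
    return out

A = np.diag([2.0, 0.0, -2.0])   # density concentrates near the ±x1 axis
samples = bingham_metropolis(A)
```

The acceptance rate of such a naive chain degrades as the eigenvalue gap of $A$ grows, which is exactly the regime the paper's polynomial-time exact sampler handles.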

Random Coordinate Langevin Monte Carlo

no code implementations 3 Oct 2020 Zhiyan Ding, Qin Li, Jianfeng Lu, Stephen J. Wright

We investigate the total complexity of RC-LMC and compare it with the classical LMC for log-concave probability distributions.

Random Coordinate Underdamped Langevin Monte Carlo

no code implementations 22 Oct 2020 Zhiyan Ding, Qin Li, Jianfeng Lu, Stephen J. Wright

We investigate the computational complexity of RC-ULMC and compare it with the classical ULMC for strongly log-concave probability distributions.

Global optimality of softmax policy gradient with single hidden layer neural networks in the mean-field regime

no code implementations ICLR 2021 Andrea Agazzi, Jianfeng Lu

We study the problem of policy optimization for infinite-horizon discounted Markov Decision Processes with softmax policy and nonlinear function approximation trained with policy gradient algorithms.

SMDS-Net: Model Guided Spectral-Spatial Network for Hyperspectral Image Denoising

no code implementations 3 Dec 2020 Fengchao Xiong, Shuyin Tao, Jun Zhou, Jianfeng Lu, Jiantao Zhou, Yuntao Qian

This model first projects the observed HSIs into a low-dimensional orthogonal subspace, and then represents the projected image with a multidimensional dictionary.

Hyperspectral Image Denoising Image Denoising

Neural Collapse with Cross-Entropy Loss

no code implementations 15 Dec 2020 Jianfeng Lu, Stefan Steinerberger

We consider the variational problem of cross-entropy loss with $n$ feature vectors on a unit hypersphere in $\mathbb{R}^d$.

Complexity of zigzag sampling algorithm for strongly log-concave distributions

no code implementations 21 Dec 2020 Jianfeng Lu, Lihan Wang

We study the computational complexity of zigzag sampling algorithm for strongly log-concave distributions.
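For the simplest case, the 1D standard normal, the zigzag process can be simulated exactly because the integrated switching rate inverts in closed form; this sketch (an assumed toy instance, not the paper's analysis setting) estimates moments by time-averaging along the piecewise-linear path:

```python
import numpy as np

def zigzag_gaussian(n_events=20000, seed=0):
    """Zigzag sampler for the 1D standard normal: ballistic motion with velocity
    v = ±1 and switching rate lambda(t) = max(0, v * (x + v t)), with event times
    drawn exactly by inverting the integrated rate against an Exp(1) variable."""
    rng = np.random.default_rng(seed)
    x, v = 0.0, 1.0
    m1 = m2 = total_time = 0.0
    for _ in range(n_events):
        e = rng.exponential()
        a = v * x
        # closed-form inversion of int_0^tau max(0, a + t) dt = e
        tau = (-a + np.sqrt(a * a + 2 * e)) if a >= 0 else (-a + np.sqrt(2 * e))
        x_new = x + v * tau
        # path integrals of x and x^2 along the linear segment x(t) = x + v t
        m1 += (x_new ** 2 - x ** 2) / (2 * v)
        m2 += (x_new ** 3 - x ** 3) / (3 * v)
        total_time += tau
        x, v = x_new, -v                          # flip velocity at the event
    return m1 / total_time, m2 / total_time

mean_est, second_moment_est = zigzag_gaussian()
```

Time-averaging along the continuous trajectory, rather than only at event points, is what makes the estimators consistent for the target distribution.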

A Priori Generalization Analysis of the Deep Ritz Method for Solving High Dimensional Elliptic Equations

no code implementations 5 Jan 2021 Jianfeng Lu, Yulong Lu, Min Wang

This paper concerns the a priori generalization analysis of the Deep Ritz Method (DRM) [W. E and B. Yu, 2017], a popular neural-network-based method for solving high dimensional partial differential equations.

Algebraic localization implies exponential localization in non-periodic insulators

no code implementations 7 Jan 2021 Jianfeng Lu, Kevin D. Stubbs

In two and three spatial dimensions, it is well understood for periodic insulators that exponentially-localized Wannier functions exist if and only if there exists an orthonormal basis for the Fermi projection with finite second moment (i.e., all basis elements satisfy $\int |\boldsymbol{x}|^2 |w(\boldsymbol{x})|^2 \,\text{d}{\boldsymbol{x}} < \infty$).

Mathematical Physics Mesoscale and Nanoscale Physics

A Grid-free Approach for Simulating Sweep and Cyclic Voltammetry

1 code implementation 9 Feb 2021 Alec J. Coffman, Jianfeng Lu, Joseph E. Subotnik

We present a new computational approach to simulate linear sweep and cyclic voltammetry experiments that does not require a discretized grid in space to quantify diffusion.

Chemical Physics

Incorporating Orientations into End-to-end Driving Model for Steering Control

no code implementations 10 Mar 2021 Peng Wan, Zhenbo Song, Jianfeng Lu

In this paper, we present a novel end-to-end deep neural network model for autonomous driving that takes monocular image sequence as input, and directly generates the steering control angle.

Autonomous Driving Steering Control

A Priori Generalization Error Analysis of Two-Layer Neural Networks for Solving High Dimensional Schrödinger Eigenvalue Problems

no code implementations 4 May 2021 Jianfeng Lu, Yulong Lu

We prove that the convergence rate of the generalization error is independent of the dimension $d$, under the a priori assumption that the ground state lies in a spectral Barron space.

On the Representation of Solutions to Elliptic PDEs in Barron Spaces

no code implementations NeurIPS 2021 Ziang Chen, Jianfeng Lu, Yulong Lu

Numerical solutions to high-dimensional partial differential equations (PDEs) based on neural networks have seen exciting developments.

Statistical Numerical PDE: Fast Rate, Neural Scaling Law and When it’s Optimal

no code implementations NeurIPS Workshop DLDE 2021 Yiping Lu, Haoxuan Chen, Jianfeng Lu, Lexing Ying, Jose Blanchet

In this paper, we study the statistical limits of deep learning techniques for solving elliptic partial differential equations (PDEs) from random samples using the Deep Ritz Method (DRM) and Physics-Informed Neural Networks (PINNs).

Machine Learning For Elliptic PDEs: Fast Rate Generalization Bound, Neural Scaling Law and Minimax Optimality

no code implementations ICLR 2022 Yiping Lu, Haoxuan Chen, Jianfeng Lu, Lexing Ying, Jose Blanchet

In this paper, we study the statistical limits of deep learning techniques for solving elliptic partial differential equations (PDEs) from random samples using the Deep Ritz Method (DRM) and Physics-Informed Neural Networks (PINNs).

A Regularity Theory for Static Schrödinger Equations on $\mathbb{R}^d$ in Spectral Barron Spaces

no code implementations 25 Jan 2022 Ziang Chen, Jianfeng Lu, Yulong Lu, Shengxuan Zhou

Spectral Barron spaces have received considerable interest recently as it is the natural function space for approximation theory of two-layer neural networks with a dimension-free convergence rate.

Single Time-scale Actor-critic Method to Solve the Linear Quadratic Regulator with Convergence Guarantees

no code implementations 31 Jan 2022 Mo Zhou, Jianfeng Lu

We propose a single time-scale actor-critic algorithm to solve the linear quadratic regulator (LQR) problem.

Bilevel Optimization

Convergence for score-based generative modeling with polynomial complexity

no code implementations 13 Jun 2022 Holden Lee, Jianfeng Lu, Yixin Tan

Using our guarantee, we give a theoretical analysis of score-based generative modeling, which transforms white-noise input into samples from a learned data distribution given score estimates at different noise scales.

Overlooked Poses Actually Make Sense: Distilling Privileged Knowledge for Human Motion Prediction

no code implementations 2 Aug 2022 Xiaoning Sun, Qiongjie Cui, Huaijiang Sun, Bin Li, Weiqing Li, Jianfeng Lu

Previous works on human motion prediction follow the pattern of building a mapping between the observed sequence and the one to be predicted.

Human motion prediction motion prediction +3

MonoSIM: Simulating Learning Behaviors of Heterogeneous Point Cloud Object Detectors for Monocular 3D Object Detection

1 code implementation 19 Aug 2022 Han Sun, Zhaoxin Fan, Zhenbo Song, Zhicheng Wang, Kejian Wu, Jianfeng Lu

The insight behind MonoSIM is to simulate the feature learning behaviors of a point-cloud-based detector for the monocular detector during training.

Autonomous Driving Depth Estimation +4

A deep learning framework for geodesics under spherical Wasserstein-Fisher-Rao metric and its application for weighted sample generation

no code implementations 25 Aug 2022 Yang Jing, Jiaheng Chen, Lei Li, Jianfeng Lu

In this paper, we develop a deep learning framework to compute the geodesics under the spherical WFR metric, and the learned geodesics can be adopted to generate weighted samples.

Bayesian Inference

On Representing Linear Programs by Graph Neural Networks

1 code implementation 25 Sep 2022 Ziang Chen, Jialin Liu, Xinshang Wang, Jianfeng Lu, Wotao Yin

In particular, the graph neural network (GNN) is considered a suitable ML model for optimization problems whose variables and constraints are permutation-invariant, for example, the linear program (LP).

Convergence of score-based generative modeling for general data distributions

no code implementations 26 Sep 2022 Holden Lee, Jianfeng Lu, Yixin Tan

Score-based generative modeling (SGM) has grown to be a hugely successful method for learning to generate samples from complex data distributions such as that of images and audio.

Denoising

Human Joint Kinematics Diffusion-Refinement for Stochastic Motion Prediction

no code implementations 12 Oct 2022 Dong Wei, Huaijiang Sun, Bin Li, Jianfeng Lu, Weiqing Li, Xiaoning Sun, Shengxiang Hu

This process offers a natural way to obtain the "whitened" latents without any trainable parameters, and human motion prediction can be regarded as the reverse diffusion process that converts the noise distribution into realistic future motions conditioned on the observed sequence.

motion prediction Stochastic Human Motion Prediction

On Representing Mixed-Integer Linear Programs by Graph Neural Networks

1 code implementation 19 Oct 2022 Ziang Chen, Jialin Liu, Xinshang Wang, Jianfeng Lu, Wotao Yin

While Mixed-integer linear programming (MILP) is NP-hard in general, practical MILP has received a roughly 100-fold speedup in the past twenty years.

Neural Network Approximations of PDEs Beyond Linearity: A Representational Perspective

no code implementations 21 Oct 2022 Tanya Marwah, Zachary C. Lipton, Jianfeng Lu, Andrej Risteski

We show that if composing a function with Barron norm $b$ with partial derivatives of $L$ produces a function of Barron norm at most $B_L b^p$, the solution to the PDE can be $\epsilon$-approximated in the $L^2$ sense by a function with Barron norm $O\left(\left(dB_L\right)^{\max\{p \log(1/ \epsilon), p^{\log(1/\epsilon)}\}}\right)$.

Regularized Stein Variational Gradient Flow

no code implementations 15 Nov 2022 Ye He, Krishnakumar Balasubramanian, Bharath K. Sriperumbudur, Jianfeng Lu

In this work, we propose the Regularized Stein Variational Gradient Flow which interpolates between the Stein Variational Gradient Flow and the Wasserstein Gradient Flow.

A Structure-guided Effective and Temporal-lag Connectivity Network for Revealing Brain Disorder Mechanisms

no code implementations 1 Dec 2022 Zhengwang Xia, Tao Zhou, Saqib Mamoon, Amani Alfakih, Jianfeng Lu

Brain network provides important insights for the diagnosis of many brain disorders, and how to effectively model the brain structure has become one of the core issues in the domain of brain imaging analysis.

Test-time Personalizable Forecasting of 3D Human Poses

no code implementations ICCV 2023 Qiongjie Cui, Huaijiang Sun, Jianfeng Lu, Weiqing Li, Bin Li, Hongwei Yi, Haofan Wang

Current motion forecasting approaches typically train a deep end-to-end model from the source domain data, and then apply it directly to target subjects.

Motion Forecasting

On Enhancing Expressive Power via Compositions of Single Fixed-Size ReLU Network

no code implementations 29 Jan 2023 Shijun Zhang, Jianfeng Lu, Hongkai Zhao

This paper explores the expressive power of deep neural networks through the framework of function compositions.

Global Optimality of Elman-type RNN in the Mean-Field Regime

no code implementations 12 Mar 2023 Andrea Agazzi, Jianfeng Lu, Sayan Mukherjee

We analyze Elman-type Recurrent Neural Networks (RNNs) and their training in the mean-field regime.

Meta-Auxiliary Learning for Adaptive Human Pose Prediction

no code implementations 13 Apr 2023 Qiongjie Cui, Huaijiang Sun, Jianfeng Lu, Bin Li, Weiqing Li

Predicting high-fidelity future human poses, from a historically observed sequence, is decisive for intelligent robots to interact with humans.

Auxiliary Learning Pose Prediction +1

Convergence of stochastic gradient descent under a local Lojasiewicz condition for deep neural networks

no code implementations 18 Apr 2023 Jing An, Jianfeng Lu

We study the convergence of stochastic gradient descent (SGD) for non-convex objective functions.

Score-based Transport Modeling for Mean-Field Fokker-Planck Equations

no code implementations 21 Apr 2023 Jianfeng Lu, Yue Wu, Yang Xiang

We use the score-based transport modeling method to solve the mean-field Fokker-Planck equations, which we call MSBTM.

Accelerate Langevin Sampling with Birth-Death process and Exploration Component

no code implementations 6 May 2023 Lezhi Tan, Jianfeng Lu

Aiming at multimodality, we propose a new sampling method that takes advantage of both a birth-death process and an exploration component.

Enhanced Fine-grained Motion Diffusion for Text-driven Human Motion Synthesis

no code implementations 23 May 2023 Dong Wei, Xiaoning Sun, Huaijiang Sun, Bin Li, Shengxiang Hu, Weiqing Li, Jianfeng Lu

The emergence of text-driven motion synthesis technique provides animators with great potential to create efficiently.

Motion Synthesis

Deep Network Approximation: Beyond ReLU to Diverse Activation Functions

no code implementations 13 Jul 2023 Shijun Zhang, Jianfeng Lu, Hongkai Zhao

This paper explores the expressive power of deep neural networks for a diverse range of activation functions.

Riemannian Langevin Monte Carlo schemes for sampling PSD matrices with fixed rank

no code implementations 8 Sep 2023 Tianmin Yu, Shixin Zheng, Jianfeng Lu, Govind Menon, Xiangxiong Zhang

This paper introduces two explicit schemes to sample matrices from Gibbs distributions on $\mathcal S^{n, p}_+$, the manifold of real positive semi-definite (PSD) matrices of size $n\times n$ and rank $p$.

Diffusion Methods for Generating Transition Paths

no code implementations 19 Sep 2023 Luke Triplett, Jianfeng Lu

In this work, we seek to simulate rare transitions between metastable states using score-based generative models.

Learning Dense Flow Field for Highly-accurate Cross-view Camera Localization

no code implementations NeurIPS 2023 Zhenbo Song, Xianghui Ze, Jianfeng Lu, Yujiao Shi

We propose a novel end-to-end approach that leverages the learning of dense pixel-wise flow fields in pairs of ground and satellite images to calculate the camera pose.

Camera Localization Optical Flow Estimation

Solution for SMART-101 Challenge of ICCV Multi-modal Algorithmic Reasoning Task 2023

no code implementations 10 Oct 2023 Xiangyu Wu, Yang Yang, Shengdong Xu, Yifeng Wu, QingGuo Chen, Jianfeng Lu

At the data level, inspired by the challenge paper, we categorized all questions into eight types and utilized the llama-2-chat model to directly generate the type for each question in a zero-shot manner.

object-detection Object Detection +3

Convergence of flow-based generative models via proximal gradient descent in Wasserstein space

no code implementations 26 Oct 2023 Xiuyuan Cheng, Jianfeng Lu, Yixin Tan, Yao Xie

Flow-based generative models enjoy certain advantages in computing the data generation and the likelihood, and have recently shown competitive empirical performance.

Deep Equilibrium Based Neural Operators for Steady-State PDEs

no code implementations NeurIPS 2023 Tanya Marwah, Ashwini Pokle, J. Zico Kolter, Zachary C. Lipton, Jianfeng Lu, Andrej Risteski

Motivated by this observation, we propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE as the infinite-depth fixed point of an implicit operator layer using a black-box root solver, and differentiates analytically through this fixed point, resulting in $\mathcal{O}(1)$ training memory.

Learning Memory Kernels in Generalized Langevin Equations

no code implementations 18 Feb 2024 Quanjun Lang, Jianfeng Lu

We introduce a novel approach for learning memory kernels in Generalized Langevin Equations.

Regression

Adversarial Purification and Fine-tuning for Robust UDC Image Restoration

no code implementations 21 Feb 2024 Zhenbo Song, Zhenyuan Zhang, Kaihao Zhang, Wenhan Luo, Zhaoxin Fan, Jianfeng Lu

This study delves into the enhancement of Under-Display Camera (UDC) image restoration models, focusing on their robustness against adversarial attacks.

Image Restoration

AS-FIBA: Adaptive Selective Frequency-Injection for Backdoor Attack on Deep Face Restoration

no code implementations 11 Mar 2024 Zhenbo Song, Wenhao Gao, Kaihao Zhang, Wenhan Luo, Zhaoxin Fan, Jianfeng Lu

Extensive experiments demonstrate the efficacy of the degradation objective on state-of-the-art face restoration models.

Backdoor Attack

Gait Recognition from a Single Image using a Phase-Aware Gait Cycle Reconstruction Network

no code implementations ECCV 2020 Chi Xu, Yasushi Makihara, Xiang Li, Yasushi Yagi, Jianfeng Lu

Specifically, a phase estimation network is introduced for the input single image, and the gait cycle reconstruction network exploits the estimated phase to mitigate the dependence of an encoded feature on the phase of that single image.

Gait Recognition

A Mean Field Analysis Of Deep ResNet And Beyond: Towards Provable Optimization Via Overparameterization From Depth

no code implementations ICML 2020 Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying

Specifically, we propose a new continuum limit of deep residual networks, which enjoys a good landscape in the sense that every local minimizer is global.
