no code implementations • 12 Oct 2024 • Yuling Jiao, Huazhen Lin, Yuchen Luo, Jerry Zhijian Yang
This paper presents a framework for deep transfer learning, which aims to transfer information from multi-domain upstream data with a large sample size $n$ to a single-domain downstream task with a considerably smaller sample size $m$, where $m \ll n$, in order to enhance performance on the downstream task.
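A minimal sketch of the generic two-stage pipeline this setting suggests, assuming a shared backbone pretrained on the upstream domains and a lightweight head fitted downstream; the architecture, dimensions, and freezing strategy are illustrative assumptions, not the paper's construction:

```python
import torch.nn as nn

# Hypothetical shared representation pretrained on multi-domain upstream data.
backbone = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 16))
upstream_heads = nn.ModuleList(nn.Linear(16, 1) for _ in range(3))  # one head per domain

# ... pretrain backbone and upstream_heads on the n upstream samples ...

# Downstream: reuse the representation, fit only a small head on m << n samples.
downstream_head = nn.Linear(16, 1)
for p in backbone.parameters():
    p.requires_grad = False  # freeze the transferred representation
# ... fit downstream_head on the m downstream samples ...
```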
no code implementations • 9 Sep 2024 • Yuling Jiao, Yang Wang, Bokai Yan
We derive upper bounds on the approximation error of RNNs for Hölder smooth functions, in the sense that the output at each time step of an RNN can approximate a Hölder function that depends only on past and current information, termed a past-dependent function.
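The structural property at issue is visible in a minimal PyTorch sketch, an illustration of past-dependence rather than the paper's construction:

```python
import torch
import torch.nn as nn

# The output y_t of an RNN is a function of inputs x_1, ..., x_t only,
# i.e., a "past-dependent" function in the sense above.
rnn = nn.RNN(input_size=1, hidden_size=32, batch_first=True)
head = nn.Linear(32, 1)

x = torch.randn(8, 100, 1)  # batch of 8 sequences of length 100
h, _ = rnn(x)               # h[:, t, :] depends only on x[:, :t+1, :]
y = head(h)                 # per-step outputs y[:, t, :]
```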
1 code implementation • 16 Aug 2024 • Chenguang Duan, Yuling Jiao, Huazhen Lin, Wensen Ma, Jerry Zhijian Yang
Learning a data representation for downstream supervised learning tasks in an unlabeled scenario is both critical and challenging.
no code implementations • 12 Jul 2024 • Yuling Jiao, Ruoxuan Li, Peiying Wu, Jerry Zhijian Yang, Pingwen Zhang
In this work, we address a foundational question in the theoretical analysis of the Deep Ritz Method (DRM) under the over-parametrization regime: Given a target precision level, how can one determine the appropriate number of training samples, the key architectural parameters of the neural networks, the step size for the projected gradient descent optimization procedure, and the requisite number of iterations, such that the output of the gradient descent process closely approximates the true solution of the underlying partial differential equation to the specified precision?
no code implementations • 18 Jun 2024 • Z. T. Wang, Qiuhao Chen, Yuxuan Du, Z. H. Yang, Xiaoxia Cai, Kaixuan Huang, Jingning Zhang, Kai Xu, Jun Du, Yinan Li, Yuling Jiao, Xingyao Wu, Wu Liu, Xiliang Lu, Huikai Xu, Yirong Jin, Ruixia Wang, Haifeng Yu, S. P. Zhao
Effectively implementing quantum algorithms on noisy intermediate-scale quantum (NISQ) processors is a central task in modern quantum technology.
no code implementations • 21 May 2024 • Yuling Jiao, Lican Kang, Jin Liu, Heng Peng, Heng Zuo
Deep nonparametric regression, characterized by the use of deep neural networks to learn target functions, has emerged as a focus of research in recent years.
no code implementations • 19 May 2024 • Yuling Jiao, Yanming Lai, Yang Wang
We present an error bound in terms of the sample size $n$, and our work provides guidance on how to set the network depth, width, step size, and number of iterations for the projected gradient descent algorithm.
1 code implementation • 9 May 2024 • Zhao Ding, Chenguang Duan, Yuling Jiao, Ruoxuan Li, Jerry Zhijian Yang, Pingwen Zhang
We propose the characteristic generator, a novel one-step generative model that combines the efficiency of sampling in Generative Adversarial Networks (GANs) with the stable performance of flow-based models.
no code implementations • 20 Apr 2024 • Yuling Jiao, Lican Kang, Huazhen Lin, Jin Liu, Heng Zuo
Our theoretical analysis establishes an end-to-end error analysis for learning distributions via the latent Schrödinger bridge diffusion model.
no code implementations • 3 Apr 2024 • Yuling Jiao, Yanming Lai, Yang Wang, Bokai Yan
We present theoretical convergence guarantees for ODE-based generative models, specifically flow matching.
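For concreteness, a minimal flow-matching training step might look as follows: a hedged PyTorch sketch with a toy 2-D velocity network, where the linear-interpolation path and the network architecture are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Toy velocity field v(x, t): input is 2-D state concatenated with time.
v_net = nn.Sequential(nn.Linear(3, 128), nn.SiLU(), nn.Linear(128, 2))

def flow_matching_loss(x1):
    x0 = torch.randn_like(x1)          # samples from the base (noise) distribution
    t = torch.rand(x1.shape[0], 1)     # t ~ Uniform(0, 1)
    xt = (1 - t) * x0 + t * x1         # linear interpolation path
    target = x1 - x0                   # conditional velocity along the path
    pred = v_net(torch.cat([xt, t], dim=1))
    return ((pred - target) ** 2).mean()

loss = flow_matching_loss(torch.randn(256, 2))  # stand-in 2-D "data"
loss.backward()
```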
no code implementations • 31 Mar 2024 • Yuan Gao, Jian Huang, Yuling Jiao, Shurong Zheng
We establish non-asymptotic error bounds for the distribution estimator based on CNFs, in terms of the Wasserstein-2 distance.
1 code implementation • 2 Feb 2024 • Jinyuan Chang, Zhao Ding, Yuling Jiao, Ruoxuan Li, Jerry Zhijian Yang
We introduce an ordinary differential equation (ODE) based deep generative method for learning conditional distributions, named Conditional Föllmer Flow.
no code implementations • 9 Jan 2024 • Zhao Ding, Chenguang Duan, Yuling Jiao, Jerry Zhijian Yang
We propose SDORE, a semi-supervised deep Sobolev regressor, for the nonparametric estimation of the underlying regression function and its gradient.
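A hedged sketch of a Sobolev-type objective in this spirit, assuming a supervised squared loss on labeled pairs plus a gradient penalty on unlabeled inputs against some plug-in surrogate `grad_target` (a hypothetical stand-in, not the paper's estimator):

```python
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))

def sobolev_loss(x_lab, y_lab, x_unlab, grad_target, lam=0.1):
    # Supervised squared error on the labeled sample.
    sup = ((f(x_lab) - y_lab) ** 2).mean()
    # Gradient penalty on unlabeled inputs via automatic differentiation.
    x = x_unlab.requires_grad_(True)
    g = torch.autograd.grad(f(x).sum(), x, create_graph=True)[0]
    return sup + lam * ((g - grad_target) ** 2).mean()
```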
no code implementations • 19 Dec 2023 • Di Wu, Yuling Jiao, Li Shen, Haizhao Yang, Xiliang Lu
In this paper, we establish a non-asymptotic estimation error for pessimistic offline RL using general neural network approximation with $\mathcal{C}$-mixing data, in terms of the network structure, the dimension of the datasets, and the concentrability of the data coverage, under mild assumptions.
no code implementations • 20 Nov 2023 • Yuan Gao, Jian Huang, Yuling Jiao
Gaussian denoising has emerged as a powerful method for constructing simulation-free continuous normalizing flows for generative modeling.
no code implementations • 11 Oct 2023 • Zhan Yu, Qiuhao Chen, Yuling Jiao, Yinan Li, Xiliang Lu, Xin Wang, Jerry Zhijian Yang
Our results provide a theoretical foundation for designing practical PQCs and quantum neural networks for machine learning tasks that can be implemented on near-term quantum devices, paving the way for the advancement of quantum machine learning.
no code implementations • 2 Sep 2023 • Changyu Liu, Yuling Jiao, Junhui Wang, Jian Huang
For the quadratic loss in nonparametric regression, we show that the adversarial excess risk bound can be improved over those for a general loss.
no code implementations • 24 Jun 2023 • Chenguang Duan, Yuling Jiao, Xiliang Lu, Jerry Zhijian Yang
In this paper, we introduce CDII-PINNs, a computationally efficient method for solving CDII using PINNs in the framework of Tikhonov regularization.
no code implementations • 1 May 2023 • Guohao Shen, Yuling Jiao, Yuanyuan Lin, Jian Huang
We establish error bounds for simultaneously approximating $C^s$ smooth functions and their derivatives using RePU-activated deep neural networks.
no code implementations • 28 Mar 2023 • Yuling Jiao, Di Li, Xiliang Lu, Jerry Zhijian Yang, Cheng Yuan
With the recent advances of deep learning in scientific computation, the Physics-Informed Neural Networks (PINNs) method has drawn widespread attention for solving Partial Differential Equations (PDEs).
no code implementations • 5 Feb 2023 • Yuling Jiao, Yanming Lai, Yang Wang, Haizhao Yang, Yunfei Yang
This paper analyzes the convergence rate of a deep Galerkin method for the weak solution (DGMW) of second-order elliptic partial differential equations on $\mathbb{R}^d$ with Dirichlet, Neumann, and Robin boundary conditions, respectively.
no code implementations • 21 Jul 2022 • Guohao Shen, Yuling Jiao, Yuanyuan Lin, Joel L. Horowitz, Jian Huang
We propose a penalized nonparametric approach to estimating the quantile regression process (QRP) in a nonseparable model using rectifier quadratic unit (ReQU) activated deep neural networks and introduce a novel penalty function to enforce non-crossing of quantile regression curves.
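A minimal sketch of the two ingredients, a pinball loss plus a non-crossing penalty, assuming a toy multi-output network and a hinge penalty as an illustrative stand-in (the paper uses ReQU activations and its own penalty function):

```python
import torch
import torch.nn as nn

taus = torch.tensor([0.1, 0.5, 0.9])                 # quantile levels
net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, len(taus)))

def quantile_loss(x, y, lam=1.0):
    q = net(x)                                        # (batch, K) quantile estimates
    u = y - q                                         # residuals, y of shape (batch, 1)
    pinball = torch.maximum(taus * u, (taus - 1) * u).mean()
    # Hinge penalty on crossings: q at level tau_k should not exceed level tau_{k+1}.
    crossing = torch.relu(q[:, :-1] - q[:, 1:]).mean()
    return pinball + lam * crossing
```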
no code implementations • 14 Apr 2022 • Qiuhao Chen, Yuxuan Du, Qi Zhao, Yuling Jiao, Xiliang Lu, Xingyao Wu
We systematically evaluate the performance of our proposal in compiling quantum operators with both inverse-closed and inverse-free universal basis sets.
no code implementations • 24 Jan 2022 • Yuling Jiao, Yang Wang, Yunfei Yang
This paper studies the approximation capacity of ReLU neural networks with norm constraint on the weights.
1 code implementation • 19 Dec 2021 • Shiao Liu, Xingyu Zhou, Yuling Jiao, Jian Huang
The proposed approach uses a conditional generator to transform a known distribution to the target conditional distribution.
no code implementations • 29 Nov 2021 • Yuling Jiao, Dingwei Li, Min Liu, Xiangliang Lu, Yuanyuan Yang
In this paper, we consider recovering $n$-dimensional signals from $m$ binary measurements corrupted by noise and sign flips, under the assumption that the target signals have low generative intrinsic dimension, i.e., the target signals can be approximately generated via an $L$-Lipschitz generator $G: \mathbb{R}^k\rightarrow\mathbb{R}^{n}, k\ll n$.
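A hedged sketch of latent-space recovery under such a generative prior, using a correlation-maximization surrogate that is a common choice in one-bit models; the generator, measurement matrix, and loss here are illustrative assumptions, not the paper's estimator:

```python
import torch

k, n, m = 20, 100, 50
# Hypothetical pretrained generator G: R^k -> R^n and Gaussian measurement matrix.
G = torch.nn.Sequential(torch.nn.Linear(k, 128), torch.nn.ReLU(), torch.nn.Linear(128, n))
A = torch.randn(m, n) / m ** 0.5
y = torch.sign(A @ torch.randn(n))        # stand-in binary observations

z = torch.zeros(k, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    # Maximize agreement between sign(A G(z)) and y via a soft surrogate.
    loss = -(y * (A @ G(z))).mean()
    loss.backward()
    opt.step()
x_hat = G(z).detach()                     # recovered signal in the range of G
```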
no code implementations • 21 Nov 2021 • Peili Li, Yuling Jiao, Xiliang Lu, Lican Kang
In this work, we consider an algorithm for (nonlinear) regression problems with an $\ell_0$ penalty.
no code implementations • NeurIPS 2021 • Shiao Liu, Yunfei Yang, Jian Huang, Yuling Jiao, Yang Wang
Our results are also applicable to the Wasserstein bidirectional GAN if the target distribution is assumed to have a bounded support.
no code implementations • 6 Oct 2021 • Xingdong Feng, Yuan Gao, Jian Huang, Yuling Jiao, Xu Liu
We propose a relative entropy gradient sampler (REGS) for sampling from unnormalized distributions.
no code implementations • 18 Sep 2021 • Yuling Jiao, Dingwei Li, Min Liu, Xiliang Lu
Recovering sparse signals from observed data is an important topic in signal/image processing, statistics and machine learning.
no code implementations • 10 Jul 2021 • Yuling Jiao, Lican Kang, Yanyan Liu, Youzhou Zhou
Schrödinger-Föllmer sampler (SFS) is a novel and efficient approach for sampling from possibly unnormalized distributions without ergodicity.
no code implementations • 19 Jun 2021 • Gefei Wang, Yuling Jiao, Qian Xu, Yang Wang, Can Yang
At the sample level, we derive our Schrödinger Bridge algorithm by plugging the drift term estimated by a deep score estimator and a deep density ratio estimator into the Euler-Maruyama method.
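A minimal Euler-Maruyama sketch of this sampling step, with a placeholder drift standing in for the plug-in score/density-ratio estimate; the unit diffusion coefficient and time grid are assumptions:

```python
import torch

def euler_maruyama(drift, x0, t0=0.0, t1=1.0, steps=200):
    """Simulate dX_t = b(X_t, t) dt + dW_t with the Euler-Maruyama scheme."""
    x, dt = x0.clone(), (t1 - t0) / steps
    for k in range(steps):
        t = t0 + k * dt
        x = x + drift(x, t) * dt + dt ** 0.5 * torch.randn_like(x)
    return x

# `drift` would be the plug-in estimate built from the score and density-ratio
# networks; a zero drift is used here purely as a placeholder.
samples = euler_maruyama(lambda x, t: torch.zeros_like(x), torch.randn(1000, 2))
```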
no code implementations • 27 May 2021 • Jian Huang, Yuling Jiao, Zhen Li, Shiao Liu, Yang Wang, Yunfei Yang
This paper studies how well generative adversarial networks (GANs) learn probability distributions from finite samples.
no code implementations • 1 May 2021 • Guohao Shen, Yuling Jiao, Yuanyuan Lin, Jian Huang
To establish these results, we derive an upper bound for the covering number for the class of general convolutional neural networks with a bias term in each convolutional layer, and derive new results on the approximation power of CNNs for any uniformly-continuous target functions.
no code implementations • 28 Feb 2021 • Yuling Jiao, Yanming Lai, Xiliang Lu, Fengru Wang, Jerry Zhijian Yang, Yuanyuan Yang
In this paper, we construct neural networks with ReLU, sine and $2^x$ as activation functions.
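A toy PyTorch module illustrating a network that mixes these three activations in parallel branches; the paper's networks are explicit constructions rather than trained models, so this layout is purely illustrative:

```python
import torch
import torch.nn as nn

class MixedActivationNet(nn.Module):
    """Toy network applying ReLU, sine, and 2^x activations on parallel branches."""
    def __init__(self, d_in=2, width=32):
        super().__init__()
        self.lin1 = nn.Linear(d_in, 3 * width)
        self.lin2 = nn.Linear(3 * width, 1)
        self.width = width

    def forward(self, x):
        h = self.lin1(x)
        w = self.width
        h = torch.cat([torch.relu(h[:, :w]),        # ReLU branch
                       torch.sin(h[:, w:2 * w]),    # sine branch
                       torch.exp2(h[:, 2 * w:])],   # 2^x branch
                      dim=1)
        return self.lin2(h)
```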
no code implementations • 1 Jan 2021 • Xu Liao, Jin Liu, Tianwen Wen, Yuling Jiao, Jian Huang
At the population level, we formulate the ideal representation learning task as that of finding a nonlinear map that minimizes the sum of losses characterizing conditional independence (with RKHS) and disentanglement (with GAN).
no code implementations • 1 Jan 2021 • Jian Huang, Yuling Jiao, Xu Liao, Jin Liu, Zhou Yu
We provide strong statistical guarantees for the learned representation by establishing an upper bound on the excess error of the objective function and show that it reaches the nonparametric minimax rate under mild conditions.
no code implementations • 11 Dec 2020 • Yuan Gao, Jian Huang, Yuling Jiao, Jin Liu, Xiliang Lu, Zhijian Yang
The key task in training is the estimation of the density ratios or differences that determine the residual maps.
1 code implementation • 10 Jun 2020 • Jian Huang, Yuling Jiao, Xu Liao, Jin Liu, Zhou Yu
We propose a deep dimension reduction approach to learning representations with these characteristics.
no code implementations • 7 Feb 2020 • Yuan Gao, Jian Huang, Yuling Jiao, Jin Liu
We then solve the McKean-Vlasov equation numerically using the forward Euler iteration, where the forward Euler map depends on the density ratio (density difference) between the distribution at the current iteration and the underlying target distribution.
no code implementations • 27 Jan 2020 • Jian Huang, Yuling Jiao, Lican Kang, Jin Liu, Yanyan Liu, Xiliang Lu, Yuanyuan Yang
Based on this KKT system, a built-in working set with a relatively small size is first determined using the sum of the primal and dual variables generated from the previous iteration; then the primal variable is updated by solving a least-squares problem on the working set, and the dual variable is updated via a closed-form expression.
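A hedged NumPy sketch of one such working-set iteration, with the set size, scaling conventions, and stand-in problem all illustrative assumptions:

```python
import numpy as np

def working_set_step(X, y, x, d, s):
    """One illustrative primal-dual update with a working set of size s."""
    # Working set from the magnitudes of primal + dual (as described above).
    W = np.argsort(-np.abs(x + d))[:s]
    x_new = np.zeros_like(x)
    # Primal update: least squares restricted to the working set.
    x_new[W], *_ = np.linalg.lstsq(X[:, W], y, rcond=None)
    # Dual update: closed-form correlation of the residual.
    d_new = X.T @ (y - X @ x_new) / len(y)
    return x_new, d_new

# Stand-in problem: 50 samples, 100 features, 5 active coefficients.
rng = np.random.default_rng(0)
X, beta = rng.standard_normal((50, 100)), np.zeros(100)
beta[:5] = 1.0
y = X @ beta
x, d = np.zeros(100), X.T @ y / 50
for _ in range(10):
    x, d = working_set_step(X, y, x, d, s=5)
```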
no code implementations • 16 Jan 2020 • Jian Huang, Yuling Jiao, Lican Kang, Jin Liu, Yanyan Liu, Xiliang Lu
Feature selection is important for modeling high-dimensional data, where the number of variables can be much larger than the sample size.
no code implementations • 14 Jun 2019 • Jian-Feng Cai, Yuling Jiao, Xiliang Lu, Juntao You
Sparse phase retrieval plays an important role in many fields of applied science and has thus attracted much attention.
no code implementations • 25 Feb 2019 • Shunkang Zhang, Yuan Gao, Yuling Jiao, Jin Liu, Yang Wang, Can Yang
To address the challenges in learning deep generative models (e.g., the blurriness of variational auto-encoders and the instability of training generative adversarial networks), we propose a novel deep generative model, named Wasserstein-Wasserstein auto-encoders (WWAE).
1 code implementation • 24 Jan 2019 • Yuan Gao, Yuling Jiao, Yang Wang, Yao Wang, Can Yang, Shunkang Zhang
We propose a general framework to learn deep generative models via Variational Gradient Flow (VGrow) on probability spaces.
no code implementations • 9 Oct 2018 • Jian Huang, Yuling Jiao, Xiliang Lu, Yueyong Shi, Qinglong Yang
We propose a semismooth Newton algorithm for pathwise optimization (SNAP) for the LASSO and Enet in sparse, high-dimensional linear regression.
no code implementations • 3 Mar 2014 • Yuling Jiao, Bangti Jin, Xiliang Lu
We develop a primal dual active set with continuation algorithm for solving the $\ell^0$-regularized least-squares problem that frequently arises in compressed sensing.
no code implementations • 4 Oct 2013 • Jian Huang, Yuling Jiao, Bangti Jin, Jin Liu, Xiliang Lu, Can Yang
In this paper, we consider the problem of recovering a sparse signal based on penalized least squares formulations.