Search Results for author: Haizhao Yang

Found 44 papers, 13 papers with code

On the Stochastic (Variance-Reduced) Proximal Gradient Method for Regularized Expected Reward Optimization

no code implementations • 23 Jan 2024 • Ling Liang, Haizhao Yang

We consider a regularized expected reward optimization problem in the non-oblivious setting that covers many existing problems in reinforcement learning (RL).

Reinforcement Learning (RL)
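
The proximal gradient step at the heart of this entry is easy to sketch. The following is a generic illustration, not the paper's algorithm (which handles non-oblivious expected-reward objectives): a stochastic proximal gradient step with an $\ell_1$ regularizer, whose proximal operator is soft-thresholding. The toy quadratic objective, step size, and noise level are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def stochastic_prox_grad_step(theta, stoch_grad, step, lam):
    """One stochastic proximal gradient step for
    min_theta  E[loss(theta)] + lam * ||theta||_1."""
    return soft_threshold(theta - step * stoch_grad, step * lam)

# Toy usage: minimize 0.5*||theta - b||^2 + lam*||theta||_1 with noisy gradients.
rng = np.random.default_rng(0)
b = np.array([3.0, -0.5, 0.0])
theta = np.zeros(3)
for _ in range(200):
    grad = (theta - b) + 0.1 * rng.standard_normal(3)  # stochastic gradient
    theta = stochastic_prox_grad_step(theta, grad, step=0.1, lam=1.0)
# theta[0] settles near soft_threshold(b, lam)[0] = 2.0; small coordinates shrink to ~0.
```

The soft-thresholding closed form is what makes the $\ell_1$ proximal step cheap; the variance-reduced variant in the paper changes how `grad` is estimated, not this update.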

Neural Network Approximation for Pessimistic Offline Reinforcement Learning

no code implementations • 19 Dec 2023 • Di Wu, Yuling Jiao, Li Shen, Haizhao Yang, Xiliang Lu

In this paper, we establish a non-asymptotic estimation error bound for pessimistic offline RL using general neural network approximation with $\mathcal{C}$-mixing data, in terms of the network structure, the dataset dimension, and the concentrability of the data coverage, under mild assumptions.

Offline RL reinforcement-learning +1

A Finite Expression Method for Solving High-Dimensional Committor Problems

no code implementations • 21 Jun 2023 • Zezheng Song, Maria K. Cameron, Haizhao Yang

Central to TPT is the committor function, which gives the probability of hitting the metastable state $B$ before $A$ from any given starting point in the phase space.
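
For reference, the committor has a standard definition and PDE characterization in transition path theory, stated here for overdamped Langevin dynamics with potential $V$ and inverse temperature $\beta$ (context only; the paper's high-dimensional finite-expression solver is not reproduced):

```latex
% Committor of TPT: probability of reaching B before A, started from x.
q(x) \;=\; \mathbb{P}\bigl(\tau_B < \tau_A \,\big|\, X_0 = x\bigr),
\qquad q\big|_{A} = 0, \quad q\big|_{B} = 1,
% and q solves the backward Kolmogorov equation outside the metastable sets:
\beta^{-1}\,\Delta q(x) \;-\; \nabla V(x)\cdot\nabla q(x) \;=\; 0
\quad \text{for } x \notin A \cup B .
```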

Spectral Clustering via Orthogonalization-Free Methods

1 code implementation • 16 May 2023 • Qiyuan Pang, Haizhao Yang

Numerical results show that the proposed methods outperform Power Iteration-based methods and Graph Signal Filter in clustering quality and computation cost.

Clustering Dimensionality Reduction
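
The orthogonalization-free solvers in this entry replace the eigensolver inside the standard spectral clustering pipeline. As context, here is a minimal sketch of that pipeline on a toy two-block similarity graph, using a dense eigensolver as an illustrative stand-in (not the paper's method):

```python
import numpy as np

# Toy similarity graph: two dense blocks, weakly connected.
A = np.full((6, 6), 0.01)
A[:3, :3] = 1.0
A[3:, 3:] = 1.0
np.fill_diagonal(A, 0.0)

# Normalized graph Laplacian L = I - D^{-1/2} A D^{-1/2}.
d = A.sum(axis=1)
Dinv = np.diag(1.0 / np.sqrt(d))
L = np.eye(6) - Dinv @ A @ Dinv

# The eigenvector of the second-smallest eigenvalue (Fiedler vector)
# separates the two blocks by sign; k-means on more eigenvectors
# generalizes this to k clusters.
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]
labels = (fiedler > 0).astype(int)
```

The expensive step for large graphs is computing `vecs`; the paper's contribution is computing such an embedding without the orthogonalization that dominates iterative eigensolvers.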

Finite Expression Methods for Discovering Physical Laws from Data

no code implementations • 15 May 2023 • Zhongyi Jiang, Chunmei Wang, Haizhao Yang

Nonlinear dynamics is a pervasive phenomenon observed in scientific and engineering disciplines.

Convergence Analysis of the Deep Galerkin Method for Weak Solutions

no code implementations • 5 Feb 2023 • Yuling Jiao, Yanming Lai, Yang Wang, Haizhao Yang, Yunfei Yang

This paper analyzes the convergence rate of a deep Galerkin method for the weak solution (DGMW) of second-order elliptic partial differential equations on $\mathbb{R}^d$ with Dirichlet, Neumann, and Robin boundary conditions, respectively.
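
For context, the weak solutions analyzed above can be illustrated on the simplest model problem, the Dirichlet Laplacian (the paper treats general second-order elliptic equations with Dirichlet, Neumann, and Robin conditions):

```latex
% Model problem: -\Delta u = f in \Omega, u = 0 on \partial\Omega.
% Weak formulation: find u \in H^1_0(\Omega) such that
\int_\Omega \nabla u \cdot \nabla v \,dx \;=\; \int_\Omega f\, v \,dx
\qquad \text{for all } v \in H^1_0(\Omega).
```

Roughly, a Galerkin-type deep method parametrizes $u$ by a network and enforces this identity against a family of test functions $v$.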

Deep Operator Learning Lessens the Curse of Dimensionality for PDEs

no code implementations • 28 Jan 2023 • Ke Chen, Chunmei Wang, Haizhao Yang

Deep neural networks (DNNs) have achieved remarkable success in numerous domains, and their application to PDE-related problems has been rapidly advancing.

Operator learning

A Distributed Block Chebyshev-Davidson Algorithm for Parallel Spectral Clustering

1 code implementation • 8 Dec 2022 • Qiyuan Pang, Haizhao Yang

We develop a distributed Block Chebyshev-Davidson algorithm to solve large-scale leading eigenvalue problems for spectral analysis in spectral clustering.

Clustering
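
The Chebyshev filtering at the core of Chebyshev-Davidson amplifies eigenvector components outside an unwanted spectral interval, which is mapped to $[-1, 1]$ where Chebyshev polynomials stay bounded. A minimal single-vector sketch follows; the paper's distributed block version adds blocking, Davidson-style restarts, and parallelism, and the matrix and filter degree here are toy assumptions:

```python
import numpy as np

def chebyshev_filter(A, v, m, a, b):
    """Apply the degree-m Chebyshev polynomial T_m to the spectrum of A,
    affinely mapped so the unwanted interval [a, b] lands on [-1, 1].
    Eigencomponents outside [a, b] are strongly amplified."""
    e = (b - a) / 2.0
    c = (b + a) / 2.0
    y_prev = v                                 # T_0(x) v = v
    y = (A @ v - c * v) / e                    # T_1(x) v
    for _ in range(2, m + 1):                  # three-term recurrence
        y_next = 2.0 * (A @ y - c * y) / e - y_prev
        y_prev, y = y, y_next
    return y / np.linalg.norm(y)

# Toy symmetric matrix: wanted eigenvalue 10, unwanted spectrum inside [1, 3].
A = np.diag([1.0, 2.0, 3.0, 10.0])
v0 = np.ones(4) / 2.0
v = chebyshev_filter(A, v0, m=8, a=1.0, b=3.0)
rayleigh = v @ A @ v    # Rayleigh quotient, close to the leading eigenvalue 10
```

After one filter application the unwanted components are suppressed by a factor of roughly $1/T_8(8)$, so even this unpreconditioned sketch isolates the leading eigenpair.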

What is the Solution for State-Adversarial Multi-Agent Reinforcement Learning?

1 code implementation • 6 Dec 2022 • Songyang Han, Sanbao Su, Sihong He, Shuo Han, Haizhao Yang, Fei Miao

Additionally, we propose a Robust Multi-Agent Adversarial Actor-Critic (RMA3C) algorithm to learn robust policies for MARL agents under state uncertainties.

Multi-agent Reinforcement Learning reinforcement-learning +1

On Fast Simulation of Dynamical System with Neural Vector Enhanced Numerical Solver

1 code implementation • 7 Aug 2022 • Zhongzhan Huang, Senwei Liang, Hong Zhang, Haizhao Yang, Liang Lin

The large-scale simulation of dynamical systems is critical in numerous scientific and engineering disciplines.

Computational Efficiency

The Lottery Ticket Hypothesis for Self-attention in Convolutional Neural Network

no code implementations • 16 Jul 2022 • Zhongzhan Huang, Senwei Liang, Mingfu Liang, Wei He, Haizhao Yang, Liang Lin

Recently, many plug-and-play self-attention modules (SAMs) have been proposed to enhance model generalization by exploiting the internal information of deep convolutional neural networks (CNNs).

Crowd Counting

Finite Expression Method for Solving High-Dimensional Partial Differential Equations

1 code implementation • 21 Jun 2022 • Senwei Liang, Haizhao Yang

Designing efficient and accurate numerical solvers for high-dimensional partial differential equations (PDEs) remains a challenging and important topic in computational science and engineering, mainly due to the "curse of dimensionality" in designing numerical schemes that scale in dimension.


Reinforced Inverse Scattering

no code implementations • 8 Jun 2022 • Hanyang Jiang, Yuehaw Khoo, Haizhao Yang

Inverse wave scattering aims at determining the properties of an object using data on how the object scatters incoming waves.

Object reinforcement-learning +1

Neural Network Architecture Beyond Width and Depth

no code implementations • 19 May 2022 • Zuowei Shen, Haizhao Yang, Shijun Zhang

It is proved by construction that height-$s$ ReLU NestNets with $\mathcal{O}(n)$ parameters can approximate $1$-Lipschitz continuous functions on $[0, 1]^d$ with an error $\mathcal{O}(n^{-(s+1)/d})$, while the optimal approximation error of standard ReLU networks with $\mathcal{O}(n)$ parameters is $\mathcal{O}(n^{-2/d})$.

IAE-Net: Integral Autoencoders for Discretization-Invariant Learning

1 code implementation • 10 Mar 2022 • Yong Zheng Ong, Zuowei Shen, Haizhao Yang

Discretization-invariant learning aims at learning in infinite-dimensional function spaces, with the capacity to process heterogeneous discrete representations of functions as inputs and/or outputs of a learning model.

Data Augmentation

Deep Nonparametric Estimation of Operators between Infinite Dimensional Spaces

no code implementations • 1 Jan 2022 • Hao Liu, Haizhao Yang, Minshuo Chen, Tuo Zhao, Wenjing Liao

Learning operators between infinite-dimensional spaces is an important task arising in wide-ranging applications in machine learning, imaging science, mathematical modeling, and simulation.

Deep Network Approximation in Terms of Intrinsic Parameters

no code implementations • 15 Nov 2021 • Zuowei Shen, Haizhao Yang, Shijun Zhang

Furthermore, we show that the idea of learning a small number of parameters to achieve a good approximation can be numerically observed.

Short optimization paths lead to good generalization

no code implementations • 29 Sep 2021 • Fusheng Liu, Haizhao Yang, Qianxiao Li

Through our approach, we show that, with a proper initialization, gradient flow converges following a short path with an explicit length estimate.

BIG-bench Machine Learning regression

Stationary Density Estimation of Itô Diffusions Using Deep Learning

no code implementations • 9 Sep 2021 • Yiqi Gu, John Harlim, Senwei Liang, Haizhao Yang

In this paper, we consider the density estimation problem associated with the stationary measure of ergodic Itô diffusions, from a discrete-time series that approximates the solutions of the stochastic differential equations.

Density Estimation regression +2
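
As a classical point of comparison for estimating a stationary measure from a discrete-time series (the paper uses deep learning instead), the sketch below simulates an Ornstein-Uhlenbeck diffusion $dX = -\theta X\,dt + \sigma\,dW$ with Euler-Maruyama and checks the sample variance against the known stationary variance $\sigma^2/(2\theta)$. Parameters and step size are illustrative assumptions.

```python
import numpy as np

# Euler--Maruyama simulation of the OU diffusion dX = -theta*X dt + sigma dW,
# whose stationary measure is N(0, sigma^2 / (2*theta)).
rng = np.random.default_rng(1)
theta, sigma, dt, n_steps = 1.0, 1.0, 1e-2, 200_000
x = np.empty(n_steps)
x[0] = 0.0
for i in range(1, n_steps):
    x[i] = x[i-1] - theta * x[i-1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

burn = 10_000                       # discard the transient before equilibrium
stationary_var = x[burn:].var()     # approaches sigma^2 / (2*theta) = 0.5
```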

Blending Pruning Criteria for Convolutional Neural Networks

no code implementations • 11 Jul 2021 • Wei He, Zhongzhan Huang, Mingfu Liang, Senwei Liang, Haizhao Yang

A filter may be important according to one criterion yet unnecessary according to another, which indicates that each criterion offers only a partial view of the comprehensive "importance".

Clustering Network Pruning

Deep Network Approximation: Achieving Arbitrary Accuracy with Fixed Number of Neurons

no code implementations • 6 Jul 2021 • Zuowei Shen, Haizhao Yang, Shijun Zhang

This paper develops simple feed-forward neural networks that achieve the universal approximation property for all continuous functions with a fixed finite number of neurons.

Solving PDEs on Unknown Manifolds with Machine Learning

1 code implementation • 12 Jun 2021 • Senwei Liang, Shixiao W. Jiang, John Harlim, Haizhao Yang

In a well-posed elliptic PDE setting, when the hypothesis space consists of neural networks with either infinite width or depth, we show that the global minimizer of the empirical loss function is a consistent solution in the limit of large training data.

BIG-bench Machine Learning Learning Theory

The Discovery of Dynamics via Linear Multistep Methods and Deep Learning: Error Estimation

1 code implementation • 21 Mar 2021 • Qiang Du, Yiqi Gu, Haizhao Yang, Chao Zhou

We put forward error estimates for these methods using the approximation property of deep networks.
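
The linear multistep idea can be sketched in its simplest form: tabulate the unknown right-hand side $f$ of $\dot{x} = f(x)$ from trajectory samples with a two-step (midpoint) formula. In the paper a deep network replaces the tabulated values; the ODE, step size, and LMM choice below are illustrative assumptions.

```python
import numpy as np

# Trajectory of dx/dt = f(x) with f(x) = -x, sampled at step h.
h = 0.01
t = np.arange(0, 5, h)
x = np.exp(-t)                        # exact solution, x(0) = 1

# Two-step (midpoint) linear multistep approximation of f at interior nodes:
#   f(x_n) ~ (x_{n+1} - x_{n-1}) / (2h)
f_est = (x[2:] - x[:-2]) / (2 * h)

# Discretization error is O(h^2) for this scheme.
err = np.max(np.abs(f_est - (-x[1:-1])))
```

The error estimates in the paper quantify how this discretization error and the network approximation error combine when $f$ is learned rather than tabulated.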

Optimal Approximation Rate of ReLU Networks in terms of Width and Depth

no code implementations • 28 Feb 2021 • Zuowei Shen, Haizhao Yang, Shijun Zhang

This paper concentrates on the approximation power of deep feed-forward neural networks in terms of width and depth.

Reproducing Activation Function for Deep Learning

no code implementations • 13 Jan 2021 • Senwei Liang, Liyao Lyu, Chunmei Wang, Haizhao Yang

We propose reproducing activation functions (RAFs) to improve deep learning accuracy for various applications ranging from computer vision to scientific computing.

Image Reconstruction Video Reconstruction

Friedrichs Learning: Weak Solutions of Partial Differential Equations via Deep Learning

no code implementations • 15 Dec 2020 • Fan Chen, Jianguo Huang, Chunmei Wang, Haizhao Yang

This paper proposes Friedrichs learning, a novel deep learning methodology that learns the weak solutions of PDEs via a minimax formulation, transforming the PDE problem into a minimax optimization problem whose solutions identify weak solutions.

Efficient Attention Network: Accelerate Attention by Searching Where to Plug

1 code implementation • 28 Nov 2020 • Zhongzhan Huang, Senwei Liang, Mingfu Liang, Wei He, Haizhao Yang

Recently, many plug-and-play self-attention modules have been proposed to enhance model generalization by exploiting the internal information of deep convolutional neural networks (CNNs).

Neural Network Approximation: Three Hidden Layers Are Enough

no code implementations • 25 Oct 2020 • Zuowei Shen, Haizhao Yang, Shijun Zhang

A three-hidden-layer neural network with super approximation power is introduced.

Two-Layer Neural Networks for Partial Differential Equations: Optimization and Generalization Theory

no code implementations • 28 Jun 2020 • Tao Luo, Haizhao Yang

The problem of solving partial differential equations (PDEs) can be formulated as a least-squares minimization problem in which neural networks parametrize the PDE solutions.
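
The least-squares formulation can be sketched directly: a candidate solution is scored by its mean squared PDE residual plus a boundary mismatch. This toy version uses plain functions and a finite-difference second derivative rather than the paper's two-layer networks; `pde_loss`, the collocation points, and the model problem are assumptions for illustration.

```python
import numpy as np

# Least-squares objective for -u''(x) = f(x) on (0, 1), u(0) = u(1) = 0,
# with f(x) = pi^2 sin(pi x) and exact solution u(x) = sin(pi x).
def pde_loss(u, f, xs, h=1e-3):
    """Mean squared PDE residual at collocation points xs, plus squared
    boundary mismatch; a finite-difference second derivative stands in
    for autodiff."""
    upp = (u(xs + h) - 2 * u(xs) + u(xs - h)) / h**2
    interior = np.mean((-upp - f(xs))**2)
    boundary = u(np.array([0.0]))[0]**2 + u(np.array([1.0]))[0]**2
    return interior + boundary

f = lambda x: np.pi**2 * np.sin(np.pi * x)
xs = np.linspace(0.05, 0.95, 50)

exact = lambda x: np.sin(np.pi * x)   # true solution: near-zero loss
wrong = lambda x: x * (1 - x)         # satisfies the BCs but not the PDE

loss_exact = pde_loss(exact, f, xs)
loss_wrong = pde_loss(wrong, f, xs)
```

Training minimizes this loss over network parameters; the paper's optimization and generalization theory concerns that minimization for two-layer networks.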

Deep Network with Approximation Error Being Reciprocal of Width to Power of Square Root of Depth

no code implementations • 22 Jun 2020 • Zuowei Shen, Haizhao Yang, Shijun Zhang

More generally for an arbitrary continuous function $f$ on $[0, 1]^d$ with a modulus of continuity $\omega_f(\cdot)$, the constructive approximation rate is $\omega_f(\sqrt{d}\, N^{-\sqrt{L}})+2\omega_f(\sqrt{d}){N^{-\sqrt{L}}}$.

Deep Network Approximation for Smooth Functions

no code implementations • 9 Jan 2020 • Jianfeng Lu, Zuowei Shen, Haizhao Yang, Shijun Zhang

This paper establishes the (nearly) optimal approximation error characterization of deep rectified linear unit (ReLU) networks for smooth functions in terms of both width and depth simultaneously.


Machine Learning for Prediction with Missing Dynamics

no code implementations • 13 Oct 2019 • John Harlim, Shixiao W. Jiang, Senwei Liang, Haizhao Yang

This article presents a general framework for recovering missing dynamical systems using available data and machine learning techniques.

BIG-bench Machine Learning

Instance Enhancement Batch Normalization: an Adaptive Regulator of Batch Noise

2 code implementations • 12 Aug 2019 • Senwei Liang, Zhongzhan Huang, Mingfu Liang, Haizhao Yang

Batch Normalization (BN) (Ioffe and Szegedy, 2015) normalizes the features of an input image via the statistics of a batch of images, and hence BN introduces noise into the gradient of the training loss.

Image Classification
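
The batch noise mentioned above is easy to exhibit: the same sample is normalized differently depending on which batch it lands in, since BN uses per-batch statistics. A minimal sketch of training-mode BN without the learned scale and shift (IEBN's instance-level regulator is not reproduced):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Normalize each feature with per-batch statistics
    (training-mode BN, without learned scale/shift)."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
sample = rng.standard_normal((1, 4))

# Put the identical sample into two different batches.
batch_a = np.vstack([sample, rng.standard_normal((7, 4))])
batch_b = np.vstack([sample, rng.standard_normal((7, 4)) + 2.0])

out_a = batch_norm(batch_a)[0]
out_b = batch_norm(batch_b)[0]

# The same sample gets different outputs depending on its batch-mates:
# this is the "batch noise" the paper's adaptive regulator targets.
noise = np.max(np.abs(out_a - out_b))
```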

Error bounds for deep ReLU networks using the Kolmogorov--Arnold superposition theorem

no code implementations • 27 Jun 2019 • Hadrien Montanelli, Haizhao Yang

We prove a theorem concerning the approximation of multivariate functions by deep ReLU networks, for which the curse of dimensionality is lessened.

Deep Network Approximation Characterized by Number of Neurons

no code implementations • 13 Jun 2019 • Zuowei Shen, Haizhao Yang, Shijun Zhang

This paper quantitatively characterizes the approximation power of deep feed-forward neural networks (FNNs) in terms of the number of neurons.

DIANet: Dense-and-Implicit Attention Network

3 code implementations • 25 May 2019 • Zhongzhan Huang, Senwei Liang, Mingfu Liang, Haizhao Yang

Attention networks have successfully boosted the performance in various vision problems.

Ranked #139 on Image Classification on CIFAR-100 (using extra training data)

Image Classification

CASS: Cross Adversarial Source Separation via Autoencoder

no code implementations • 23 May 2019 • Yong Zheng Ong, Charles K. Chui, Haizhao Yang

This paper introduces a cross adversarial source separation (CASS) framework based on autoencoders: a new model that separates an input signal, consisting of a mixture of multiple components, into individual components defined via adversarial learning and autoencoder fitting.

Dimensionality Reduction

SelectNet: Learning to Sample from the Wild for Imbalanced Data Training

no code implementations • 23 May 2019 • Yunru Liu, Tingran Gao, Haizhao Yang

Supervised learning from training data with imbalanced class sizes, a commonly encountered scenario in real applications such as anomaly/fraud detection, has long been considered a significant challenge in machine learning.

Fraud Detection
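
For context on imbalanced-data training, a common static baseline assigns sampling weights by inverse class frequency, so each class receives equal total probability mass; SelectNet instead *learns* which samples to select. The helper below is an illustrative assumption, not the paper's method.

```python
import numpy as np

def inverse_frequency_weights(labels):
    """Per-sample sampling weights proportional to inverse class frequency,
    normalized to sum to 1. Each class then receives equal total mass."""
    classes, counts = np.unique(labels, return_counts=True)
    freq = dict(zip(classes, counts))
    w = np.array([1.0 / freq[y] for y in labels], dtype=float)
    return w / w.sum()

labels = np.array([0] * 95 + [1] * 5)     # 95:5 imbalance
w = inverse_frequency_weights(labels)
minority_mass = w[labels == 1].sum()      # half the sampling mass
```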

Generative Imaging and Image Processing via Generative Encoder

no code implementations • 23 May 2019 • Lin Chen, Haizhao Yang

This paper introduces a novel generative encoder (GE) model for generative imaging and image processing with applications in compressed sensing and imaging, image compression, denoising, inpainting, deblurring, and super-resolution.

Deblurring Denoising +3

Nonlinear Approximation via Compositions

no code implementations • 26 Feb 2019 • Zuowei Shen, Haizhao Yang, Shijun Zhang

In particular, for any function $f$ on $[0, 1]$, regardless of its smoothness and even the continuity, if $f$ can be approximated using a dictionary when $L=1$ with the best $N$-term approximation rate $\varepsilon_{L, f}={\cal O}(N^{-\eta})$, we show that dictionaries with $L=2$ can improve the best $N$-term approximation rate to $\varepsilon_{L, f}={\cal O}(N^{-2\eta})$.

Computational Efficiency

PiPs: a Kernel-based Optimization Scheme for Analyzing Non-Stationary 1D Signals

no code implementations • 21 May 2018 • Jieren Xu, Yitong Li, Haizhao Yang, David Dunson, Ingrid Daubechies

This paper proposes a novel kernel-based optimization scheme to handle tasks in the analysis of 1D non-stationary oscillatory data, e.g., signal spectral estimation and single-channel source separation.

regression Super-Resolution

Recursive Diffeomorphism-Based Regression for Shape Functions

1 code implementation • 12 Oct 2016 • Jieren Xu, Haizhao Yang, Ingrid Daubechies

This paper proposes a recursive diffeomorphism-based regression method for the one-dimensional generalized mode decomposition problem, which aims at extracting generalized modes $\alpha_k(t)s_k(2\pi N_k\phi_k(t))$ from their superposition $\sum_{k=1}^K \alpha_k(t)s_k(2\pi N_k\phi_k(t))$.

regression
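
The superposition in this abstract is easy to instantiate: the sketch below constructs two generalized modes $\alpha_k(t)\,s_k(2\pi N_k\phi_k(t))$ with cosine wave shapes $s_k$ (an assumed choice) and smooth amplitudes and phases, producing the kind of input the decomposition method targets. It only builds test data; it does not perform the recovery.

```python
import numpy as np

# Synthetic superposition of two generalized modes
#   alpha_k(t) * s_k(2*pi*N_k*phi_k(t)),
# with cosine wave shapes s_k, slowly varying amplitudes alpha_k,
# and smooth phase functions phi_k.
t = np.linspace(0, 1, 2000)
alpha1, alpha2 = 1.0 + 0.2 * t, 0.8 - 0.1 * t
N1, N2 = 20, 34
phi1 = t + 0.05 * np.sin(2 * np.pi * t)   # warped phase
phi2 = t                                  # linear phase

mode1 = alpha1 * np.cos(2 * np.pi * N1 * phi1)
mode2 = alpha2 * np.cos(2 * np.pi * N2 * phi2)
signal = mode1 + mode2    # the observed superposition to be decomposed
```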
