Search Results for author: Xiaoming Huo

Found 22 papers, 2 papers with code

Approximation of RKHS Functionals by Neural Networks

no code implementations • 18 Mar 2024 • Tian-Yi Zhou, Namjoon Suh, Guang Cheng, Xiaoming Huo

Motivated by the abundance of functional data such as time series and images, there has been growing interest in integrating such data into neural networks and learning maps from function spaces to $\mathbb{R}$ (i.e., functionals).

regression, Time Series

Asymptotic Behavior of Adversarial Training Estimator under $\ell_\infty$-Perturbation

no code implementations • 27 Jan 2024 • Yiling Xie, Xiaoming Huo

Alternatively, a two-step procedure, adaptive adversarial training, is proposed, which can further improve the performance of adversarial training under $\ell_\infty$-perturbation.

Variable Selection

Universal Consistency of Wide and Deep ReLU Neural Networks and Minimax Optimal Convergence Rates for Kolmogorov-Donoho Optimal Function Classes

no code implementations • 8 Jan 2024 • Hyunouk Ko, Xiaoming Huo

In this paper, we prove the universal consistency of wide and deep ReLU neural network classifiers trained on the logistic loss.

On Excess Risk Convergence Rates of Neural Network Classifiers

no code implementations • 26 Sep 2023 • Hyunouk Ko, Namjoon Suh, Xiaoming Huo

The recent success of neural networks in pattern recognition and classification problems suggests that neural networks possess qualities distinct from other more classical classifiers such as SVMs or boosting classifiers.

Binary Classification

Classification of Data Generated by Gaussian Mixture Models Using Deep ReLU Networks

no code implementations • 15 Aug 2023 • Tian-Yi Zhou, Xiaoming Huo

This paper studies the binary classification of unbounded data from ${\mathbb R}^d$ generated under Gaussian Mixture Models (GMMs) using deep ReLU neural networks.

Binary Classification, Classification

Conformalization of Sparse Generalized Linear Models

1 code implementation • 11 Jul 2023 • Etash Kumar Guha, Eugene Ndiaye, Xiaoming Huo

Given a sequence of observable variables $\{(x_1, y_1), \ldots, (x_n, y_n)\}$, the conformal prediction method estimates a confidence set for $y_{n+1}$ given $x_{n+1}$ that is valid for any finite sample size by merely assuming that the joint distribution of the data is permutation invariant.

Conformal Prediction, valid
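The construction described here can be illustrated with a generic split-conformal sketch: score residuals on a held-out calibration set, then widen the point prediction by a score quantile. This shows the conformal principle on a toy pre-fit model, not the paper's sparse-GLM conformalization; the model and data below are illustrative assumptions.

```python
import random

def split_conformal_band(xs, ys, predict, alpha=0.1):
    """Split conformal: score |y - yhat| on a held-out calibration half,
    then widen the point prediction by the (1 - alpha) score quantile."""
    idx = list(range(len(xs)))
    random.shuffle(idx)
    cal = idx[len(idx) // 2:]          # calibration half (model is pre-fit)
    scores = sorted(abs(ys[i] - predict(xs[i])) for i in cal)
    # Conservative finite-sample quantile index (clipped to the largest score).
    k = min(len(scores) - 1, int((1 - alpha) * (len(scores) + 1)))
    q = scores[k]
    return lambda x: (predict(x) - q, predict(x) + q)

# Toy usage: a pre-fit model y ~ 2x on noisy observations.
random.seed(0)
xs = [i / 50 for i in range(100)]
ys = [2 * x + random.gauss(0, 0.1) for x in xs]
band = split_conformal_band(xs, ys, lambda x: 2 * x)
lo, hi = band(0.5)   # interval around the point prediction 1.0
```

The only distributional assumption used is exchangeability of the data, which is what makes the coverage guarantee hold for any finite sample size.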

Generalization Bounds for Magnitude-Based Pruning via Sparse Matrix Sketching

no code implementations • 30 May 2023 • Etash Kumar Guha, Prasanjit Dubey, Xiaoming Huo

In this paper, we derive a novel bound on the generalization error of Magnitude-Based pruning of overparameterized neural networks.

Generalization Bounds

Adjusted Wasserstein Distributionally Robust Estimator in Statistical Learning

no code implementations • 27 Mar 2023 • Yiling Xie, Xiaoming Huo

We propose an adjusted Wasserstein distributionally robust estimator, based on a nonlinear transformation of the standard Wasserstein distributionally robust optimization (WDRO) estimator in statistical learning.

regression

A Survey of Numerical Algorithms that can Solve the Lasso Problems

no code implementations • 7 Mar 2023 • Yujie Zhao, Xiaoming Huo

In statistics, the least absolute shrinkage and selection operator (Lasso) is a regression method that performs both variable selection and regularization.

regression, Variable Selection
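One family of algorithms such a survey covers is proximal gradient descent (ISTA), where each step applies the soft-thresholding operator that produces the Lasso's sparse solutions. The toy sketch below is illustrative, not an algorithm taken from the survey:

```python
def soft_threshold(z, t):
    """Proximal map of t * |.|: shrink z toward zero by t."""
    return max(z - t, 0.0) if z > 0 else min(z + t, 0.0)

def lasso_ista(X, y, lam, step, iters=2000):
    """Minimize (1/2n)||y - Xb||^2 + lam * ||b||_1 by proximal gradient (ISTA)."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(iters):
        resid = [sum(X[i][j] * b[j] for j in range(p)) - y[i] for i in range(n)]
        grad = [sum(X[i][j] * resid[i] for i in range(n)) / n for j in range(p)]
        b = [soft_threshold(b[j] - step * grad[j], step * lam) for j in range(p)]
    return b

# y depends only on the first feature, so Lasso should zero out the second.
X = [[1.0, 0.1], [2.0, -0.2], [3.0, 0.05], [4.0, 0.15]]
y = [2.0, 4.0, 6.0, 8.0]
b = lasso_ista(X, y, lam=0.1, step=0.05)
```

The thresholding step is what performs variable selection: coefficients whose gradient signal stays below `step * lam` are pinned at exactly zero.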

Improved Rate of First Order Algorithms for Entropic Optimal Transport

no code implementations • 23 Jan 2023 • Yiling Luo, Yiling Xie, Xiaoming Huo

To compare, we prove that the computational complexity of the Stochastic Sinkhorn algorithm is $\widetilde{{O}}({n^2}/{\epsilon^2})$, which is slower than our accelerated primal-dual stochastic mirror algorithm.
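For context, the baseline Sinkhorn algorithm referenced here alternates row and column rescalings of the kernel matrix $K = e^{-C/\epsilon}$ until the transport plan matches both marginals. A toy sketch of the deterministic version (illustrative only; the paper's accelerated primal-dual stochastic mirror algorithm is not reproduced here):

```python
import math

def sinkhorn(C, r, c, eps=0.1, iters=500):
    """Entropic OT: alternately rescale K = exp(-C/eps) so the plan
    diag(u) K diag(v) matches the marginals r (rows) and c (columns)."""
    n, m = len(C), len(C[0])
    K = [[math.exp(-C[i][j] / eps) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        u = [r[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [c[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

# Two points each; cheap mass should stay on the diagonal of the plan.
C = [[0.0, 1.0], [1.0, 0.0]]
P = sinkhorn(C, r=[0.5, 0.5], c=[0.5, 0.5])
row_sums = [sum(row) for row in P]
```

Each iteration costs $O(n^2)$ matrix-vector work, which is the starting point for the complexity comparisons above.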

Covariance Estimators for the ROOT-SGD Algorithm in Online Learning

no code implementations • 2 Dec 2022 • Yiling Luo, Xiaoming Huo, Yajun Mei

Our second estimator is a Hessian-free estimator that overcomes the aforementioned limitation.

Solving a Special Type of Optimal Transport Problem by a Modified Hungarian Algorithm

no code implementations • 29 Oct 2022 • Yiling Xie, Yiling Luo, Xiaoming Huo

Computing the empirical Wasserstein distance in the independence test requires solving this special type of OT problem, where $m=n^2$.
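The assignment problem that the Hungarian algorithm solves in $O(n^3)$ can be stated at toy scale by brute force (exponential enumeration, for illustration only; the paper's contribution is an efficient method for the structured instance with $m = n^2$):

```python
from itertools import permutations

def assignment_cost(C):
    """Minimum-cost perfect matching: what the Hungarian algorithm finds in
    O(n^3), computed here by brute force over all n! permutations."""
    n = len(C)
    return min(sum(C[i][p[i]] for i in range(n)) for p in permutations(range(n)))

# Empirical 1-Wasserstein distance between two equally weighted point sets
# reduces to an assignment problem on the pairwise costs |x_i - y_j|.
xs, ys = [0.0, 1.0, 2.0], [0.5, 1.5, 2.5]
C = [[abs(x - y) for y in ys] for x in xs]
w1 = assignment_cost(C) / len(xs)   # 0.5: each point shifts by 0.5
```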

Learning Ability of Interpolating Deep Convolutional Neural Networks

no code implementations • 25 Oct 2022 • Tian-Yi Zhou, Xiaoming Huo

It is frequently observed that overparameterized neural networks generalize well.

Implicit Regularization Properties of Variance Reduced Stochastic Mirror Descent

no code implementations • 29 Apr 2022 • Yiling Luo, Xiaoming Huo, Yajun Mei

On the other hand, algorithms such as gradient descent and stochastic gradient descent have the implicit regularization property that leads to better performance in terms of the generalization errors.

The Directional Bias Helps Stochastic Gradient Descent to Generalize in Kernel Regression Models

no code implementations • 29 Apr 2022 • Yiling Luo, Xiaoming Huo, Yajun Mei

In addition, the Gradient Descent (GD) with a moderate or small step-size converges along the direction that corresponds to the smallest eigenvalue.

regression

An Accelerated Stochastic Algorithm for Solving the Optimal Transport Problem

1 code implementation • 2 Mar 2022 • Yiling Xie, Yiling Luo, Xiaoming Huo

A primal-dual accelerated stochastic gradient descent with variance reduction algorithm (PDASGD) is proposed to solve linear-constrained optimization problems.
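The variance-reduction ingredient can be illustrated with plain SVRG, the classical building block; PDASGD's primal-dual and acceleration layers are omitted, and the quadratic toy objective below is an assumption of this sketch, not the paper's setting:

```python
import random

def svrg(grad_i, n, x0, lr=0.1, epochs=30):
    """SVRG: each epoch snapshots the full gradient at a reference point,
    then corrects each stochastic gradient with that snapshot, shrinking
    the variance of the update direction."""
    x = x0
    for _ in range(epochs):
        snap = x
        full = sum(grad_i(i, snap) for i in range(n)) / n
        for _ in range(n):
            i = random.randrange(n)
            x = x - lr * (grad_i(i, x) - grad_i(i, snap) + full)
    return x

# Toy objective (1/n) * sum_i (x - a_i)^2 / 2, minimized at mean(a) = 2.5.
random.seed(0)
a = [1.0, 2.0, 3.0, 4.0]
xstar = svrg(lambda i, x: x - a[i], n=len(a), x0=0.0)
```

The correction term `grad_i(i, x) - grad_i(i, snap) + full` is unbiased for the full gradient but has vanishing variance as the iterate approaches the snapshot, which is what enables faster rates than plain SGD.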

Directional Bias Helps Stochastic Gradient Descent to Generalize in Nonparametric Model

no code implementations • 29 Sep 2021 • Yiling Luo, Xiaoming Huo, Yajun Mei

This paper studies the Stochastic Gradient Descent (SGD) algorithm in kernel regression.

regression
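A minimal sketch of SGD in kernel regression, the setting studied here: the iterate stays in the span of the kernel sections, $f(x) = \sum_j a_j k(x_j, x)$, and each stochastic step nudges one coefficient toward the residual at that sample. The Gaussian kernel, bandwidth, and learning rate below are illustrative choices, not the paper's:

```python
import math, random

def kernel_sgd(xs, ys, bandwidth=0.5, lr=0.5, epochs=200):
    """SGD for kernel regression: the iterate lives in the span of kernel
    sections, f(x) = sum_j a[j] * k(x_j, x); each step adjusts one
    coefficient toward the residual at that sample."""
    k = lambda s, t: math.exp(-((s - t) ** 2) / (2 * bandwidth ** 2))
    n = len(xs)
    a = [0.0] * n
    for _ in range(epochs):
        for i in random.sample(range(n), n):
            pred = sum(a[j] * k(xs[j], xs[i]) for j in range(n))
            a[i] += lr * (ys[i] - pred)      # stochastic functional gradient step
    return lambda x: sum(a[j] * k(xs[j], x) for j in range(n))

# Fit a smooth target on [0, 1] and predict between training points.
random.seed(1)
xs = [i / 10 for i in range(11)]
ys = [math.sin(x) for x in xs]
f = kernel_sgd(xs, ys)
err = abs(f(0.55) - math.sin(0.55))
```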

Generalization of Overparametrized Deep Neural Network Under Noisy Observations

no code implementations • ICLR 2022 • Namjoon Suh, Hyunouk Ko, Xiaoming Huo

We study the generalization properties of the overparameterized deep neural network (DNN) with Rectified Linear Unit (ReLU) activations.

Asymptotic Theory of $\ell_1$-Regularized PDE Identification from a Single Noisy Trajectory

no code implementations • 12 Mar 2021 • Yuchen He, Namjoon Suh, Xiaoming Huo, Sung Ha Kang, Yajun Mei

We provide a set of sufficient conditions which guarantee that, from a single trajectory data denoised by a Local-Polynomial filter, the support of $\mathbf{c}(\lambda)$ asymptotically converges to the true signed-support associated with the underlying PDE for sufficiently many data and a certain range of $\lambda$.

Accelerate the Warm-up Stage in the Lasso Computation via a Homotopic Approach

no code implementations • 26 Oct 2020 • Yujie Zhao, Xiaoming Huo

At the same time, each surrogate function is strictly convex, which enables a provable faster numerical rate of convergence.

Factor Analysis on Citation, Using a Combined Latent and Logistic Regression Model

no code implementations • 2 Dec 2019 • Namjoon Suh, Xiaoming Huo, Eric Heim, Lee Seversky

We propose a combined model, which integrates the latent factor model and the logistic regression model, for the citation network.

regression

A Distributed One-Step Estimator

no code implementations • 4 Nov 2015 • Cheng Huang, Xiaoming Huo

A potential application of the one-step approach is that one can use multiple machines to speed up large scale statistical inference with little compromise in the quality of estimators.
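The one-step idea can be sketched for a toy exponential model: average the local MLEs from each machine, then take a single Newton-Raphson step using the aggregated score and Hessian. The model and all names below are illustrative, not taken from the paper:

```python
import random

def one_step_estimator(chunks):
    """Distributed one-step idea: average the machines' local MLEs, then
    apply a single Newton-Raphson update with the aggregated score and
    Hessian.  Model (illustrative): X ~ Exponential(theta), MLE = 1/mean."""
    theta0 = sum(len(c) / sum(c) for c in chunks) / len(chunks)
    n = sum(len(c) for c in chunks)
    s = sum(sum(c) for c in chunks)
    score = n / theta0 - s               # d/dtheta of n*log(theta) - theta*s
    hess = -n / theta0 ** 2
    return theta0 - score / hess

# Four "machines", each holding a shard of Exponential(2.0) samples.
random.seed(0)
data = [random.expovariate(2.0) for _ in range(4000)]
chunks = [data[i::4] for i in range(4)]
theta = one_step_estimator(chunks)
```

Each machine only communicates a few summary statistics, which is why the approach scales to large distributed datasets with little loss in estimator quality.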
