Search Results for author: Yue M. Lu

Found 22 papers, 3 papers with code

Sharp Asymptotics of Kernel Ridge Regression Beyond the Linear Regime

no code implementations • 13 May 2022 • Hong Hu, Yue M. Lu

The generalization performance of kernel ridge regression (KRR) exhibits a multi-phased pattern that crucially depends on the scaling relationship between the sample size $n$ and the underlying dimension $d$.
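
As a toy illustration of the objects involved (not the paper's analysis), the sketch below fits KRR with an inner-product kernel on synthetic data; the target function, kernel, and scalings are assumptions of this example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, n_test, lam = 50, 500, 200, 1e-2   # the regime of interest is how n scales with d

X = rng.normal(size=(n, d)) / np.sqrt(d)
X_test = rng.normal(size=(n_test, d)) / np.sqrt(d)
target = lambda Z: np.tanh(Z.sum(axis=1))          # a simple nonlinear target (assumption)
y, y_test = target(X), target(X_test)

kernel = lambda A, B: np.exp(A @ B.T)              # an inner-product kernel f(<x, x'>)
alpha = np.linalg.solve(kernel(X, X) + lam * np.eye(n), y)   # KRR dual coefficients
pred = kernel(X_test, X) @ alpha
print("test MSE:", np.mean((pred - y_test) ** 2))
```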

An Equivalence Principle for the Spectrum of Random Inner-Product Kernel Matrices

no code implementations • 12 May 2022 • Yue M. Lu, Horng-Tzer Yau

The main insight of our work is a general equivalence principle: the spectrum of the random kernel matrix is asymptotically equivalent to that of a simpler matrix model, constructed as the linear combination of a (shifted) Wishart matrix and an independent matrix drawn from the Gaussian orthogonal ensemble.
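
A hedged numerical sketch of the two matrix models being compared; the coefficients c1 and c2 below are placeholders (in the paper they are determined by the Hermite expansion of the kernel function), so this only illustrates the form of the surrogate, not the exact correspondence.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 1000, 500
X = rng.normal(size=(d, n))

# Random inner-product kernel matrix K_ij = f(x_i . x_j / d), diagonal zeroed.
f = np.tanh
G = X.T @ X / d
K = f(G) - np.diag(np.diag(f(G)))

# Surrogate: linear combination of a (shifted) Wishart matrix and a GOE matrix.
c1, c2 = 1.0, 1.0                       # placeholder coefficients (see the paper)
W = G - np.trace(G) / n * np.eye(n)     # shifted Wishart part
A = rng.normal(size=(n, n))
GOE = (A + A.T) / np.sqrt(2 * n)        # Gaussian orthogonal ensemble
S = c1 * W + c2 * GOE

# The equivalence principle says the spectra asymptotically match for the
# correct c1, c2; here we just look at the edges of both spectra.
print(np.sort(np.linalg.eigvalsh(K))[:5])
print(np.sort(np.linalg.eigvalsh(S))[:5])
```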

Analysis of Random Sequential Message Passing Algorithms for Approximate Inference

no code implementations • 16 Feb 2022 • Burak Çakmak, Yue M. Lu, Manfred Opper

We analyze the dynamics of a random sequential message passing algorithm for approximate inference with large Gaussian latent variable models in a student-teacher scenario.

On the Inherent Regularization Effects of Noise Injection During Training

no code implementations • 15 Feb 2021 • Oussama Dhifallah, Yue M. Lu

Randomly perturbing networks during the training process is a commonly used approach to improving generalization performance.
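
A minimal sketch of the setup; logistic regression and Gaussian input noise are assumptions of this example, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, sigma, lr, steps = 400, 100, 0.5, 0.1, 500

X = rng.normal(size=(n, d)) / np.sqrt(d)
w_true = rng.normal(size=d)
y = np.sign(X @ w_true)                 # labels in {-1, +1}

w = np.zeros(d)
for _ in range(steps):
    Xn = X + sigma * rng.normal(size=X.shape) / np.sqrt(d)  # inject fresh noise each step
    p = 1 / (1 + np.exp(-Xn @ w))
    grad = Xn.T @ (p - (y + 1) / 2) / n                     # logistic-loss gradient
    w -= lr * grad
print("alignment:", w @ w_true / (np.linalg.norm(w) * np.linalg.norm(w_true)))
```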

Householder Dice: A Matrix-Free Algorithm for Simulating Dynamics on Gaussian and Random Orthogonal Ensembles

1 code implementation • 19 Jan 2021 • Yue M. Lu

This paper proposes a new algorithm, named Householder Dice (HD), for simulating dynamics on dense random matrix ensembles with translation-invariant properties.
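
A loose illustration of the matrix-free idea only: the paper's construction uses Householder reflectors to represent Haar orthogonal and GOE matrices implicitly, while the toy class below handles the simpler case of an iid Gaussian ensemble, where the matrix's action on each new orthogonal direction can be sampled on demand.

```python
import numpy as np

class LazyGaussian:
    """Matrix-free simulation of x -> A x for A with iid N(0, 1/n) entries.

    Only the action of A on the queried subspace is ever sampled, so T
    matrix-vector products cost O(n*T) memory instead of O(n^2) for a
    dense A. (A sketch of the idea behind Householder Dice, specialized
    to the iid Gaussian ensemble.)
    """

    def __init__(self, n, rng=None):
        self.n = n
        self.rng = rng or np.random.default_rng()
        self.Q = np.zeros((n, 0))    # orthonormal basis of queried directions
        self.Y = np.zeros((n, 0))    # their images A @ Q, sampled on demand

    def matvec(self, x):
        a = self.Q.T @ x             # coefficients in the explored subspace
        r = x - self.Q @ a           # unexplored component
        out = self.Y @ a
        rn = np.linalg.norm(r)
        if rn > 1e-12:
            q = r / rn
            y = self.rng.normal(0.0, 1.0 / np.sqrt(self.n), self.n)  # fresh image A @ q
            self.Q = np.column_stack([self.Q, q])
            self.Y = np.column_stack([self.Y, y])
            out = out + rn * y
        return out

sim = LazyGaussian(10_000)
x = np.ones(10_000) / 100.0
for _ in range(5):
    x = np.tanh(sim.matvec(x))       # e.g. a nonlinear power-iteration dynamic
```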

Phase Transitions in Transfer Learning for High-Dimensional Perceptrons

no code implementations • 6 Jan 2021 • Oussama Dhifallah, Yue M. Lu

Transfer learning seeks to improve the generalization performance of a target task by exploiting the knowledge learned from a related source task.

Transfer Learning

Construction of optimal spectral methods in phase retrieval

1 code implementation • 8 Dec 2020 • Antoine Maillard, Florent Krzakala, Yue M. Lu, Lenka Zdeborová

We consider the phase retrieval problem, in which the observer wishes to recover an $n$-dimensional real or complex signal $\mathbf{X}^\star$ from the (possibly noisy) observation of $|\mathbf{\Phi} \mathbf{X}^\star|$, where $\mathbf{\Phi}$ is a matrix of size $m \times n$.
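
A minimal sketch of this observation model (real-valued, noiseless, Gaussian sensing matrix assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 200, 600                       # signal dimension n, number of measurements m

x_star = rng.normal(size=n)
x_star /= np.linalg.norm(x_star)      # unit-norm ground-truth signal
Phi = rng.normal(size=(m, n)) / np.sqrt(n)
y = np.abs(Phi @ x_star)              # phaseless observations |Phi x*|
```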

Information Theory Disordered Systems and Neural Networks

Generalization error in high-dimensional perceptrons: Approaching Bayes error with convex optimization

no code implementations • NeurIPS 2020 • Benjamin Aubin, Florent Krzakala, Yue M. Lu, Lenka Zdeborová

We consider a commonly studied supervised classification task on a synthetic dataset whose labels are generated by feeding a one-layer neural network with random iid inputs.

Generalized Approximate Survey Propagation for High-Dimensional Estimation

no code implementations • 13 May 2019 • Luca Saglietti, Yue M. Lu, Carlo Lucibello

In Generalized Linear Estimation (GLE) problems, we seek to estimate a signal that is observed through a linear transform followed by a component-wise, possibly nonlinear and noisy, channel.
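
A minimal sketch of the GLE observation model; the sparse signal prior and the sign channel below are assumptions of this example.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 200, 400
x = rng.choice([0.0, 1.0], size=n, p=[0.7, 0.3])   # e.g. a sparse signal (assumption)
A = rng.normal(size=(m, n)) / np.sqrt(n)           # linear transform
z = A @ x
y = np.sign(z) + 0.1 * rng.normal(size=m)          # component-wise nonlinear, noisy channel
```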

SLOPE for Sparse Linear Regression: Asymptotics and Optimal Regularization

1 code implementation • 27 Mar 2019 • Hong Hu, Yue M. Lu

In sparse linear regression, the SLOPE estimator generalizes LASSO by penalizing different coordinates of the estimate according to their magnitudes.
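
Concretely, with a non-increasing weight sequence $\lambda_1 \ge \dots \ge \lambda_p$, SLOPE pairs the largest weight with the largest coordinate of the estimate, and LASSO is recovered when all weights are equal. A small sketch of the penalty:

```python
import numpy as np

def slope_penalty(beta, lam):
    """SLOPE penalty: sum_i lam[i] * |beta|_(i), with lam sorted non-increasing."""
    lam = np.sort(lam)[::-1]
    return np.sum(lam * np.sort(np.abs(beta))[::-1])

beta = np.array([3.0, -0.5, 1.2, 0.0])
lam = np.array([1.0, 0.8, 0.6, 0.4])
print(slope_penalty(beta, lam))                 # SLOPE penalty
print(slope_penalty(beta, np.full(4, 0.7)))     # constant weights recover the LASSO penalty
```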

Information Theory Statistics Theory

Nonconvex Optimization Meets Low-Rank Matrix Factorization: An Overview

no code implementations • 25 Sep 2018 • Yuejie Chi, Yue M. Lu, Yuxin Chen

Substantial progress has been made recently on developing provably accurate and efficient algorithms for low-rank matrix factorization via nonconvex optimization.
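
A minimal sketch of the flavor of algorithm surveyed: gradient descent run directly on the low-rank factor, which makes the objective nonconvex in U. The problem sizes and step size are assumptions of this example.

```python
import numpy as np

rng = np.random.default_rng(5)
n, r, lr, steps = 100, 5, 0.1, 500

U_true = rng.normal(size=(n, r))
M = U_true @ U_true.T / n                 # ground-truth rank-r matrix, normalized

# Gradient descent on the nonconvex factored objective f(U) = ||UU^T - M||_F^2 / 4.
U = 0.1 * rng.normal(size=(n, r)) / np.sqrt(n)   # small random initialization
for _ in range(steps):
    U -= lr * (U @ (U.T @ U) - M @ U)     # grad f(U) = (UU^T - M) U
print("relative error:", np.linalg.norm(U @ U.T - M) / np.linalg.norm(M))
```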

Matrix Completion

MMSE Approximation For Sparse Coding Algorithms Using Stochastic Resonance

no code implementations • 26 Jun 2018 • Dror Simon, Jeremias Sulam, Yaniv Romano, Yue M. Lu, Michael Elad

The proposed method adds controlled noise to the input and estimates a sparse representation from the perturbed signal.
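
A rough sketch of the idea, here using a plain ISTA sparse coder and simple averaging of the codes over noisy copies of the input (details differ from the paper's method):

```python
import numpy as np

def ista(D, y, lam=0.1, steps=200):
    """Plain ISTA for min_a 0.5 * ||y - D a||^2 + lam * ||a||_1."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        g = a + D.T @ (y - D @ a) / L
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(6)
n, p, k, sigma = 64, 128, 5, 0.1
D = rng.normal(size=(n, p)) / np.sqrt(n)
a0 = np.zeros(p); a0[rng.choice(p, k, replace=False)] = 1.0
y = D @ a0 + 0.05 * rng.normal(size=n)

# Stochastic-resonance style: add controlled noise to the input, average the codes.
codes = [ista(D, y + sigma * rng.normal(size=n)) for _ in range(20)]
a_hat = np.mean(codes, axis=0)               # smoothed (MMSE-like) estimate
print("MSE vs a0:", np.mean((a_hat - a0) ** 2))
```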

Streaming PCA and Subspace Tracking: The Missing Data Case

no code implementations • 12 Jun 2018 • Laura Balzano, Yuejie Chi, Yue M. Lu

This survey article reviews a variety of classical and recent algorithms for solving this problem with low computational and memory complexities, particularly those applicable in the big data regime with missing data.
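
As a toy illustration of the problem setting (not one of the surveyed algorithms verbatim): an Oja-style streaming update that sees each sample once and only on its observed coordinates, with inverse-probability filling as an assumption of this example.

```python
import numpy as np

rng = np.random.default_rng(7)
d, T, lr, p_obs = 50, 5000, 0.05, 0.7

u_true = rng.normal(size=d); u_true /= np.linalg.norm(u_true)
u = rng.normal(size=d); u /= np.linalg.norm(u)

for _ in range(T):
    x = u_true * rng.normal() + 0.3 * rng.normal(size=d)   # spiked data stream
    mask = rng.random(d) < p_obs                           # missing-data pattern
    x_obs = np.where(mask, x, 0.0) / p_obs                 # inverse-probability fill
    u += lr * (x_obs @ u) * x_obs                          # Oja-style rank-one update
    u /= np.linalg.norm(u)
print("|<u, u_true>| =", abs(u @ u_true))
```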

Decision Making

A Solvable High-Dimensional Model of GAN

no code implementations • NeurIPS 2019 • Chuang Wang, Hong Hu, Yue M. Lu

We present a theoretical analysis of the training process for a single-layer GAN fed by high-dimensional input data.

Subspace Estimation from Incomplete Observations: A High-Dimensional Analysis

no code implementations • 17 May 2018 • Chuang Wang, Yonina C. Eldar, Yue M. Lu

In addition to providing asymptotically exact predictions of the dynamic performance of the algorithms, our high-dimensional analysis yields several insights, including an asymptotic equivalence between Oja's method and GROUSE, and a precise scaling relationship linking the amount of missing data to the signal-to-noise ratio.

Scaling Limit: Exact and Tractable Analysis of Online Learning Algorithms with Applications to Regularized Regression and PCA

no code implementations • 8 Dec 2017 • Chuang Wang, Jonathan Mattingly, Yue M. Lu

In addition to characterizing the dynamic performance of online learning algorithms, our asymptotic analysis also provides useful insights.

Online Learning

The Scaling Limit of High-Dimensional Online Independent Component Analysis

no code implementations • NeurIPS 2017 • Chuang Wang, Yue M. Lu

As the ambient dimension tends to infinity, and with proper time scaling, we show that the time-varying joint empirical measure of the target feature vector and the estimates provided by the algorithm will converge weakly to a deterministic measure-valued process that can be characterized as the unique solution of a nonlinear PDE.

Phase Transitions of Spectral Initialization for High-Dimensional Nonconvex Estimation

no code implementations • 21 Feb 2017 • Yue M. Lu, Gen Li

We study a spectral initialization method that serves a key role in recent work on estimating signals in nonconvex settings.
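
A minimal sketch of such a spectral initialization for real-valued phase retrieval; the preprocessing function T below is a simple bounded choice made for this example, not an optimized one.

```python
import numpy as np

rng = np.random.default_rng(8)
n, m = 100, 800
x = rng.normal(size=n); x /= np.linalg.norm(x)
A = rng.normal(size=(m, n))
y = np.abs(A @ x)                            # phaseless (nonconvex) observations

T = y ** 2 / (1 + y ** 2)                    # bounded preprocessing (an assumption)
D = (A * T[:, None]).T @ A / m               # D = (1/m) sum_i T(y_i) a_i a_i^T
w, V = np.linalg.eigh(D)
x_hat = V[:, -1]                             # spectral initializer: top eigenvector
print("cosine similarity:", abs(x_hat @ x))
```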

Understanding Symmetric Smoothing Filters: A Gaussian Mixture Model Perspective

no code implementations • 1 Jan 2016 • Stanley H. Chan, Todd Zickler, Yue M. Lu

We show that Sinkhorn-Knopp is equivalent to an Expectation-Maximization (EM) algorithm for learning a Gaussian mixture model of the image patches.
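
For reference, the Sinkhorn-Knopp iteration alternately normalizes the rows and columns of a nonnegative weight matrix until it is approximately doubly stochastic:

```python
import numpy as np

def sinkhorn_knopp(W, iters=100):
    """Alternately normalize rows and columns of a nonnegative matrix."""
    W = W.copy()
    for _ in range(iters):
        W /= W.sum(axis=1, keepdims=True)   # row normalization
        W /= W.sum(axis=0, keepdims=True)   # column normalization
    return W

rng = np.random.default_rng(9)
W = np.exp(-rng.random((5, 5)))             # e.g. a smoothing-filter weight matrix
Ws = sinkhorn_knopp(W)
print(Ws.sum(axis=0), Ws.sum(axis=1))       # both close to all-ones
```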

Image Denoising

Monte Carlo non local means: Random sampling for large-scale image filtering

no code implementations • 27 Dec 2013 • Stanley H. Chan, Todd Zickler, Yue M. Lu

In particular, our error probability bounds show that, at any given sampling ratio, the probability for MCNLM to have a large deviation from the original NLM solution decays exponentially as the size of the image or database grows.
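
A rough 1-D sketch of the sampling idea (Gaussian patch weights and a toy signal are assumptions of this example): instead of averaging over all patches as in exact NLM, average over a random subsample.

```python
import numpy as np

rng = np.random.default_rng(10)
N, patch, h, ratio = 2000, 5, 0.2, 0.1       # signal length, patch size, bandwidth, sampling ratio

x = np.sin(np.linspace(0, 8 * np.pi, N)) + 0.3 * rng.normal(size=N)
pad = patch // 2
patches = np.lib.stride_tricks.sliding_window_view(np.pad(x, pad, mode="edge"), patch)

i = 1000                                     # pixel to denoise
full_w = np.exp(-np.sum((patches - patches[i]) ** 2, axis=1) / h ** 2)
nlm = full_w @ x / full_w.sum()              # exact NLM estimate at pixel i

S = rng.choice(N, int(ratio * N), replace=False)   # Monte Carlo subsample
w = np.exp(-np.sum((patches[S] - patches[i]) ** 2, axis=1) / h ** 2)
mcnlm = w @ x[S] / w.sum()                   # MCNLM estimate from the subsample
print(nlm, mcnlm)
```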

Image Denoising
