Search Results for author: Cosme Louart

Found 6 papers, 1 paper with code

Spectral properties of sample covariance matrices arising from random matrices with independent non identically distributed columns

no code implementations • 6 Sep 2021 • Cosme Louart, Romain Couillet

Given a random matrix $X= (x_1,\ldots, x_n)\in \mathcal M_{p, n}$ with independent columns satisfying concentration of measure hypotheses, and a parameter $z$ whose distance to the spectrum of $\frac{1}{n} XX^T$ should not depend on $p, n$, it was previously shown that the functionals $\text{tr}(AR(z))$, for $R(z) = (\frac{1}{n}XX^T- zI_p)^{-1}$ and $A\in \mathcal M_{p}$ deterministic, have a standard deviation of order $O(\|A\|_* / \sqrt n)$.
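
A minimal NumPy sketch (an illustration, not code from the paper) of the fluctuation scaling described above: Gaussian columns stand in for general concentrated vectors, $A$ is taken as $I_p/p$ so that $\|A\|_* = 1$, and $z = -1$ lies at distance at least 1 from the spectrum of $\frac{1}{n}XX^T$. The rescaled standard deviation of $\text{tr}(AR(z))$ should stay roughly constant as $n$ grows.

import numpy as np

rng = np.random.default_rng(0)

def trace_functional_std(p, n, z=-1.0, trials=100):
    A = np.eye(p) / p                     # deterministic test matrix with ||A||_* = 1
    vals = []
    for _ in range(trials):
        X = rng.standard_normal((p, n))   # independent (here i.i.d. Gaussian) columns
        R = np.linalg.inv(X @ X.T / n - z * np.eye(p))   # resolvent R(z)
        vals.append(np.trace(A @ R))
    return np.std(vals)

for n in (100, 400, 1600):
    p = n // 2
    print(n, trace_functional_std(p, n) * np.sqrt(n))    # roughly constant in n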

Concentration of measure and generalized product of random vectors with an application to Hanson-Wright-like inequalities

no code implementations • 16 Feb 2021 • Cosme Louart, Romain Couillet

Starting from concentration of measure hypotheses on $m$ random vectors $Z_1,\ldots, Z_m$, this article provides an expression for the concentration of functionals $\phi(Z_1,\ldots, Z_m)$ whose variations in each variable depend on the product of the norms (or semi-norms) of the other variables (as if $\phi$ were a product).

BIG-bench Machine Learning
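
As a hedged illustration of the kind of "product"-type functional covered by such Hanson-Wright-like bounds (a toy example, not the article's setting), the bilinear form $\phi(Z_1, Z_2) = \frac{1}{p} Z_1^T M Z_2$ varies in each variable proportionally to the norm of the other; with Gaussian vectors standing in for concentrated ones, its standard deviation decays like $1/\sqrt p$.

import numpy as np

rng = np.random.default_rng(1)

def bilinear_std(p, trials=2000):
    M = rng.standard_normal((p, p)) / np.sqrt(p)    # fixed matrix of bounded operator norm
    Z1 = rng.standard_normal((trials, p))           # independent "concentrated" vectors
    Z2 = rng.standard_normal((trials, p))
    phi = np.einsum('ti,ij,tj->t', Z1, M, Z2) / p   # phi(Z1, Z2) for each trial
    return phi.std()

for p in (100, 400, 1600):
    print(p, bilinear_std(p) * np.sqrt(p))          # roughly constant: std ~ 1/sqrt(p)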

A Concentration of Measure Framework to study convex problems and other implicit formulation problems in machine learning

no code implementations • 19 Oct 2020 • Cosme Louart

This paper provides a framework to show the concentration of solutions $Y^*$ to convex minimization problems whose objective function $\phi(X)(Y)$ depends on a random vector $X$ satisfying concentration of measure hypotheses.

regression
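
A toy numerical sketch of this phenomenon (ridge regression is chosen here as a concrete convex problem, not the paper's general setting): the minimizer depends on the random data $X$, yet a fixed linear functional of the solution fluctuates less and less as the dimensions grow.

import numpy as np

rng = np.random.default_rng(2)

def solution_functional_std(p, n, lam=1.0, trials=100):
    a = np.ones(p) / np.sqrt(p)                      # fixed unit-norm test direction
    vals = []
    for _ in range(trials):
        X = rng.standard_normal((n, p))              # random design matrix
        y = X[:, 0] + rng.standard_normal(n)         # noisy target
        # minimizer of ||Xw - y||^2 / n + lam * ||w||^2
        w = np.linalg.solve(X.T @ X / n + lam * np.eye(p), X.T @ y / n)
        vals.append(a @ w)
    return np.std(vals)

for n in (100, 400, 1600):
    print(n, solution_functional_std(p=n // 2, n=n))  # shrinks roughly like 1/sqrt(n)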

A Concentration of Measure and Random Matrix Approach to Large Dimensional Robust Statistics

no code implementations • 17 Jun 2020 • Cosme Louart, Romain Couillet

This article studies the \emph{robust covariance matrix estimation} of a data collection $X = (x_1,\ldots, x_n)$ with $x_i = \sqrt \tau_i z_i + m$, where $z_i \in \mathbb R^p$ is a \textit{concentrated vector} (e.g., an elliptical random vector), $m\in \mathbb R^p$ is a deterministic signal and $\tau_i\in \mathbb R$ is a scalar perturbation of possibly large amplitude, under the assumption that both $n$ and $p$ are large.
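
A hedged sketch of a Tyler-type robust scatter estimator computed by fixed-point iteration on data generated as above; this is a generic robust M-estimator with crude centering, not necessarily the exact functional analyzed in the article.

import numpy as np

rng = np.random.default_rng(3)

p, n = 50, 500
m = np.ones(p)                                    # deterministic signal
tau = rng.exponential(scale=5.0, size=n)          # scalar perturbations of large amplitude
Z = rng.standard_normal((n, p))                   # concentrated (here Gaussian) vectors
X = np.sqrt(tau)[:, None] * Z + m                 # rows are the samples x_i

Xc = X - X.mean(axis=0)                           # crude centering
C = np.eye(p)
for _ in range(100):                              # fixed-point iteration
    d = np.einsum('ij,jk,ik->i', Xc, np.linalg.inv(C), Xc)   # x_i^T C^{-1} x_i
    C_new = (p / n) * (Xc.T / d) @ Xc             # (p/n) * sum_i x_i x_i^T / d_i
    if np.linalg.norm(C_new - C) < 1e-8 * np.linalg.norm(C):
        break
    C = C_new
print(np.linalg.eigvalsh(C)[-5:])                 # top eigenvalues of the robust estimate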

Random Matrix Theory Proves that Deep Learning Representations of GAN-data Behave as Gaussian Mixtures

no code implementations • ICML 2020 • Mohamed El Amine Seddik, Cosme Louart, Mohamed Tamaazousti, Romain Couillet

This paper shows that deep learning (DL) representations of data produced by generative adversarial nets (GANs) are random vectors which fall within the class of so-called \textit{concentrated} random vectors.
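
A rough numerical illustration of this "Gaussian equivalence" (a random-feature map is used below as a cheap stand-in for a trained DL representation of GAN data, so this is only a caricature of the paper's setting): the Gram matrix of the nonlinear features has nearly the same spectrum as that of Gaussian vectors with matched mean and covariance.

import numpy as np

rng = np.random.default_rng(4)

p, d, n = 200, 100, 1000
W = rng.standard_normal((p, d)) / np.sqrt(d)
X = rng.standard_normal((d, n))                 # inputs (stand-in for GAN samples)
F = np.maximum(W @ X, 0)                        # ReLU features, shape (p, n)

mu, C = F.mean(axis=1), np.cov(F)               # matched first two moments
G = rng.multivariate_normal(mu, C, size=n).T    # Gaussian "equivalent" features

eig_feat = np.linalg.eigvalsh(F @ F.T / n)
eig_gauss = np.linalg.eigvalsh(G @ G.T / n)
print(np.round(eig_feat[-5:], 2))
print(np.round(eig_gauss[-5:], 2))              # spectra should nearly match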

A Random Matrix Approach to Neural Networks

1 code implementation • 17 Feb 2017 • Cosme Louart, Zhenyu Liao, Romain Couillet

This article studies the Gram random matrix model $G=\frac{1}{T}\Sigma^{\rm T}\Sigma$, $\Sigma=\sigma(WX)$, classically found in the analysis of random feature maps and random neural networks, where $X=[x_1,\ldots, x_T]\in{\mathbb R}^{p\times T}$ is a (data) matrix of bounded norm, $W\in{\mathbb R}^{n\times p}$ is a matrix of independent zero-mean unit-variance entries, and $\sigma:{\mathbb R}\to{\mathbb R}$ is a Lipschitz continuous (activation) function, with $\sigma(WX)$ understood entry-wise.

LEMMA
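
A minimal sketch of the Gram model defined above (an illustration with arbitrary sizes and $\sigma = \tanh$, not the paper's released code): two independent draws of $W$ produce nearly identical spectra for $G$, illustrating the concentration of its spectral measure.

import numpy as np

n, p, T = 400, 200, 800
X = np.random.default_rng(5).standard_normal((p, T)) / np.sqrt(p)   # data matrix of bounded norm
sigma = np.tanh                                                      # Lipschitz activation

def gram_spectrum(seed):
    W = np.random.default_rng(seed).standard_normal((n, p))   # zero-mean unit-variance entries
    Sigma = sigma(W @ X)                                       # sigma applied entry-wise
    G = Sigma.T @ Sigma / T
    return np.linalg.eigvalsh(G)

e1, e2 = gram_spectrum(10), gram_spectrum(11)
print(np.round(e1[-5:], 3))
print(np.round(e2[-5:], 3))        # top eigenvalues concentrate around the same values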
