Search Results for author: Shahar Mendelson

Found 14 papers, 0 papers with code

Fitting an ellipsoid to a quadratic number of random points

no code implementations • 3 Jul 2023 • Afonso S. Bandeira, Antoine Maillard, Shahar Mendelson, Elliot Paquette

We consider the problem $(\mathrm{P})$ of fitting $n$ standard Gaussian random vectors in $\mathbb{R}^d$ to the boundary of a centered ellipsoid, as $n, d \to \infty$.
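In matrix form, $(\mathrm{P})$ asks for a positive semidefinite matrix $A$ with $x_i^\top A x_i = 1$ for every sample point. A minimal numerical sketch of one natural candidate, a minimum-norm least-squares fit (an illustration, not necessarily the paper's construction):

```python
import numpy as np

def fit_ellipsoid_ls(X):
    """Least-squares candidate for problem (P): find a symmetric matrix A
    with x_i^T A x_i = 1 for every row x_i of X, then check whether A is
    positive semidefinite, i.e. whether it defines a centered ellipsoid."""
    n, d = X.shape
    # Each constraint x^T A x = <A, x x^T> = 1 is linear in the entries
    # of A, so stack the flattened outer products as a design matrix.
    M = np.einsum('ni,nj->nij', X, X).reshape(n, d * d)
    a, *_ = np.linalg.lstsq(M, np.ones(n), rcond=None)
    A = (a.reshape(d, d) + a.reshape(d, d).T) / 2   # symmetrize
    is_psd = np.linalg.eigvalsh(A).min() >= -1e-8
    return A, is_psd

rng = np.random.default_rng(0)
d = 20
n = d * d // 10                  # n proportional to d^2, the regime studied
X = rng.standard_normal((n, d))
A, is_psd = fit_ellipsoid_ls(X)
print(np.allclose(np.einsum('ni,ij,nj->n', X, A, X), 1.0), is_psd)
```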

Multivariate mean estimation with direction-dependent accuracy

no code implementations • 22 Oct 2020 • Gábor Lugosi, Shahar Mendelson

We consider the problem of estimating the mean of a random vector based on $N$ independent, identically distributed observations.

Learning bounded subsets of $L_p$

no code implementations • 4 Feb 2020 • Shahar Mendelson

We study learning problems in which the underlying class is a bounded subset of $L_p$ and the target $Y$ belongs to $L_p$.

Mean estimation and regression under heavy-tailed distributions--a survey

no code implementations • 10 Jun 2019 • Gábor Lugosi, Shahar Mendelson

We dedicate a section to statistical learning problems, in particular regression function estimation, in the presence of possibly heavy-tailed data.

regression
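The estimators discussed in this line of work are built around the median-of-means idea; a minimal univariate sketch (my illustration, not code from the survey):

```python
import numpy as np

def median_of_means(x, k):
    """Split the sample into k equal blocks, average each block, and
    return the median of the block means.  Under a finite-variance
    assumption alone this achieves sub-Gaussian deviation bounds."""
    x = np.asarray(x)
    n = (len(x) // k) * k                 # drop the remainder
    return np.median(x[:n].reshape(k, -1).mean(axis=1))

# Heavy-tailed example where the empirical mean is unstable.
rng = np.random.default_rng(1)
sample = rng.pareto(2.1, size=10_000)     # finite variance, heavy right tail
print(median_of_means(sample, k=30), sample.mean())
```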

Approximating the covariance ellipsoid

no code implementations • 15 Apr 2018 • Shahar Mendelson

The slabs are generated using $X_1, \ldots, X_N$, and under minimal assumptions on $X$ (e.g., $X$ can be heavy-tailed) it suffices that $N = c_1 d \eta^{-4}\log(2/\eta)$ to ensure that $(1-\eta){\cal K} \subset {\cal B} \subset (1+\eta){\cal K}$.

Extending the scope of the small-ball method

no code implementations • 4 Sep 2017 • Shahar Mendelson

The small-ball method was introduced as a way of obtaining a high probability, isomorphic lower bound on the quadratic empirical process, under weak assumptions on the indexing class.
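For context, one standard formulation of the small-ball assumption and the resulting isomorphic lower bound (stated here with indicative constants; the paper's precise statement may differ):

```latex
% Small-ball assumption: there exist \kappa > 0 and \delta \in (0,1) with
\inf_{f \in F} \; \mathbb{P}\bigl( |f(X)| \ge \kappa \|f\|_{L_2} \bigr) \ge \delta .
% Then, with probability at least 1 - 2\exp(-c \delta^2 N), uniformly over
% the part of F beyond a complexity-dependent fixed point,
\frac{1}{N} \sum_{i=1}^{N} f^2(X_i) \;\ge\; \frac{\kappa^2 \delta}{2} \, \|f\|_{L_2}^2 .
```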

An optimal unrestricted learning procedure

no code implementations • 17 Jul 2017 • Shahar Mendelson

We study learning problems involving arbitrary classes of functions $F$, distributions $X$ and targets $Y$.

Column normalization of a random measurement matrix

no code implementations • 21 Feb 2017 • Shahar Mendelson

In this note we answer a question of G. Lecué by showing that column normalization of a random matrix with i.i.d. entries need not lead to good sparse recovery properties, even if the generating random variable has a reasonable moment growth.
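The normalization in question is simply dividing each column by its Euclidean norm; a minimal sketch of the operation (the heavy-tailed entry distribution below is an illustrative choice, not the note's counterexample):

```python
import numpy as np

rng = np.random.default_rng(2)
N, d = 100, 400
A = rng.standard_t(df=3, size=(N, d))     # iid heavy-tailed entries
# Column normalization: every column is rescaled onto the unit sphere.
A_normalized = A / np.linalg.norm(A, axis=0, keepdims=True)
print(np.linalg.norm(A_normalized, axis=0)[:5])   # all ones
```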

Sub-Gaussian estimators of the mean of a random vector

no code implementations • 1 Feb 2017 • Gábor Lugosi, Shahar Mendelson

We study the problem of estimating the mean of a random vector $X$ given a sample of $N$ independent, identically distributed points.
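A classical multivariate construction in this line of work is the geometric median of block means (Minsker's estimator; the estimator of this paper is a different, sharper construction). A sketch via Weiszfeld iterations:

```python
import numpy as np

def geometric_median_of_means(X, k, iters=100):
    """Split the sample into k blocks, average each block, then take the
    geometric median of the block means by Weiszfeld's algorithm."""
    n, d = (len(X) // k) * k, X.shape[1]
    means = X[:n].reshape(k, -1, d).mean(axis=1)
    z = means.mean(axis=0)                        # initial guess
    for _ in range(iters):
        dist = np.maximum(np.linalg.norm(means - z, axis=1), 1e-12)
        w = 1.0 / dist
        z = (w[:, None] * means).sum(axis=0) / w.sum()
    return z

rng = np.random.default_rng(3)
X = rng.standard_t(df=3, size=(5000, 10))         # heavy-tailed, mean zero
print(np.linalg.norm(geometric_median_of_means(X, k=25)))
```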

Regularization, sparse recovery, and median-of-means tournaments

no code implementations • 15 Jan 2017 • Gábor Lugosi, Shahar Mendelson

A regularized risk minimization procedure for regression function estimation is introduced that achieves near optimal accuracy and confidence under general conditions, including heavy-tailed predictor and response variables.

regression
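The building block of a median-of-means tournament is a pairwise "match" over data blocks; a hedged sketch of that comparison (the paper's full procedure adds a regularization term and a champion-selection stage):

```python
import numpy as np

def mom_match(f, g, X, y, k):
    """Declare f the winner over g if f's empirical squared loss is
    smaller on a strict majority of the k data blocks."""
    n = (len(y) // k) * k
    loss_f = ((f(X[:n]) - y[:n]) ** 2).reshape(k, -1).mean(axis=1)
    loss_g = ((g(X[:n]) - y[:n]) ** 2).reshape(k, -1).mean(axis=1)
    return np.sum(loss_f < loss_g) > k / 2

rng = np.random.default_rng(4)
X = rng.standard_normal(2000)
y = 2 * X + rng.standard_t(df=3, size=2000)       # heavy-tailed noise
print(mom_match(lambda x: 2 * x, lambda x: 0 * x, X, y, k=20))  # True
```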

`local' vs. `global' parameters -- breaking the gaussian complexity barrier

no code implementations • 9 Apr 2015 • Shahar Mendelson

We show that if $F$ is a convex class of functions that is $L$-subgaussian, the error rate of learning problems generated by independent noise is equivalent to a fixed point determined by `local' covering estimates of the class, rather than by the gaussian averages.

On aggregation for heavy-tailed classes

no code implementations • 25 Feb 2015 • Shahar Mendelson

We introduce an alternative to the notion of `fast rate' in Learning Theory, which coincides with the optimal error rate when the given class happens to be convex and regular in some sense.

Learning Theory

Learning without Concentration for General Loss Functions

no code implementations • 13 Oct 2014 • Shahar Mendelson

We study prediction and estimation problems using empirical risk minimization, relative to a general convex loss function.

Learning without Concentration

no code implementations • 1 Jan 2014 • Shahar Mendelson

We obtain sharp bounds on the performance of Empirical Risk Minimization performed in a convex class and with respect to the squared loss, without assuming that class members and the target are bounded functions or have rapidly decaying tails.
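As a concrete instance of the setup (an illustration, not the paper's analysis): ERM with the squared loss over a convex class of linear functionals, via projected gradient descent on the empirical risk.

```python
import numpy as np

def erm_squared_loss(X, y, R, steps=500, lr=0.1):
    """Empirical risk minimization over the convex class
    {<t, .> : ||t||_2 <= R} with the squared loss."""
    n, d = X.shape
    t = np.zeros(d)
    for _ in range(steps):
        t -= lr * 2 * X.T @ (X @ t - y) / n       # gradient of empirical risk
        norm = np.linalg.norm(t)
        if norm > R:                              # project back onto the ball
            t *= R / norm
    return t

rng = np.random.default_rng(5)
X = rng.standard_normal((1000, 10))
y = X @ np.ones(10) + rng.standard_t(df=3, size=1000)   # heavy-tailed noise
print(np.round(erm_squared_loss(X, y, R=5.0), 2))
```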
