22 Oct 2020 • Gábor Lugosi, Shahar Mendelson

We consider the problem of estimating the mean of a random vector based on $N$ independent, identically distributed observations.

4 Feb 2020 • Shahar Mendelson

We study learning problems in which the underlying class is a bounded subset of $L_p$ and the target $Y$ belongs to $L_p$.

10 Jun 2019 • Gábor Lugosi, Shahar Mendelson

We dedicate a section to statistical learning problems, in particular regression function estimation, in the presence of possibly heavy-tailed data.

15 Apr 2018 • Shahar Mendelson

The slabs are generated using $X_1, \ldots, X_N$, and under minimal assumptions on $X$ (e.g., $X$ can be heavy-tailed) it suffices that $N = c_1 d \eta^{-4}\log(2/\eta)$ to ensure that $(1-\eta){\cal K} \subset {\cal B} \subset (1+\eta){\cal K}$.

4 Sep 2017 • Shahar Mendelson

The small-ball method was introduced as a way of obtaining a high-probability, isomorphic lower bound on the quadratic empirical process under weak assumptions on the indexing class.

17 Jul 2017 • Shahar Mendelson

We study learning problems involving arbitrary classes of functions $F$, distributions $X$ and targets $Y$.

21 Feb 2017 • Shahar Mendelson

In this note we answer a question of G. Lecué by showing that column normalization of a random matrix with iid entries need not lead to good sparse recovery properties, even if the generating random variable has reasonable moment growth.

1 Feb 2017 • Gábor Lugosi, Shahar Mendelson

We study the problem of estimating the mean of a random vector $X$ given a sample of $N$ independent, identically distributed points.
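A standard route to robust mean estimation under heavy tails is the median-of-means construction: split the sample into blocks, average within each block, and take a median of the block means. The sketch below implements the simple coordinatewise variant in Python; this is an illustrative baseline, not the more refined, direction-uniform estimator analysed in the paper, and the function name and block-splitting scheme are our own choices.

```python
import numpy as np

def median_of_means(X, k):
    """Coordinatewise median-of-means estimate of the mean of the rows of X.

    X : (N, d) array of iid observations; k : number of blocks.
    Splitting into k blocks and taking a median of block means makes the
    estimate insensitive to a small number of extreme observations.
    """
    N = X.shape[0]
    blocks = np.array_split(X[: (N // k) * k], k)       # k equal-size blocks
    block_means = np.array([b.mean(axis=0) for b in blocks])
    return np.median(block_means, axis=0)               # median per coordinate
```

For example, a single outlier of size 100 in a sample whose remaining entries sit near 2 leaves the estimate at 2, whereas the empirical mean would be pulled far off.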

15 Jan 2017 • Gábor Lugosi, Shahar Mendelson

A regularized risk minimization procedure for regression function estimation is introduced that achieves near optimal accuracy and confidence under general conditions, including heavy-tailed predictor and response variables.
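To fix ideas, the most familiar instance of regularized risk minimization for linear prediction is ridge regression. The sketch below is only that familiar instance, written for illustration; the procedure introduced in the paper is a different, more robust construction designed to tolerate heavy-tailed predictors and responses.

```python
import numpy as np

def ridge_erm(X, y, lam):
    """Regularized empirical risk minimization for linear prediction:
    argmin_w (1/N) * ||y - X w||^2 + lam * ||w||^2.
    Solves the ridge normal equations in closed form."""
    n, d = X.shape
    return np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)
```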

9 Apr 2015 • Shahar Mendelson

We show that if $F$ is a convex class of functions that is $L$-subgaussian, the error rate of learning problems generated by independent noise is equivalent to a fixed point determined by 'local' covering estimates of the class, rather than by the Gaussian averages.

25 Feb 2015 • Shahar Mendelson

We introduce an alternative to the notion of 'fast rate' in Learning Theory, which coincides with the optimal error rate when the given class happens to be convex and regular in some sense.

13 Oct 2014 • Shahar Mendelson

We study prediction and estimation problems using empirical risk minimization, relative to a general convex loss function.

1 Jan 2014 • Shahar Mendelson

We obtain sharp bounds on the performance of empirical risk minimization in a convex class with respect to the squared loss, without assuming that class members and the target are bounded functions or have rapidly decaying tails.
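As a concrete instance, when the convex class is the set of linear functionals on $\mathbb{R}^d$, empirical risk minimization with the squared loss reduces to ordinary least squares. A minimal sketch of that special case (the paper's results hold for general convex classes and make no boundedness or tail assumptions):

```python
import numpy as np

def erm_least_squares(X, y):
    """ERM over the convex class {x -> <w, x>} with the squared loss:
    argmin_w (1/N) * sum_i (y_i - <w, x_i>)^2.
    np.linalg.lstsq returns a minimizer of the empirical squared risk."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w
```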

Papers With Code is a free resource with all data licensed under CC-BY-SA.