no code implementations • 6 May 2022 • Gil Kur, Eli Putterman
We present the first computationally efficient minimax optimal (up to logarithmic factors) estimators for the tasks of (i) $L$-Lipschitz convex regression and (ii) $\Gamma$-bounded convex regression under polytopal support.
no code implementations • 24 Feb 2021 • Gil Kur, Alexander Rakhlin
We study the minimal error of the Empirical Risk Minimization (ERM) procedure in the task of regression, both in the random and the fixed design settings.
1 code implementation • 7 Dec 2020 • Yuval Dagan, Gil Kur
We present an asymptotically optimal $(\epsilon,\delta)$ differentially private mechanism for answering multiple, adaptively asked, $\Delta$-sensitive queries, settling the conjecture of Steinke and Ullman [2020].
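As background for this entry, the classical Gaussian mechanism is the standard baseline for answering a $\Delta$-sensitive query with $(\epsilon,\delta)$-differential privacy; the sketch below shows that baseline only, not the asymptotically optimal mechanism of the paper, and the calibration constant follows the textbook bound.

```python
import numpy as np

def gaussian_mechanism(true_answer, sensitivity, eps, delta, rng=None):
    """Answer one Delta-sensitive query with (eps, delta)-DP Gaussian noise.

    Baseline sketch (textbook calibration), NOT the paper's optimal mechanism.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Standard calibration: sigma = Delta * sqrt(2 ln(1.25/delta)) / eps
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return true_answer + rng.normal(0.0, sigma)

# Example: a counting query (sensitivity 1) answered privately.
noisy = gaussian_mechanism(true_answer=42.0, sensitivity=1.0,
                           eps=1.0, delta=1e-5,
                           rng=np.random.default_rng(0))
```

For adaptively asked queries, each answer would consume part of the privacy budget; the composition of many such answers is exactly where more refined mechanisms improve on this baseline.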
no code implementations • 7 Jun 2020 • Gil Kur, Alexander Rakhlin, Adityanand Guntuboyina
We develop a technique for establishing lower bounds on the sample complexity of Least Squares (or, Empirical Risk Minimization) for large classes of functions.
no code implementations • 3 Jun 2020 • Gil Kur, Fuchang Gao, Adityanand Guntuboyina, Bodhisattva Sen
The least squares estimator (LSE) is shown to be suboptimal in squared error loss in the usual nonparametric regression model with Gaussian errors for $d \geq 5$ for each of the following families of functions: (i) convex functions supported on a polytope (in fixed design), (ii) bounded convex functions supported on a polytope (in random design), and (iii) convex Lipschitz functions supported on any convex domain (in random design).
no code implementations • 12 Dec 2019 • Tomaso Poggio, Gil Kur, Andrzej Banburski
In solving a system of $n$ linear equations in $d$ variables $Ax=b$, the condition number of the $n \times d$ matrix $A$ measures how much errors in the data $b$ affect the solution $x$.
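The definition above can be illustrated numerically: $\kappa(A)$ is the ratio of the largest to the smallest singular value of $A$, and a nearly singular $A$ (large $\kappa$) lets a tiny perturbation of $b$ move the solution $x$ dramatically. The matrix below is an illustrative example, not one from the paper.

```python
import numpy as np

# A nearly singular 2x2 system: the rows are almost parallel,
# so the condition number kappa(A) = sigma_max / sigma_min is large.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])

kappa = np.linalg.cond(A)        # ratio of singular values, ~4e4 here
x = np.linalg.solve(A, b)        # exact solution: [1, 1]

# Perturb b by 1e-4 in one coordinate; the solution jumps to [0, 2].
b_pert = b + np.array([0.0, 1e-4])
x_pert = np.linalg.solve(A, b_pert)
```

A relative error of about $10^{-4}$ in $b$ produced an $O(1)$ change in $x$, consistent with the amplification factor $\kappa(A) \approx 4 \times 10^4$.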
no code implementations • 13 Mar 2019 • Gil Kur, Yuval Dagan, Alexander Rakhlin
In this paper, we study two problems: (1) estimation of a $d$-dimensional log-concave distribution and (2) bounded multivariate convex regression with random design with an underlying log-concave density or a compactly supported distribution with a continuous density.
no code implementations • 9 Feb 2019 • Yuval Dagan, Gil Kur, Ohad Shamir
We show that fundamental learning tasks, such as finding an approximate linear separator or linear regression, require memory at least \emph{quadratic} in the dimension, in a natural streaming setting.