no code implementations • ICML 2020 • Akshay Kamath, Eric Price, Sushrut Karmalkar
We prove lower bounds for compressed sensing from $L$-Lipschitz generative models $G$.
no code implementations • 19 Aug 2024 • Eric Price, Zhiyang Xun
(2) In the insertion-only model, a variant of Oja's algorithm gets $o(1)$ error for $R = O(\log n \log d)$.
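Oja's algorithm maintains a single vector and updates it on every inserted point, so it fits naturally in the insertion-only streaming model. A minimal sketch of the classic rule this result builds on, run on an illustrative spiked-covariance stream (the step size and data are assumptions, not the paper's variant):

```python
import numpy as np

def oja_top_eigenvector(stream, d, lr=0.01, seed=0):
    """Classic Oja update for the top eigenvector of the covariance
    of a stream of d-dimensional vectors (insertion-only model)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=d)
    w /= np.linalg.norm(w)
    for x in stream:
        w += lr * x * (x @ w)   # stochastic gradient step
        w /= np.linalg.norm(w)  # project back to the unit sphere
    return w

# Toy usage: spiked covariance I + 9*e1*e1^T, top eigenvector e1.
rng = np.random.default_rng(1)
d = 20
spike = np.eye(d)[0]
stream = (rng.normal(size=d) + 3 * rng.normal() * spike for _ in range(5000))
w = oja_top_eigenvector(stream, d)
print(abs(w @ spike))  # close to 1
```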
no code implementations • 13 Apr 2024 • Eric Price, Aamir Ahmad
Using UAVs for wildlife observation and motion capture offers manifold advantages for studying animals in the wild, especially grazing herds in open terrain.
no code implementations • 20 Feb 2024 • Shivam Gupta, Ajil Jalal, Aditya Parulekar, Eric Price, Zhiyang Xun
Diffusion models are a remarkably effective way of learning and sampling from a distribution $p(x)$.
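Sampling works by reversing a noising process using the score $\nabla_x \log p_t(x)$. The sketch below substitutes the closed-form score of a 1-D Gaussian target for the learned network, so only the reverse-SDE sampler itself is illustrated; the noise schedule and all parameters are assumptions:

```python
import numpy as np

# Target p(x) = N(mu, sigma2).  Under variance-exploding noising
# x_t = x_0 + sqrt(t) * z, the marginal is p_t = N(mu, sigma2 + t),
# so the score is available in closed form (a trained network would
# supply it in a real diffusion model).
mu, sigma2 = 2.0, 0.5

def score(x, t):
    return -(x - mu) / (sigma2 + t)

def sample(T=5.0, steps=500, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / steps
    x = rng.normal(0.0, np.sqrt(sigma2 + T))  # start near the noised marginal
    for i in range(steps):
        t = T - i * dt
        # Euler-Maruyama step of the reverse-time SDE
        x += score(x, t) * dt + np.sqrt(dt) * rng.normal()
    return x

xs = np.array([sample(seed=s) for s in range(400)])
print(xs.mean(), xs.var())  # roughly mu = 2.0 and sigma2 = 0.5
```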
no code implementations • 23 Nov 2023 • Shivam Gupta, Aditya Parulekar, Eric Price, Zhiyang Xun
Diffusion models have become the most popular approach to deep generative modeling of images, largely due to their empirical performance and reliability.
no code implementations • 1 Nov 2023 • Lucas Gretta, Eric Price
We revisit the noisy binary search model of Karp and Kleinberg, in which we have $n$ coins with unknown probabilities $p_i$ that we can flip.
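A Bayesian-style sketch of the setting: maintain a belief over the index where the monotone biases cross a threshold $\tau$, flip the median candidate coin, and reweight. The fixed margin `gap` is an illustrative assumption, not the paper's algorithm:

```python
import numpy as np

def noisy_binary_search(flip, n, tau=0.5, gap=0.1, flips=1000):
    """Locate the index where monotone coin biases p_0 <= ... <= p_{n-1}
    cross tau, from noisy flips.  Keeps a posterior over the n+1
    candidate crossing points, always flipping the median candidate."""
    w = np.ones(n + 1)
    c = np.arange(n + 1)
    for _ in range(flips):
        i = min(int(np.searchsorted(np.cumsum(w) / w.sum(), 0.5)), n - 1)
        heads = flip(i)
        # if the crossing c is at or below i, coin i is a "high" coin
        p_heads = np.where(c <= i, tau + gap, tau - gap)
        w *= p_heads if heads else 1 - p_heads
        w /= w.sum()  # renormalize to avoid underflow
    return int(np.argmax(w))

# Toy usage: biases ramp from 0.2 to 0.8, crossing tau = 0.5 near index 50.
rng = np.random.default_rng(1)
ps = np.linspace(0.2, 0.8, 100)
print(noisy_binary_search(lambda i: rng.random() < ps[i], 100))  # about 50
```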
no code implementations • NeurIPS 2023 • Eric Price, Yihan Zhou
For some hypothesis classes and input distributions, active agnostic learning needs exponentially fewer samples than passive learning; for other classes and distributions, it offers little to no improvement.
no code implementations • 28 Jun 2023 • Shivam Gupta, Jasper C. H. Lee, Eric Price
The mean of an unknown variance-$\sigma^2$ distribution $f$ can be estimated from $n$ samples with variance $\frac{\sigma^2}{n}$ and nearly corresponding subgaussian rate.
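A quick Monte Carlo check of the baseline $\sigma^2/n$ rate for the sample mean (parameters are illustrative):

```python
import numpy as np

# Empirical check that the mean of n draws from a variance-sigma^2
# distribution has variance sigma^2 / n.
rng = np.random.default_rng(0)
sigma2, n, trials = 4.0, 100, 20000
means = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n)).mean(axis=1)
print(means.var(), sigma2 / n)  # both approximately 0.04
```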
1 code implementation • 19 Feb 2023 • Eric Price, Aamir Ahmad
In this paper, we propose a new annotation method which leverages a combination of a learning-based detector (SSD) and a learning-based tracker (RE$^3$).
no code implementations • 5 Feb 2023 • Shivam Gupta, Jasper C. H. Lee, Eric Price
In location estimation, we are given $n$ samples from a known distribution $f$ shifted by an unknown translation $\lambda$, and want to estimate $\lambda$ as precisely as possible.
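The natural estimator is the MLE over the shift, maximizing $\sum_i \log f(x_i - \lambda)$. A minimal sketch with an assumed Laplace density (the paper studies how close one can get to the Fisher-information limit, not this plain MLE per se):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mle_shift(samples, log_f):
    """Maximum-likelihood estimate of the translation lambda, given
    samples from f(x - lambda) and the known log-density log_f."""
    nll = lambda lam: -np.sum(log_f(samples - lam))
    return minimize_scalar(nll).x

# Toy usage with a known Laplace density and true shift 3.0.
rng = np.random.default_rng(0)
x = rng.laplace(loc=3.0, size=1000)
log_laplace = lambda t: -np.abs(t) - np.log(2.0)
print(mle_shift(x, log_laplace))  # close to 3.0
```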
no code implementations • 29 Jun 2022 • Eric Price, Sandeep Silwal, Samson Zhou
We further show fine-grained hardness of robust regression through a reduction from the minimum-weight $k$-clique conjecture.
no code implementations • 21 Jun 2022 • Shivam Gupta, Eric Price
It is known that the optimal sample complexity to distinguish the uniform distribution on $m$ elements from any $\epsilon$-far distribution with $1-\delta$ probability is $n = \Theta\left(\frac{\sqrt{m \log (1/\delta)}}{\epsilon^2} + \frac{\log (1/\delta)}{\epsilon^2}\right)$, which is achieved by the empirical TV tester.
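A minimal sketch of the empirical TV tester: compare the empirical distribution to uniform in total variation and threshold. The threshold below is illustrative; the result above concerns the optimal sample complexity:

```python
import numpy as np

def empirical_tv_uniformity_test(samples, m, threshold):
    """Accept iff the TV distance between the empirical distribution
    and uniform on [m] is at most the threshold."""
    emp = np.bincount(samples, minlength=m) / len(samples)
    tv = 0.5 * np.abs(emp - 1.0 / m).sum()
    return tv <= threshold

# Toy usage: uniform vs. a distribution with all mass on half the domain.
rng = np.random.default_rng(0)
m, n = 100, 5000
print(empirical_tv_uniformity_test(rng.integers(0, m, n), m, 0.25))  # True
far = rng.choice(m // 2, n)  # TV distance 1/2 from uniform
print(empirical_tv_uniformity_test(far, m, 0.25))                    # False
```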
no code implementations • 6 Jun 2022 • Shivam Gupta, Jasper C. H. Lee, Eric Price, Paul Valiant
We consider 1-dimensional location estimation, where we estimate a parameter $\lambda$ from $n$ samples $\lambda + \eta_i$, with each $\eta_i$ drawn i.i.d. from a known distribution $f$.
no code implementations • 6 Mar 2022 • Alaa Maalouf, Murad Tukan, Eric Price, Daniel Kane, Dan Feldman
The goal (e.g., for anomaly detection) is to approximate the $n$ points received so far in $P$ by a single sine wave, e.g., $\min_{c\in C}\mathrm{cost}(P, c)+\lambda(c)$, where $\mathrm{cost}(P, c)=\sum_{i=1}^n \sin^2\left(\frac{2\pi}{N} p_i c\right)$, $C\subseteq [N]$ is a feasible set of solutions, and $\lambda$ is a given regularization function.
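To make the notation concrete, a brute-force sketch that evaluates $\mathrm{cost}(P, c)$ and minimizes it over a feasible set $C$ with $\lambda \equiv 0$; a coreset would replace $P$ by a small weighted subset so this search stays cheap as points stream in. The data below are illustrative:

```python
import numpy as np

def sine_cost(P, c, N):
    """cost(P, c) = sum_i sin^2((2*pi/N) * p_i * c), as in the abstract."""
    return np.sum(np.sin(2 * np.pi * P * c / N) ** 2)

# Points chosen so the sine wave with c = 5 fits them exactly:
# p * c is a multiple of N/2 whenever p is a multiple of 100.
N = 1000
P = 100 * np.arange(1, 50)
C = np.arange(1, N)                 # feasible set of solutions
best = min(C, key=lambda c: sine_cost(P, c, N))
print(best, sine_cost(P, best, N))  # a multiple of 5, near-zero cost
```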
1 code implementation • 20 Jan 2022 • Nitin Saini, Elia Bonetto, Eric Price, Aamir Ahmad, Michael J. Black
In this letter, we present a novel markerless 3D human motion capture (MoCap) system for unstructured, outdoor environments that uses a team of autonomous unmanned aerial vehicles (UAVs) with on-board RGB cameras and computation.
no code implementations • NeurIPS 2021 Workshop on Deep Learning and Inverse Problems • Ajil Jalal, Marius Arvinte, Giannis Daras, Eric Price, Alex Dimakis, Jonathan Tamir
The CSGM framework (Bora-Jalal-Price-Dimakis'17) has shown that deep generative priors can be powerful tools for solving inverse problems.
2 code implementations • NeurIPS 2021 • Ajil Jalal, Marius Arvinte, Giannis Daras, Eric Price, Alexandros G. Dimakis, Jonathan I. Tamir
The CSGM framework (Bora-Jalal-Price-Dimakis'17) has shown that deep generative priors can be powerful tools for solving inverse problems.
1 code implementation • 23 Jun 2021 • Ajil Jalal, Sushrut Karmalkar, Jessica Hoffmann, Alexandros G. Dimakis, Eric Price
This motivates the introduction of definitions that allow algorithms to be \emph{oblivious} to the relevant groupings.
1 code implementation • 21 Jun 2021 • Ajil Jalal, Sushrut Karmalkar, Alexandros G. Dimakis, Eric Price
We characterize the measurement complexity of compressed sensing of signals drawn from a known prior distribution, even when the support of the prior is the entire space (rather than, say, sparse vectors).
no code implementations • 19 May 2021 • Aditya Parulekar, Advait Parulekar, Eric Price
We consider the problem of finding an approximate solution to $\ell_1$ regression while only observing a small number of labels.
no code implementations • 3 Mar 2021 • Shuo Yang, Tongzheng Ren, Sanjay Shakkottai, Eric Price, Inderjit S. Dhillon, Sujay Sanghavi
For sufficiently large $K$, our algorithms have sublinear per-step complexity and $\tilde O(\sqrt{T})$ regret.
no code implementations • 9 Nov 2020 • Arnab Bhattacharyya, Sutanu Gayen, Eric Price, N. V. Vinodchandran
For a distribution $P$ on $\Sigma^n$ and a tree $T$ on $n$ nodes, we say $T$ is an $\varepsilon$-approximate tree for $P$ if there is a $T$-structured distribution $Q$ such that $D(P\;||\;Q)$ is at most $\varepsilon$ more than the best possible tree-structured distribution for $P$.
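The classical baseline for learning tree-structured distributions is the Chow-Liu algorithm: a maximum-weight spanning tree on pairwise mutual information. A minimal sketch for binary variables using empirical MI estimates (sample sizes and smoothing are illustrative):

```python
import numpy as np
from itertools import combinations
from scipy.sparse.csgraph import minimum_spanning_tree

def chow_liu_tree(X):
    """Chow-Liu: maximum-weight spanning tree on pairwise empirical
    mutual information, for binary variables in the columns of X."""
    n, d = X.shape
    mi = np.zeros((d, d))
    for i, j in combinations(range(d), 2):
        pij = np.histogram2d(X[:, i], X[:, j], bins=2)[0] / n + 1e-12
        pi, pj = pij.sum(1, keepdims=True), pij.sum(0, keepdims=True)
        mi[i, j] = np.sum(pij * np.log(pij / (pi * pj)))
    mst = minimum_spanning_tree(-(mi + mi.T))  # negate: SciPy minimizes
    return [(int(i), int(j)) for i, j in zip(*mst.nonzero())]

# Toy usage: samples from a Markov chain X0 -> X1 -> X2.
rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 5000)
x1 = x0 ^ (rng.random(5000) < 0.1)
x2 = x1 ^ (rng.random(5000) < 0.1)
print(chow_liu_tree(np.column_stack([x0, x1, x2])))  # edges (0,1), (1,2)
```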
no code implementations • 23 Oct 2020 • Ajil Jalal, Sushrut Karmalkar, Alex Dimakis, Eric Price
We characterize the measurement complexity of compressed sensing of signals drawn from a known prior distribution, even when the support of the prior is the entire space (rather than, say, sparse vectors).
no code implementations • 14 Sep 2020 • Ilias Diakonikolas, Themis Gouleakis, Daniel M. Kane, John Peebles, Eric Price
To illustrate the generality of our methods, we give optimal algorithms for testing collections of distributions and testing closeness with unequal sized samples.
no code implementations • NeurIPS 2019 Workshop on Deep Learning and Inverse Problems • Akshay Kamath, Sushrut Karmalkar, Eric Price
Second, we show that generative models generalize sparsity as a representation of structure.
3 code implementations • NeurIPS 2019 • Ilias Diakonikolas, Sushrut Karmalkar, Daniel Kane, Eric Price, Alistair Stewart
Specifically, we focus on the fundamental problems of robust sparse mean estimation and robust sparse PCA.
no code implementations • 15 Nov 2018 • Sébastien Bubeck, Yin Tat Lee, Eric Price, Ilya Razenshteyn
In our recent work (Bubeck, Price, Razenshteyn, arXiv:1805.10204) we argued that adversarial examples in machine learning might be due to an inherent computational hardness of the problem.
no code implementations • 21 Sep 2018 • Sushrut Karmalkar, Eric Price
We present a simple and effective algorithm for the problem of \emph{sparse robust linear regression}.
1 code implementation • 17 Jun 2018 • Dave Van Veen, Ajil Jalal, Mahdi Soltanolkotabi, Eric Price, Sriram Vishwanath, Alexandros G. Dimakis
We propose a novel method for compressed sensing recovery using untrained deep generative models.
no code implementations • 25 May 2018 • Sébastien Bubeck, Eric Price, Ilya Razenshteyn
First we prove that, for a broad set of classification tasks, the mere existence of a robust classifier implies that it can be found by a possibly exponential-time algorithm with relatively few training examples.
no code implementations • ICLR 2018 • Ashish Bora, Eric Price, Alexandros G. Dimakis
Generative models provide a way to model structure in complex distributions and have been shown to be useful for many tasks of practical interest.
no code implementations • 25 Dec 2017 • David Liau, Eric Price, Zhao Song, Ger Yang
We consider the stochastic bandit problem in the sublinear space setting, where one cannot record the win-loss record for all $K$ arms.
no code implementations • 27 Nov 2017 • Xue Chen, Eric Price
We present an approach that improves the sample complexity for a variety of curve fitting problems, including active learning for linear regression, polynomial regression, and continuous sparse Fourier transforms.
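One concrete instance of this style of sampling is leverage-score sampling for active linear regression: query labels only for rows sampled proportionally to their leverage, then solve an importance-weighted least-squares problem. A hedged sketch (the weighting scheme and sample size below are illustrative, not the paper's exact procedure):

```python
import numpy as np

def leverage_scores(X):
    """Statistical leverage score of each row of X."""
    Q = np.linalg.qr(X)[0]
    return (Q ** 2).sum(axis=1)

def active_l2_regression(X, query_label, k, seed=0):
    """Query only k labels, sampled proportionally to leverage, and
    solve the importance-weighted least-squares problem."""
    rng = np.random.default_rng(seed)
    p = leverage_scores(X)
    p /= p.sum()
    idx = rng.choice(len(X), size=k, p=p)
    w = 1.0 / np.sqrt(p[idx])  # importance weights
    y = np.array([query_label(i) for i in idx])
    return np.linalg.lstsq(w[:, None] * X[idx], w * y, rcond=None)[0]

# Toy usage: recover beta while observing only 50 of 10000 labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(10000, 5))
beta = np.arange(5.0)
noisy = lambda i: X[i] @ beta + 0.1 * rng.normal()
print(active_l2_regression(X, noisy, 50))  # close to [0, 1, 2, 3, 4]
```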
no code implementations • 10 Aug 2017 • Daniel Kane, Sushrut Karmalkar, Eric Price
We consider the problem of robust polynomial regression, where one receives samples $(x_i, y_i)$ that are usually within $\sigma$ of a polynomial $y = p(x)$, but have a $\rho$ chance of being arbitrary adversarial outliers.
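A simple baseline that already tolerates a constant fraction of outliers is fitting under $\ell_1$ loss rather than $\ell_2$; this is a sketch of the setting, not the paper's algorithm:

```python
import numpy as np
from scipy.optimize import minimize

def l1_poly_fit(x, y, degree):
    """Fit a degree-d polynomial under L1 loss, which is insensitive
    to a small fraction of arbitrary outliers."""
    V = np.vander(x, degree + 1)
    res = minimize(lambda c: np.abs(V @ c - y).sum(),
                   np.zeros(degree + 1), method="Powell")
    return res.x

# Toy usage: y = x^2 with sigma = 0.01 noise and 10% arbitrary outliers.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 300)
y = x ** 2 + 0.01 * rng.normal(size=300)
y[rng.random(300) < 0.1] = 50.0  # adversarial corruptions
print(l1_poly_fit(x, y, 2))      # close to [1, 0, 0]
```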
no code implementations • 9 Aug 2017 • Ilias Diakonikolas, Themis Gouleakis, John Peebles, Eric Price
Our new upper and lower bounds show that the optimal sample complexity of identity testing is \[ \Theta\left( \frac{1}{\epsilon^2}\left(\sqrt{n \log(1/\delta)} + \log(1/\delta) \right)\right) \] for any $n, \epsilon$, and $\delta$.
no code implementations • 30 May 2017 • Eric Price, Zhao Song, David P. Woodruff
Our main result is that, when $S$ is the subsampled randomized Fourier/Hadamard transform, the error $x' - x^*$ behaves as if it lies in a "random" direction within this bound: for any fixed direction $a\in \mathbb{R}^d$, we have with $1 - d^{-c}$ probability that \[ \langle a, x'-x^*\rangle \lesssim \frac{\|a\|_2\|x'-x^*\|_2}{d^{\frac{1}{2}-\gamma}}, \quad (1) \] where $c, \gamma > 0$ are arbitrary constants.
3 code implementations • ICML 2017 • Ashish Bora, Ajil Jalal, Eric Price, Alexandros G. Dimakis
The goal of compressed sensing is to estimate a vector from an underdetermined system of noisy linear measurements, by making use of prior knowledge on the structure of vectors in the relevant domain.
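The CSGM estimator searches the range of a generator $G$: minimize $\|A G(z) - y\|_2^2$ over the latent $z$ by gradient descent and output $G(z)$. A minimal sketch with a fixed linear "generator" standing in for a trained network (all sizes, step counts, and the toy generator are illustrative assumptions):

```python
import numpy as np

def csgm_recover(A, y, G, G_jac, z_dim, steps=500, lr=0.5, seed=0):
    """CSGM estimator: gradient descent on the latent z to minimize
    ||A G(z) - y||^2; the reconstruction is G(z)."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=z_dim)
    for _ in range(steps):
        r = A @ G(z) - y                  # residual in measurement space
        z -= lr * G_jac(z).T @ (A.T @ r)  # chain rule through G
    return G(z)

# Toy usage: a linear "generator" G(z) = Wz mapping k=5 -> n=100,
# observed through only m=20 random measurements; m << n suffices
# because the signal lives on a 5-dimensional range.
rng = np.random.default_rng(1)
k, n, m = 5, 100, 20
W = rng.normal(size=(n, k)) / np.sqrt(n)
G, G_jac = lambda z: W @ z, lambda z: W   # Jacobian of a linear map is W
x_true = G(rng.normal(size=k))
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_hat = csgm_recover(A, A @ x_true, G, G_jac, k)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small
```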
no code implementations • 11 Nov 2016 • Ilias Diakonikolas, Themis Gouleakis, John Peebles, Eric Price
We study the fundamental problems of (i) uniformity testing of a discrete distribution, and (ii) closeness testing between two discrete distributions with bounded $\ell_2$-norm.
1 code implementation • 2 Nov 2016 • Eric Price, Wojciech Zaremba, Ilya Sutskever
We find that these techniques increase the set of algorithmic problems that can be solved by the Neural GPU: we have been able to learn to perform all the arithmetic operations (and generalize to arbitrarily long numbers) when the arguments are given in the decimal representation (which, surprisingly, has not been possible before).
7 code implementations • NeurIPS 2016 • Moritz Hardt, Eric Price, Nathan Srebro
We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features.
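The equalized-odds criterion from this paper asks that true- and false-positive rates match across groups. A minimal sketch that measures the violation for a given binary predictor (the paper additionally shows how to post-process a predictor to remove it); the synthetic predictor below is illustrative:

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap between two groups in TPR (y=1) or FPR (y=0)."""
    gaps = []
    for y in (0, 1):
        rates = [y_pred[(group == g) & (y_true == y)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

# Toy usage: a predictor that fires more readily for group 1.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 10000)
y_true = rng.integers(0, 2, 10000)
score = y_true + 0.3 * group + rng.normal(0, 0.5, 10000)
y_pred = (score > 0.5).astype(int)
print(equalized_odds_gap(y_true, y_pred, group))  # noticeably > 0
```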
no code implementations • 19 Feb 2015 • Xinyang Yi, Constantine Caramanis, Eric Price
Binary embedding is a nonlinear dimension reduction methodology where high dimensional data are embedded into the Hamming cube while preserving the structure of the original space.
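The canonical construction is sign random projections: the normalized Hamming distance between embeddings concentrates around the angle between the inputs divided by $\pi$. A minimal sketch (dimensions are illustrative):

```python
import numpy as np

# Sign random projections: embed vectors into the Hamming cube so that
# normalized Hamming distance ~ angle(u, v) / pi.
rng = np.random.default_rng(0)
d, m = 256, 4096
A = rng.normal(size=(m, d))
embed = lambda x: np.sign(A @ x)

u = rng.normal(size=d); u /= np.linalg.norm(u)
v = rng.normal(size=d); v /= np.linalg.norm(v)
hamming = np.mean(embed(u) != embed(v))
angle = np.arccos(np.clip(u @ v, -1, 1)) / np.pi
print(hamming, angle)  # approximately equal
```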
no code implementations • 19 Apr 2014 • Moritz Hardt, Eric Price
Denoting by $\sigma^2$ the variance of the unknown mixture, we prove that $\Theta(\sigma^{12})$ samples are necessary and sufficient to estimate each parameter up to constant additive error when $d=1$. Our upper bound extends to arbitrary dimension $d>1$ up to a (provably necessary) logarithmic loss in $d$, using a novel yet simple dimensionality reduction technique.
no code implementations • NeurIPS 2014 • Moritz Hardt, Eric Price
The noisy power method can be seen as a meta-algorithm that has recently found a number of important applications in a broad range of machine learning problems including alternating minimization for matrix completion, streaming principal component analysis (PCA), and privacy-preserving spectral analysis.
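A minimal sketch of the noisy power method itself: block power iteration in which each matrix multiply is perturbed, followed by re-orthonormalization. The Gaussian noise model below is an illustrative assumption (the applications calibrate the noise differently, e.g. for privacy):

```python
import numpy as np

def noisy_power_method(A, k, iters=100, noise=1e-3, seed=0):
    """Block power iteration where each product A @ X is observed
    only up to additive noise; returns an orthonormal basis."""
    rng = np.random.default_rng(seed)
    X = np.linalg.qr(rng.normal(size=(A.shape[0], k)))[0]
    for _ in range(iters):
        Y = A @ X + noise * rng.normal(size=X.shape)  # noisy multiply
        X = np.linalg.qr(Y)[0]                        # re-orthonormalize
    return X

# Toy usage: recover the top-2 eigenspace of a PSD matrix despite noise.
rng = np.random.default_rng(1)
U = np.linalg.qr(rng.normal(size=(50, 50)))[0]
A = U @ np.diag([10, 9] + [1] * 48) @ U.T
X = noisy_power_method(A, 2)
s = np.linalg.svd(X.T @ U[:, :2], compute_uv=False)
print(s.min())  # close to 1: the iterates span the top eigenspace
```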