Search Results for author: Eric Price

Found 38 papers, 10 papers with code

Finite-Sample Symmetric Mean Estimation with Fisher Information Rate

no code implementations 28 Jun 2023 Shivam Gupta, Jasper C. H. Lee, Eric Price

The mean of an unknown variance-$\sigma^2$ distribution $f$ can be estimated from $n$ samples with variance $\frac{\sigma^2}{n}$ and nearly corresponding subgaussian rate.

Accelerated Video Annotation driven by Deep Detector and Tracker

1 code implementation 19 Feb 2023 Eric Price, Aamir Ahmad

In this paper, we propose a new annotation method which leverages a combination of a learning-based detector (SSD) and a learning-based tracker (RE$^3$).

High-dimensional Location Estimation via Norm Concentration for Subgamma Vectors

no code implementations 5 Feb 2023 Shivam Gupta, Jasper C. H. Lee, Eric Price

In location estimation, we are given $n$ samples from a known distribution $f$ shifted by an unknown translation $\lambda$, and want to estimate $\lambda$ as precisely as possible.


Hardness and Algorithms for Robust and Sparse Optimization

no code implementations 29 Jun 2022 Eric Price, Sandeep Silwal, Samson Zhou

We further show fine-grained hardness of robust regression through a reduction from the minimum-weight $k$-clique conjecture.


Sharp Constants in Uniformity Testing via the Huber Statistic

no code implementations 21 Jun 2022 Shivam Gupta, Eric Price

It is known that the optimal sample complexity to distinguish the uniform distribution on $m$ elements from any $\epsilon$-far distribution with $1-\delta$ probability is $n = \Theta\left(\frac{\sqrt{m \log (1/\delta)}}{\epsilon^2} + \frac{\log (1/\delta)}{\epsilon^2}\right)$, which is achieved by the empirical TV tester.
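As an illustration of the statistic this entry refers to, the sketch below computes the empirical total-variation distance between the sample histogram and the uniform distribution on $m$ elements. The function name, threshold-free form, and test data are illustrative; the paper's contribution (sharp constants for such tests via the Huber statistic) is not reproduced here.

```python
import numpy as np

def empirical_tv_uniformity_stat(samples, m):
    """TV distance between the empirical distribution on {0,...,m-1}
    and uniform.  Large values suggest the source is far from uniform."""
    counts = np.bincount(samples, minlength=m)
    empirical = counts / len(samples)
    return 0.5 * np.abs(empirical - 1.0 / m).sum()

rng = np.random.default_rng(0)
m = 100
# Truly uniform samples: the statistic stays small (sampling noise only).
uniform_stat = empirical_tv_uniformity_stat(rng.integers(0, m, size=5000), m)
# A distribution supported on half the elements is 1/2-far from uniform.
far_stat = empirical_tv_uniformity_stat(rng.integers(0, m // 2, size=5000), m)
```

Thresholding such a statistic yields a uniformity tester; the constants that make the threshold sharp are what the paper pins down.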

Finite-Sample Maximum Likelihood Estimation of Location

no code implementations 6 Jun 2022 Shivam Gupta, Jasper C. H. Lee, Eric Price, Paul Valiant

We consider 1-dimensional location estimation, where we estimate a parameter $\lambda$ from $n$ samples $\lambda + \eta_i$, with each $\eta_i$ drawn i.i.d. from a known distribution $f$.
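A minimal sketch of maximum-likelihood location estimation in this setting, assuming a known Laplace noise density and a grid search over candidate shifts (both choices are illustrative, not the paper's finite-sample procedure):

```python
import numpy as np

def mle_location(samples, log_density, grid):
    """Return the grid point maximizing sum_i log f(x_i - lambda),
    i.e. a brute-force maximum-likelihood estimate of the shift."""
    lls = [np.sum(log_density(samples - lam)) for lam in grid]
    return grid[int(np.argmax(lls))]

rng = np.random.default_rng(1)
true_lambda = 2.0
# Laplace noise: log f(eta) = -|eta| - log 2, so the MLE tracks the median.
samples = true_lambda + rng.laplace(size=2000)
grid = np.linspace(-5, 5, 1001)
est = mle_location(samples, lambda e: -np.abs(e) - np.log(2), grid)
```

For Laplace noise the MLE is the sample median, whose variance approaches the inverse Fisher information as $n$ grows, the regime the paper quantifies at finite $n$.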

Coresets for Data Discretization and Sine Wave Fitting

no code implementations 6 Mar 2022 Alaa Maalouf, Murad Tukan, Eric Price, Daniel Kane, Dan Feldman

The goal (e.g., for anomaly detection) is to approximate the $n$ points received so far in $P$ by a single frequency $\sin$, e.g. $\min_{c\in C}cost(P, c)+\lambda(c)$, where $cost(P, c)=\sum_{i=1}^n \sin^2(\frac{2\pi}{N} p_i c)$, $C\subseteq [N]$ is a feasible set of solutions, and $\lambda$ is a given regularization function.

Anomaly Detection
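The objective above can be evaluated directly. The brute-force scan over $C = [N]$ below (regularizer omitted, names illustrative) is the $O(nN)$ computation that the paper's coresets are designed to compress:

```python
import numpy as np

def sine_cost(P, c, N):
    """cost(P, c) = sum_i sin^2(2*pi*p_i*c / N), the paper's objective."""
    return np.sum(np.sin(2 * np.pi * np.asarray(P) * c / N) ** 2)

def best_frequency(P, N):
    """Brute-force argmin over the feasible set C = [N] (no regularizer)."""
    costs = [sine_cost(P, c, N) for c in range(1, N + 1)]
    return 1 + int(np.argmin(costs))

N = 60
# Points that are all multiples of 12 are fit exactly by c = 5 = N / 12,
# since then 2*pi*p*c/N is an integer multiple of 2*pi for every p.
P = [12, 24, 36, 48]
c_star = best_frequency(P, N)
```

A coreset replaces $P$ with a much smaller weighted set on which the same scan (or any downstream solver) gives provably similar costs for every $c$.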

AirPose: Multi-View Fusion Network for Aerial 3D Human Pose and Shape Estimation

1 code implementation 20 Jan 2022 Nitin Saini, Elia Bonetto, Eric Price, Aamir Ahmad, Michael J. Black

In this letter, we present a novel markerless 3D human motion capture (MoCap) system for unstructured, outdoor environments that uses a team of autonomous unmanned aerial vehicles (UAVs) with on-board RGB cameras and computation.

3D Human Pose and Shape Estimation

Robust Compressed Sensing MR Imaging with Deep Generative Priors

no code implementations NeurIPS Workshop Deep_Invers 2021 Ajil Jalal, Marius Arvinte, Giannis Daras, Eric Price, Alex Dimakis, Jonathan Tamir

The CSGM framework (Bora-Jalal-Price-Dimakis'17) has shown that deep generative priors can be powerful tools for solving inverse problems.

Robust Compressed Sensing MRI with Deep Generative Priors

2 code implementations NeurIPS 2021 Ajil Jalal, Marius Arvinte, Giannis Daras, Eric Price, Alexandros G. Dimakis, Jonathan I. Tamir

The CSGM framework (Bora-Jalal-Price-Dimakis'17) has shown that deep generative priors can be powerful tools for solving inverse problems.

Fairness for Image Generation with Uncertain Sensitive Attributes

1 code implementation 23 Jun 2021 Ajil Jalal, Sushrut Karmalkar, Jessica Hoffmann, Alexandros G. Dimakis, Eric Price

This motivates the introduction of definitions that allow algorithms to be \emph{oblivious} to the relevant groupings.

Fairness, Image Generation +2

Instance-Optimal Compressed Sensing via Posterior Sampling

1 code implementation 21 Jun 2021 Ajil Jalal, Sushrut Karmalkar, Alexandros G. Dimakis, Eric Price

We characterize the measurement complexity of compressed sensing of signals drawn from a known prior distribution, even when the support of the prior is the entire space (rather than, say, sparse vectors).

L1 Regression with Lewis Weights Subsampling

no code implementations 19 May 2021 Aditya Parulekar, Advait Parulekar, Eric Price

We consider the problem of finding an approximate solution to $\ell_1$ regression while only observing a small number of labels.


Linear Bandit Algorithms with Sublinear Time Complexity

no code implementations 3 Mar 2021 Shuo Yang, Tongzheng Ren, Sanjay Shakkottai, Eric Price, Inderjit S. Dhillon, Sujay Sanghavi

For sufficiently large $K$, our algorithms have sublinear per-step complexity and $\tilde O(\sqrt{T})$ regret.

Movie Recommendation

Near-Optimal Learning of Tree-Structured Distributions by Chow-Liu

no code implementations 9 Nov 2020 Arnab Bhattacharyya, Sutanu Gayen, Eric Price, N. V. Vinodchandran

For a distribution $P$ on $\Sigma^n$ and a tree $T$ on $n$ nodes, we say $T$ is an $\varepsilon$-approximate tree for $P$ if there is a $T$-structured distribution $Q$ such that $D(P\;||\;Q)$ is at most $\varepsilon$ more than the best possible tree-structured distribution for $P$.
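The classical Chow-Liu algorithm this paper analyzes builds a maximum spanning tree under pairwise empirical mutual information. A small plug-in sketch (binary variables, Kruskal's algorithm on negated weights; not the paper's finite-sample analysis):

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    """Plug-in mutual information (in nats) between two discrete samples."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

def chow_liu_tree(data):
    """Maximum spanning tree under pairwise empirical mutual information."""
    n_vars = data.shape[1]
    edges = sorted(
        ((mutual_information(data[:, i], data[:, j]), i, j)
         for i, j in combinations(range(n_vars), 2)),
        reverse=True)
    parent = list(range(n_vars))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    tree = []
    for _, i, j in edges:          # greedily add heaviest non-cycle edges
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

rng = np.random.default_rng(2)
# Markov chain X0 -> X1 -> X2 with 10% flip noise: true tree is path 0-1-2.
x0 = rng.integers(0, 2, size=4000)
x1 = (x0 + (rng.random(4000) < 0.1)) % 2
x2 = (x1 + (rng.random(4000) < 0.1)) % 2
tree = chow_liu_tree(np.column_stack([x0, x1, x2]))
```

Here the direct edges carry more empirical mutual information than the 0-2 shortcut, so Kruskal recovers the path; the paper bounds how many samples suffice for such recovery in KL divergence.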

Compressed Sensing with Approximate Priors via Conditional Resampling

no code implementations 23 Oct 2020 Ajil Jalal, Sushrut Karmalkar, Alex Dimakis, Eric Price

We characterize the measurement complexity of compressed sensing of signals drawn from a known prior distribution, even when the support of the prior is the entire space (rather than, say, sparse vectors).

Optimal Testing of Discrete Distributions with High Probability

no code implementations 14 Sep 2020 Ilias Diakonikolas, Themis Gouleakis, Daniel M. Kane, John Peebles, Eric Price

To illustrate the generality of our methods, we give optimal algorithms for testing collections of distributions and testing closeness with unequal sized samples.


Adversarial Examples from Cryptographic Pseudo-Random Generators

no code implementations 15 Nov 2018 Sébastien Bubeck, Yin Tat Lee, Eric Price, Ilya Razenshteyn

In our recent work (Bubeck, Price, Razenshteyn, arXiv:1805.10204) we argued that adversarial examples in machine learning might be due to an inherent computational hardness of the problem.

Binary Classification, General Classification

Compressed Sensing with Adversarial Sparse Noise via L1 Regression

no code implementations 21 Sep 2018 Sushrut Karmalkar, Eric Price

We present a simple and effective algorithm for the problem of \emph{sparse robust linear regression}.
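A standard way to realize $\ell_1$-based robust regression is as a linear program; the sketch below is a generic formulation (not necessarily the paper's exact algorithm) that minimizes $\|Xw - y\|_1$ and recovers the true coefficients despite a 10% fraction of large outliers in the responses:

```python
import numpy as np
from scipy.optimize import linprog

def l1_regression(X, y):
    """Minimize ||Xw - y||_1 as an LP over variables [w, t]:
    minimize sum(t) subject to -t <= Xw - y <= t, t >= 0."""
    n, d = X.shape
    c = np.concatenate([np.zeros(d), np.ones(n)])
    A_ub = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * d + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:d]

rng = np.random.default_rng(3)
n, d = 200, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
y[:20] += 50.0                    # corrupt 10% of responses adversarially
w_hat = l1_regression(X, y)
```

Because the clean equations are satisfiable exactly and outliers are a small minority, the $\ell_1$ objective is minimized at the true coefficient vector.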


Compressed Sensing with Deep Image Prior and Learned Regularization

1 code implementation 17 Jun 2018 Dave Van Veen, Ajil Jalal, Mahdi Soltanolkotabi, Eric Price, Sriram Vishwanath, Alexandros G. Dimakis

We propose a novel method for compressed sensing recovery using untrained deep generative models.

Adversarial examples from computational constraints

no code implementations 25 May 2018 Sébastien Bubeck, Eric Price, Ilya Razenshteyn

First we prove that, for a broad set of classification tasks, the mere existence of a robust classifier implies that it can be found by a possibly exponential-time algorithm with relatively few training examples.

Binary Classification, Classification +1

AmbientGAN: Generative models from lossy measurements

no code implementations ICLR 2018 Ashish Bora, Eric Price, Alexandros G. Dimakis

Generative models provide a way to model structure in complex distributions and have been shown to be useful for many tasks of practical interest.

Stochastic Multi-armed Bandits in Constant Space

no code implementations 25 Dec 2017 David Liau, Eric Price, Zhao Song, Ger Yang

We consider the stochastic bandit problem in the sublinear space setting, where one cannot record the win-loss record for all $K$ arms.

Multi-Armed Bandits

Active Regression via Linear-Sample Sparsification

no code implementations 27 Nov 2017 Xue Chen, Eric Price

We present an approach that improves the sample complexity for a variety of curve fitting problems, including active learning for linear regression, polynomial regression, and continuous sparse Fourier transforms.

Active Learning, Regression

Robust polynomial regression up to the information theoretic limit

no code implementations 10 Aug 2017 Daniel Kane, Sushrut Karmalkar, Eric Price

We consider the problem of robust polynomial regression, where one receives samples $(x_i, y_i)$ that are usually within $\sigma$ of a polynomial $y = p(x)$, but have a $\rho$ chance of being arbitrary adversarial outliers.


Optimal Identity Testing with High Probability

no code implementations 9 Aug 2017 Ilias Diakonikolas, Themis Gouleakis, John Peebles, Eric Price

Our new upper and lower bounds show that the optimal sample complexity of identity testing is \[ \Theta\left( \frac{1}{\varepsilon^2}\left(\sqrt{n \log(1/\delta)} + \log(1/\delta) \right)\right) \] for any $n, \varepsilon$, and $\delta$.


Fast Regression with an $\ell_\infty$ Guarantee

no code implementations 30 May 2017 Eric Price, Zhao Song, David P. Woodruff

Our main result is that, when $S$ is the subsampled randomized Fourier/Hadamard transform, the error $x' - x^*$ behaves as if it lies in a "random" direction within this bound: for any fixed direction $a\in \mathbb{R}^d$, we have with $1 - d^{-c}$ probability that \[ \langle a, x'-x^*\rangle \lesssim \frac{\|a\|_2\|x'-x^*\|_2}{d^{\frac{1}{2}-\gamma}}, \quad (1) \] where $c, \gamma > 0$ are arbitrary constants.


Compressed Sensing using Generative Models

3 code implementations ICML 2017 Ashish Bora, Ajil Jalal, Eric Price, Alexandros G. Dimakis

The goal of compressed sensing is to estimate a vector from an underdetermined system of noisy linear measurements, by making use of prior knowledge on the structure of vectors in the relevant domain.
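The CSGM recovery idea can be sketched with a toy linear map standing in for the trained generator: minimize $\|A G(z) - y\|^2$ over the latent code $z$ by gradient descent. The dimensions, step size, and linear generator below are illustrative simplifications of the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, k = 100, 30, 5              # signal dim, measurements, latent dim

# Toy "generator": a fixed linear map G(z) = B z standing in for a
# trained deep generative model.
B = rng.normal(size=(n, k))
A = rng.normal(size=(m, n)) / np.sqrt(m)   # Gaussian measurement matrix

z_true = rng.normal(size=k)
x_true = B @ z_true
y = A @ x_true                    # noiseless measurements, m << n

# CSGM-style recovery: gradient descent on z for ||A G(z) - y||^2.
z = np.zeros(k)
for _ in range(1500):
    grad = 2 * B.T @ (A.T @ (A @ (B @ z) - y))
    z -= 1e-3 * grad
x_hat = B @ z
```

Even with far fewer measurements than the ambient dimension, the signal is recovered because it lies in the low-dimensional range of the generator, the structural assumption CSGM exploits in place of sparsity.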

Collision-based Testers are Optimal for Uniformity and Closeness

no code implementations 11 Nov 2016 Ilias Diakonikolas, Themis Gouleakis, John Peebles, Eric Price

We study the fundamental problems of (i) uniformity testing of a discrete distribution, and (ii) closeness testing between two discrete distributions with bounded $\ell_2$-norm.

Extensions and Limitations of the Neural GPU

1 code implementation 2 Nov 2016 Eric Price, Wojciech Zaremba, Ilya Sutskever

We find that these techniques increase the set of algorithmic problems that can be solved by the Neural GPU: we have been able to learn to perform all the arithmetic operations (and generalize to arbitrarily long numbers) when the arguments are given in the decimal representation (which, surprisingly, has not been possible before).

Equality of Opportunity in Supervised Learning

7 code implementations NeurIPS 2016 Moritz Hardt, Eric Price, Nathan Srebro

We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features.

General Classification
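The paper's "equality of opportunity" criterion asks that true positive rates match across groups defined by the sensitive attribute. A minimal sketch with made-up labels and predictions (the data and function name are illustrative):

```python
import numpy as np

def true_positive_rates(y_true, y_pred, group):
    """Per-group TPR: P(Y_hat = 1 | Y = 1, A = a).  Equality of
    opportunity asks these rates to be equal across groups a."""
    return {a: np.mean(y_pred[(group == a) & (y_true == 1)])
            for a in np.unique(group)}

y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0])
y_pred = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

tprs = true_positive_rates(y_true, y_pred, group)
gap = abs(tprs[0] - tprs[1])      # nonzero gap: criterion is violated
```

The paper shows how to post-process a given predictor (e.g., by group-dependent thresholds on its scores) to close such gaps without retraining.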

Binary Embedding: Fundamental Limits and Fast Algorithm

no code implementations 19 Feb 2015 Xinyang Yi, Constantine Caramanis, Eric Price

Binary embedding is a nonlinear dimension reduction methodology where high dimensional data are embedded into the Hamming cube while preserving the structure of the original space.

Data Structures and Algorithms, Information Theory

Tight bounds for learning a mixture of two gaussians

no code implementations 19 Apr 2014 Moritz Hardt, Eric Price

Denoting by $\sigma^2$ the variance of the unknown mixture, we prove that $\Theta(\sigma^{12})$ samples are necessary and sufficient to estimate each parameter up to constant additive error when $d=1.$ Our upper bound extends to arbitrary dimension $d>1$ up to a (provably necessary) logarithmic loss in $d$ using a novel---yet simple---dimensionality reduction technique.

Dimensionality Reduction

The Noisy Power Method: A Meta Algorithm with Applications

no code implementations NeurIPS 2014 Moritz Hardt, Eric Price

The noisy power method can be seen as a meta-algorithm that has recently found a number of important applications in a broad range of machine learning problems including alternating minimization for matrix completion, streaming principal component analysis (PCA), and privacy-preserving spectral analysis.

Matrix Completion Privacy Preserving
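The meta-algorithm itself is compact: ordinary power iteration in which each matrix-vector product is perturbed by noise. A minimal sketch, with an illustrative diagonal matrix and Gaussian noise standing in for the application-specific perturbations (privacy noise, streaming error, etc.):

```python
import numpy as np

def noisy_power_method(A, iters, noise_scale, rng):
    """Power iteration where each product A @ v is perturbed by noise,
    as in the noisy power method meta-algorithm."""
    v = rng.normal(size=A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = A @ v + noise_scale * rng.normal(size=A.shape[0])
        v /= np.linalg.norm(v)
    return v

rng = np.random.default_rng(5)
# Diagonal matrix with a clear spectral gap: top eigenvector is e_0.
A = np.diag([10.0, 1.0, 0.5, 0.1])
v = noisy_power_method(A, iters=100, noise_scale=0.01, rng=rng)
alignment = abs(v[0])             # |<v, e_0>|, near 1 on success
```

The paper's analysis makes this quantitative: convergence to the top eigenspace survives as long as the per-step noise is small relative to the spectral gap.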
