Search Results for author: Sushrut Karmalkar

Found 21 papers, 3 papers with code

Robust Sparse Estimation for Gaussians with Optimal Error under Huber Contamination

no code implementations15 Mar 2024 Ilias Diakonikolas, Daniel M. Kane, Sushrut Karmalkar, Ankit Pensia, Thanasis Pittas

Concretely, for Gaussian robust $k$-sparse mean estimation on $\mathbb{R}^d$ with corruption rate $\epsilon>0$, our algorithm has sample complexity $(k^2/\epsilon^2)\mathrm{polylog}(d/\epsilon)$, runs in time polynomial in its sample size, and approximates the target mean within $\ell_2$-error $O(\epsilon)$.
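
The $O(\epsilon)$ error guarantee under Huber contamination is the paper's contribution; as a point of contrast only, here is a minimal sketch of a naive baseline (coordinate-wise median followed by hard-thresholding to the top $k$ coordinates), not the authors' algorithm. All names and constants below are made up for illustration.

```python
import numpy as np

def naive_robust_sparse_mean(X, k):
    """Coordinate-wise median, then keep the k largest-magnitude coordinates.

    A simple baseline for robust k-sparse mean estimation; NOT the
    Diakonikolas et al. algorithm, which achieves l2-error O(eps).
    """
    med = np.median(X, axis=0)          # median is robust to a small fraction of outliers per coordinate
    est = np.zeros_like(med)
    top = np.argsort(np.abs(med))[-k:]  # indices of the k largest coordinates
    est[top] = med[top]
    return est

# Toy data: n samples in R^d with a k-sparse mean, eps-fraction grossly corrupted.
rng = np.random.default_rng(0)
n, d, k, eps = 500, 100, 5, 0.1
mu = np.zeros(d); mu[:k] = 3.0
X = rng.normal(mu, 1.0, size=(n, d))
X[: int(eps * n)] = 100.0               # adversarial rows
print(np.linalg.norm(naive_robust_sparse_mean(X, k) - mu))
```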

Multi-Model 3D Registration: Finding Multiple Moving Objects in Cluttered Point Clouds

no code implementations16 Feb 2024 David Jin, Sushrut Karmalkar, Harry Zhang, Luca Carlone

We investigate a variation of the 3D registration problem, named multi-model 3D registration.
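
The multi-model setting asks for several rigid motions at once rather than a single alignment. The following sequential-RANSAC sketch (fit one rigid transform via the Kabsch algorithm, peel off its inliers, repeat) is not the paper's method, just an illustration of the problem; tolerances and sample sizes are arbitrary assumptions.

```python
import numpy as np

def kabsch(P, Q):
    """Best rigid transform (R, t) mapping points P onto Q in least squares."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ S @ U.T
    return R, cQ - R @ cP

def sequential_ransac(P, Q, n_models=2, iters=500, tol=0.05, seed=0):
    """Peel off one rigid motion at a time from putative correspondences P[i] -> Q[i]."""
    rng = np.random.default_rng(seed)
    active = np.arange(len(P))
    models = []
    for _ in range(n_models):
        best_inl = np.array([], dtype=int)
        for _ in range(iters):
            idx = rng.choice(active, size=3, replace=False)  # minimal sample for a rigid transform
            R, t = kabsch(P[idx], Q[idx])
            resid = np.linalg.norm((P[active] @ R.T + t) - Q[active], axis=1)
            inl = active[resid < tol]
            if len(inl) > len(best_inl):
                best_inl = inl
        models.append(kabsch(P[best_inl], Q[best_inl]))       # refit on all inliers
        active = np.setdiff1d(active, best_inl)               # remove explained points
    return models

# Demo: two rigid motions mixed in one correspondence set.
rng = np.random.default_rng(0)
P = rng.normal(size=(200, 3))
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # 90 deg about z
Q = np.empty_like(P)
Q[:100] = P[:100] @ Rz.T + np.array([1.0, 0.0, 0.0])
Q[100:] = P[100:] + np.array([0.0, 2.0, 0.0])                        # pure translation
models = sequential_ransac(P, Q)
```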

Distribution-Independent Regression for Generalized Linear Models with Oblivious Corruptions

no code implementations20 Sep 2023 Ilias Diakonikolas, Sushrut Karmalkar, Jongho Park, Christos Tzamos

Our goal is to accurately recover a parameter vector $w$ such that the function $g(w \cdot x)$ has arbitrarily small error when compared to the true values $g(w^* \cdot x)$, rather than the noisy measurements $y$.

Regression
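
As a rough illustration of the recovery goal only (not the paper's algorithm), one can fit $w$ by subgradient descent on the $\ell_1$ loss $\frac{1}{n}\sum_i |y_i - g(w \cdot x_i)|$, which is insensitive to oblivious additive noise with zero median. The sigmoid link, step size, and corruption model below are all assumptions made for the sketch.

```python
import numpy as np

def fit_glm_l1(X, y, g, g_prime, lr=0.1, iters=2000):
    """Fit w to minimize mean |y_i - g(w.x_i)| by subgradient descent.

    An illustrative robust baseline, not the Diakonikolas et al. algorithm;
    the L1 loss tolerates oblivious noise whose median is zero.
    """
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        z = X @ w
        grad = -(np.sign(y - g(z)) * g_prime(z)) @ X / len(y)
        w -= lr * grad
    return w

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
dsigmoid = lambda z: sigmoid(z) * (1.0 - sigmoid(z))

rng = np.random.default_rng(1)
n, d = 2000, 5
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = sigmoid(X @ w_star)
y[: n // 4] += rng.standard_cauchy(n // 4)   # oblivious heavy-tailed corruptions
w_hat = fit_glm_l1(X, y, sigmoid, dsigmoid)
print(np.linalg.norm(w_hat - w_star))
```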

Robust Sparse Mean Estimation via Sum of Squares

no code implementations7 Jun 2022 Ilias Diakonikolas, Daniel M. Kane, Sushrut Karmalkar, Ankit Pensia, Thanasis Pittas

In this work, we develop the first efficient algorithms for robust sparse mean estimation without a priori knowledge of the covariance.

Fairness for Image Generation with Uncertain Sensitive Attributes

1 code implementation23 Jun 2021 Ajil Jalal, Sushrut Karmalkar, Jessica Hoffmann, Alexandros G. Dimakis, Eric Price

This motivates the introduction of definitions that allow algorithms to be "oblivious" to the relevant groupings.

Fairness, Image Generation, +3 more

Instance-Optimal Compressed Sensing via Posterior Sampling

1 code implementation21 Jun 2021 Ajil Jalal, Sushrut Karmalkar, Alexandros G. Dimakis, Eric Price

We characterize the measurement complexity of compressed sensing of signals drawn from a known prior distribution, even when the support of the prior is the entire space (rather than, say, sparse vectors).
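
For a Gaussian prior the posterior is available in closed form, which makes the posterior-sampling recovery rule easy to illustrate. This toy stand-in (dimensions, prior covariance, and noise level are arbitrary) is not the paper's general algorithm, which handles priors with full support such as deep generative models.

```python
import numpy as np

# Posterior sampling for compressed sensing under a Gaussian prior,
# where x | y is Gaussian and can be sampled exactly.
rng = np.random.default_rng(0)
d, m, sigma = 50, 20, 0.1                        # ambient dim, measurements, noise level
Sigma = np.diag(1.0 / (1 + np.arange(d)))        # prior covariance: full support, not sparse
x = rng.multivariate_normal(np.zeros(d), Sigma)  # signal drawn from the prior
A = rng.normal(size=(m, d)) / np.sqrt(m)         # Gaussian measurement matrix
y = A @ x + sigma * rng.normal(size=m)

# Posterior N(mu_post, S_post) for x | y:
S_post = np.linalg.inv(np.linalg.inv(Sigma) + A.T @ A / sigma**2)
mu_post = S_post @ A.T @ y / sigma**2
sample = rng.multivariate_normal(mu_post, S_post)  # one posterior sample = the estimate
print(np.linalg.norm(sample - x), np.linalg.norm(x))
```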

Compressed Sensing with Approximate Priors via Conditional Resampling

no code implementations23 Oct 2020 Ajil Jalal, Sushrut Karmalkar, Alex Dimakis, Eric Price

We characterize the measurement complexity of compressed sensing of signals drawn from a known prior distribution, even when the support of the prior is the entire space (rather than, say, sparse vectors).

The Polynomial Method is Universal for Distribution-Free Correlational SQ Learning

no code implementations22 Oct 2020 Aravind Gollakota, Sushrut Karmalkar, Adam Klivans

Generalizing a beautiful work of Malach and Shalev-Shwartz (2022) that gave tight correlational SQ (CSQ) lower bounds for learning DNF formulas, we give new proofs that lower bounds on the threshold or approximate degree of any function class directly imply CSQ lower bounds for PAC or agnostic learning respectively.

Approximation Schemes for ReLU Regression

no code implementations26 May 2020 Ilias Diakonikolas, Surbhi Goel, Sushrut Karmalkar, Adam R. Klivans, Mahdi Soltanolkotabi

We consider the fundamental problem of ReLU regression, where the goal is to output the best fitting ReLU with respect to square loss given access to draws from some unknown distribution.

Regression
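
A plain gradient-descent baseline on the square loss conveys the problem setup, though it is not the approximation scheme the paper analyzes; the initialization, step size, and data distribution below are arbitrary choices.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

def relu_regression_gd(X, y, lr=0.05, iters=3000, seed=0):
    """Minimize the square loss mean (relu(w.x) - y)^2 by gradient descent.

    A first-order baseline for illustration, not the approximation
    scheme from the paper.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1]) / np.sqrt(X.shape[1])
    for _ in range(iters):
        z = X @ w
        grad = ((relu(z) - y) * (z > 0)) @ X / len(y)  # relu'(z) = 1{z > 0}
        w -= lr * grad
    return w

rng = np.random.default_rng(2)
n, d = 2000, 10
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d); w_star /= np.linalg.norm(w_star)
y = relu(X @ w_star) + 0.05 * rng.normal(size=n)
print(np.linalg.norm(relu_regression_gd(X, y) - w_star))
```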

Robustly Learning any Clusterable Mixture of Gaussians

no code implementations13 May 2020 Ilias Diakonikolas, Samuel B. Hopkins, Daniel Kane, Sushrut Karmalkar

The key ingredients of this proof are a novel use of SoS-certifiable anti-concentration and a new characterization of pairs of Gaussians with small (dimension-independent) overlap in terms of their parameter distance.

Clustering

List-Decodable Linear Regression

no code implementations NeurIPS 2019 Sushrut Karmalkar, Adam R. Klivans, Pravesh K. Kothari

To complement our result, we prove that the anti-concentration assumption on the inliers is information-theoretically necessary.

Regression

Compressed Sensing with Adversarial Sparse Noise via L1 Regression

no code implementations21 Sep 2018 Sushrut Karmalkar, Eric Price

We present a simple and effective algorithm for the problem of "sparse robust linear regression".

Regression
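
The title's estimator, $\ell_1$ regression, can be sketched directly: solve $\min_w \|y - Xw\|_1$ as a linear program. The encoding below uses SciPy's linprog; the problem sizes and outlier magnitudes are illustrative, and whether this matches the paper's exact recovery conditions depends on its assumptions on $X$.

```python
import numpy as np
from scipy.optimize import linprog

def l1_regression(X, y):
    """Solve min_w ||y - Xw||_1 as a linear program."""
    n, d = X.shape
    # Variables v = [w (free), r >= 0]; minimize sum(r) s.t. -r <= y - Xw <= r.
    c = np.concatenate([np.zeros(d), np.ones(n)])
    A_ub = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * d + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:d]

rng = np.random.default_rng(3)
n, d = 200, 10
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star
y[rng.choice(n, size=n // 5, replace=False)] += 50.0  # sparse adversarial noise
print(np.linalg.norm(l1_regression(X, y) - w_star))    # near-exact recovery
```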

Depth separation and weight-width trade-offs for sigmoidal neural networks

no code implementations ICLR 2018 Amit Deshpande, Navin Goyal, Sushrut Karmalkar

We show a similar separation between the expressive power of depth-2 and depth-3 sigmoidal neural networks over a large class of input distributions, as long as the weights are polynomially bounded.

Robust polynomial regression up to the information theoretic limit

no code implementations10 Aug 2017 Daniel Kane, Sushrut Karmalkar, Eric Price

We consider the problem of robust polynomial regression, where one receives samples $(x_i, y_i)$ that are usually within $\sigma$ of a polynomial $y = p(x)$, but have a $\rho$ chance of being arbitrary adversarial outliers.

Regression
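
A hedged sketch of this setup, not the Kane-Karmalkar-Price algorithm (which attains the information-theoretic error limit): generate samples with $\sigma$-bounded noise and a $\rho$ fraction of arbitrary outliers, then fit with an off-the-shelf robustified loss.

```python
import numpy as np
from scipy.optimize import least_squares

def robust_poly_fit(x, y, degree, f_scale=1.0):
    """Fit a degree-`degree` polynomial with a robustified (soft-L1) loss.

    An off-the-shelf baseline, not the algorithm from the paper.
    """
    V = np.vander(x, degree + 1)  # monomial basis, highest degree first
    res = least_squares(lambda c: V @ c - y, x0=np.zeros(degree + 1),
                        loss="soft_l1", f_scale=f_scale)
    return res.x

rng = np.random.default_rng(4)
sigma, rho, n = 0.1, 0.2, 400
x = rng.uniform(-1, 1, n)
p = np.array([1.0, -2.0, 0.5])                  # true quadratic coefficients
y = np.polyval(p, x) + sigma * rng.normal(size=n)
outliers = rng.random(n) < rho                  # each sample is an outlier w.p. rho
y[outliers] = rng.uniform(-10, 10, outliers.sum())
print(robust_poly_fit(x, y, degree=2, f_scale=sigma))  # compare to p
```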
