Search Results for author: Shashank Singh

Found 23 papers, 6 papers with code

Spuriosity Didn't Kill the Classifier: Using Invariant Predictions to Harness Spurious Features

no code implementations 19 Jul 2023 Cian Eastwood, Shashank Singh, Andrei Liviu Nicolicioiu, Marin Vlastelica, Julius von Kügelgen, Bernhard Schölkopf

To avoid failures on out-of-distribution data, recent works have sought to extract features that have an invariant or stable relationship with the label across domains, discarding "spurious" or unstable features whose relationship with the label changes across domains.

Decoding Attention from Gaze: A Benchmark Dataset and End-to-End Models

1 code implementation 20 Nov 2022 Karan Uppal, Jaeah Kim, Shashank Singh

Eye-tracking has potential to provide rich behavioral data about human cognition in ecologically valid environments.


Probable Domain Generalization via Quantile Risk Minimization

2 code implementations 20 Jul 2022 Cian Eastwood, Alexander Robey, Shashank Singh, Julius von Kügelgen, Hamed Hassani, George J. Pappas, Bernhard Schölkopf

By minimizing the $\alpha$-quantile of a predictor's risk distribution over domains, QRM seeks predictors that perform well with probability $\alpha$.

Domain Generalization
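
To make the quantile objective described in the entry above concrete, here is a minimal sketch that aggregates one risk per training domain and minimizes the empirical $\alpha$-quantile. The names `model`, `loss_fn`, and `domain_batches` are hypothetical placeholders, and the published method smooths the risk distribution before taking a quantile, so this is an illustration of the idea rather than the paper's exact estimator.

```python
# Minimal sketch of quantile risk minimization: compute one risk per training domain,
# then minimize the empirical alpha-quantile of those risks instead of their average.
# `model`, `loss_fn`, and `domain_batches` are hypothetical placeholders.
import torch

def qrm_objective(per_domain_risks: torch.Tensor, alpha: float = 0.9) -> torch.Tensor:
    """alpha-quantile of the empirical risk distribution over training domains."""
    return torch.quantile(per_domain_risks, alpha)

def training_step(model, domain_batches, loss_fn, alpha: float = 0.9) -> torch.Tensor:
    # One risk estimate per domain; keeping them in the graph lets gradients flow
    # through the quantile back to the model parameters.
    risks = torch.stack([loss_fn(model(x), y) for x, y in domain_batches])
    return qrm_objective(risks, alpha)
```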

Indirect Active Learning

no code implementations 3 Jun 2022 Shashank Singh

Traditional models of active learning assume a learner can directly manipulate or query a covariate $X$ in order to study its relationship with a response $Y$.

Active Learning
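
For contrast with the indirect setting this paper studies, below is a minimal sketch of the traditional direct-query model of active learning mentioned in the abstract: pool-based uncertainty sampling with a logistic regression. The dataset, model, and query budget are all illustrative.

```python
# Sketch of the traditional active-learning model referenced above, in which the
# learner directly chooses which covariate values X to query. Names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(500, 2))
y_pool = (X_pool[:, 0] + 0.5 * X_pool[:, 1] > 0).astype(int)   # synthetic labels

labeled = [int(np.argmax(y_pool)), int(np.argmin(y_pool))]     # seed with one point per class
for _ in range(20):                                            # query budget
    model = LogisticRegression().fit(X_pool[labeled], y_pool[labeled])
    proba = model.predict_proba(X_pool)[:, 1]
    uncertainty = -np.abs(proba - 0.5)           # points closest to the decision boundary
    uncertainty[labeled] = -np.inf               # never re-query a labeled point
    labeled.append(int(np.argmax(uncertainty)))  # directly query this X and observe its Y
```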

Optimal Binary Classification Beyond Accuracy

1 code implementation 5 Jul 2021 Shashank Singh, Justin Khim

The vast majority of statistical theory on binary classification characterizes performance in terms of accuracy.

Binary Classification • Classification +1
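
As a reminder of why metrics other than accuracy matter in the setting above, here is a small sketch comparing accuracy with precision, recall, F1, and balanced accuracy on an imbalanced toy problem. These are standard examples of non-accuracy metrics, not necessarily the specific performance measures the paper analyzes.

```python
# Accuracy versus other binary-classification metrics on an imbalanced toy example.
import numpy as np

def metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    balanced_acc = 0.5 * (recall + tn / max(tn + fp, 1))
    return dict(accuracy=accuracy, precision=precision,
                recall=recall, f1=f1, balanced_accuracy=balanced_acc)

# Predicting the majority class everywhere gives high accuracy but zero recall and F1,
# which is why accuracy alone is a poor summary under class imbalance.
y_true = np.array([0] * 95 + [1] * 5)
y_pred = np.zeros(100, dtype=int)
print(metrics(y_true, y_pred))
```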

Continuum-Armed Bandits: A Function Space Perspective

no code implementations 15 Oct 2020 Shashank Singh

In both noiseless and noisy conditions, we derive minimax rates under simple and cumulative regrets.
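
To make the two regret notions above concrete, here is a toy continuum-armed bandit on $[0, 1]$ evaluated with a naive uniform-sampling baseline. The payoff function, noise level, and horizon are illustrative, and this baseline is not the algorithm analyzed in the paper.

```python
# Simple versus cumulative regret for a continuum-armed bandit on [0, 1],
# using uniform random sampling as a naive baseline.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.exp(-20 * (x - 0.3) ** 2)      # unknown mean-payoff function
f_star = 1.0                                     # max of f, attained at x = 0.3
T, noise = 1000, 0.1

xs = rng.uniform(0, 1, size=T)                   # actions played
ys = f(xs) + noise * rng.normal(size=T)          # noisy rewards observed

cumulative_regret = np.sum(f_star - f(xs))       # sum of per-round gaps in true means
x_hat = xs[np.argmax(ys)]                        # recommendation after T rounds
simple_regret = f_star - f(x_hat)                # gap of the recommended action only
print(cumulative_regret, simple_regret)
```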

Robust Density Estimation under Besov IPM Losses

no code implementations NeurIPS 2020 Ananya Uppal, Shashank Singh, Barnabás Póczos

We study minimax convergence rates of nonparametric density estimation in the Huber contamination model, in which a proportion of the data comes from an unknown outlier distribution.

Density Estimation
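
A quick sketch of the Huber contamination model described above: each observation is drawn from the target distribution with probability $1 - \epsilon$ and from an arbitrary outlier distribution with probability $\epsilon$. The specific distributions below are illustrative.

```python
# Sampling from the Huber contamination model (1 - eps) * P + eps * Q.
import numpy as np

rng = np.random.default_rng(0)
n, eps = 2000, 0.1
from_outlier = rng.random(n) < eps
clean = rng.normal(loc=0.0, scale=1.0, size=n)       # target distribution P
outliers = rng.uniform(low=5.0, high=10.0, size=n)   # unknown contamination Q
sample = np.where(from_outlier, outliers, clean)
# A robust estimator must recover the density of P from `sample`
# despite the eps-fraction of outliers.
```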

Multiclass Classification via Class-Weighted Nearest Neighbors

1 code implementation 9 Apr 2020 Justin Khim, Ziyu Xu, Shashank Singh

We study statistical properties of the k-nearest neighbors algorithm for multiclass classification, with a focus on settings where the number of classes may be large and/or classes may be highly imbalanced.

Classification • General Classification
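
Here is a minimal sketch of a class-weighted nearest-neighbor vote in the spirit of the entry above, using inverse-class-frequency weights. It is an illustration rather than the paper's exact decision rule or weighting scheme.

```python
# Class-weighted k-nearest-neighbor classification: each neighbor's vote is scaled by a
# per-class weight (here inverse class frequency), which helps under heavy imbalance.
import numpy as np

def weighted_knn_predict(X_train, y_train, x, k, class_weights):
    dists = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(dists)[:k]                      # indices of the k nearest neighbors
    votes = np.zeros(len(class_weights))
    for j in nn:
        votes[y_train[j]] += class_weights[y_train[j]]
    return int(np.argmax(votes))

# Toy imbalanced dataset with inverse-frequency class weights.
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (95, 2)), rng.normal(2, 1, (5, 2))])
y_train = np.array([0] * 95 + [1] * 5)
counts = np.bincount(y_train)
class_weights = counts.sum() / (len(counts) * counts)
print(weighted_knn_predict(X_train, y_train, np.array([2.0, 2.0]),
                           k=5, class_weights=class_weights))
```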

Differentiable Architecture Compression

no code implementations ICLR 2020 Shashank Singh, Ashish Khetan, Zohar Karnin

In many learning situations, resources at inference time are significantly more constrained than resources at training time.

Image Classification • Model Compression

DARC: Differentiable ARchitecture Compression

no code implementations 20 May 2019 Shashank Singh, Ashish Khetan, Zohar Karnin

In many learning situations, resources at inference time are significantly more constrained than resources at training time.

Image Classification • Model Compression +1

Nonparametric Density Estimation & Convergence Rates for GANs under Besov IPM Losses

no code implementations NeurIPS 2019 Ananya Uppal, Shashank Singh, Barnabás Póczos

Thus, we show how our results imply bounds on the statistical error of a GAN, showing, for example, that GANs can strictly outperform the best linear estimator.

Density Estimation

Nonparametric Density Estimation under Adversarial Losses

no code implementations NeurIPS 2018 Shashank Singh, Ananya Uppal, Boyue Li, Chun-Liang Li, Manzil Zaheer, Barnabás Póczos

We study minimax convergence rates of nonparametric density estimation under a large class of loss functions called "adversarial losses", which, besides classical $\mathcal{L}^p$ losses, includes maximum mean discrepancy (MMD), Wasserstein distance, and total variation distance.

Density Estimation
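
The "adversarial losses" in the entry above are integral probability metrics (IPMs), in which a class of discriminator functions determines the metric. For reference, the standard form is:

```latex
% Integral probability metric (IPM) determined by a discriminator class F:
\[
  d_{\mathcal{F}}(P, Q) \;=\; \sup_{f \in \mathcal{F}}
    \left| \mathbb{E}_{X \sim P}[f(X)] - \mathbb{E}_{X \sim Q}[f(X)] \right| .
\]
% Taking F to be the unit ball of an RKHS gives MMD, the 1-Lipschitz functions give the
% Wasserstein-1 distance, and uniformly bounded functions give total variation distance
% (up to the normalization convention).
```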

Minimax Estimation of Quadratic Fourier Functionals

no code implementations 30 Mar 2018 Shashank Singh, Bharath K. Sriperumbudur, Barnabás Póczos

We study estimation of (semi-)inner products between two nonparametric probability distributions, given IID samples from each distribution.

Translation
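
As a hedged sketch of the kind of quantity estimated in the entry above, the following estimates the $L^2$ inner product $\langle p, q \rangle = \int p\,q$ for densities on $[0, 2\pi]$ by plugging empirical Fourier coefficients into a truncated series. The truncation level and this uncorrected plug-in form are illustrative choices, not necessarily the estimator analyzed in the paper.

```python
# Truncated-Fourier-series estimate of <p, q> = \int p q for densities on [0, 2*pi],
# built from empirical Fourier coefficients of IID samples from p and q.
import numpy as np

def fourier_coeffs(sample: np.ndarray, K: int) -> np.ndarray:
    """Empirical coefficients a_k = (1/(2*pi)) * E[exp(-i k X)] for k = -K..K."""
    ks = np.arange(-K, K + 1)
    return np.exp(-1j * np.outer(ks, sample)).mean(axis=1) / (2 * np.pi)

def inner_product_estimate(sample_p, sample_q, K: int = 10) -> float:
    a = fourier_coeffs(sample_p, K)
    b = fourier_coeffs(sample_q, K)
    # Parseval: \int p q = 2*pi * sum_k a_k * conj(b_k) for densities on [0, 2*pi].
    return float(np.real(2 * np.pi * np.sum(a * np.conj(b))))

rng = np.random.default_rng(0)
x = rng.vonmises(mu=np.pi, kappa=2.0, size=5000) % (2 * np.pi)   # sample from p
y = rng.vonmises(mu=np.pi, kappa=1.0, size=5000) % (2 * np.pi)   # sample from q
print(inner_product_estimate(x, y, K=10))
```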

Minimax Distribution Estimation in Wasserstein Distance

no code implementations 24 Feb 2018 Shashank Singh, Barnabás Póczos

The Wasserstein metric is an important measure of distance between probability distributions, with applications in machine learning, statistics, probability theory, and data analysis.

BIG-bench Machine Learning

On the Reconstruction Risk of Convolutional Sparse Dictionary Learning

1 code implementation 29 Aug 2017 Shashank Singh, Barnabás Póczos, Jian Ma

Sparse dictionary learning (SDL) has become a popular method for adaptively identifying parsimonious representations of a dataset, a fundamental problem in machine learning and signal processing.

Dictionary Learning • Time Series +1

Nonparanormal Information Estimation

no code implementations ICML 2017 Shashank Singh, Barnabás Póczos

To address this, we propose estimators for mutual information when $p$ is assumed to be a nonparanormal (a.k.a. Gaussian copula) model, a semiparametric compromise between Gaussian and nonparametric extremes.

Mutual Information Estimation
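
The following is a hedged sketch of mutual-information estimation under a Gaussian-copula (nonparanormal) assumption: map each marginal to normal scores via ranks, estimate the correlation of the transformed pair, and apply the Gaussian mutual-information formula. This is one common rank-based construction in the spirit of the entry above, not necessarily the paper's exact estimator.

```python
# Rank-based MI estimate under a bivariate Gaussian-copula model:
#   I(X; Y) = -0.5 * log(1 - rho^2), with rho estimated on normal scores.
import numpy as np
from scipy.stats import norm, rankdata

def nonparanormal_mi(x: np.ndarray, y: np.ndarray) -> float:
    n = len(x)
    zx = norm.ppf(rankdata(x) / (n + 1))   # normal-scores transform of each marginal
    zy = norm.ppf(rankdata(y) / (n + 1))
    rho = np.corrcoef(zx, zy)[0, 1]
    return -0.5 * np.log(1.0 - rho ** 2)

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = 0.8 * x + 0.6 * rng.normal(size=5000)    # correlated pair with rho = 0.8
print(nonparanormal_mi(np.exp(x), y))         # invariant to monotone marginal transforms
```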

Finite-Sample Analysis of Fixed-k Nearest Neighbor Density Functional Estimators

no code implementations NeurIPS 2016 Shashank Singh, Barnabás Póczos

We provide finite-sample analysis of a general framework for using k-nearest neighbor statistics to estimate functionals of a nonparametric continuous probability density, including entropies and divergences.
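
As one concrete member of the estimator class described above, here is a hedged sketch of a fixed-$k$ nearest-neighbor (Kozachenko-Leonenko-style) entropy estimator; the constants and bias corrections may differ from those in the paper's framework.

```python
# Fixed-k nearest-neighbor entropy estimator (Kozachenko-Leonenko style):
#   H_hat = psi(n) - psi(k) + log(V_d) + (d/n) * sum_i log(r_{i,k}),
# where r_{i,k} is the distance from X_i to its k-th neighbor and V_d is the
# volume of the d-dimensional unit ball.
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def knn_entropy(X: np.ndarray, k: int = 3) -> float:
    n, d = X.shape
    tree = cKDTree(X)
    # Distance to the k-th neighbor (column 0 of the query is the point itself).
    r_k = tree.query(X, k=k + 1)[0][:, k]
    log_ball_volume = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)
    return digamma(n) - digamma(k) + log_ball_volume + d * np.mean(np.log(r_k))

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 2))
print(knn_entropy(X, k=3))   # true entropy of a 2-D standard normal is ~2.838 nats
```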

Efficient Nonparametric Smoothness Estimation

1 code implementation NeurIPS 2016 Shashank Singh, Simon S. Du, Barnabás Póczos

Sobolev quantities (norms, inner products, and distances) of probability density functions are important in the theory of nonparametric statistics, but have rarely been used in practice, partly due to a lack of practical estimators.

Two-sample testing
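
For reference, one standard Fourier-analytic form of the Sobolev quantities mentioned above (for densities on the unit circle, with Fourier coefficients $\tilde{p}(k)$; weighting conventions vary across references) is:

```latex
% Sobolev norm and inner product via Fourier coefficients:
\[
  \|p\|_{H^s}^2 \;=\; \sum_{k \in \mathbb{Z}} |k|^{2s}\,|\tilde{p}(k)|^2,
  \qquad
  \langle p, q \rangle_{H^s} \;=\; \sum_{k \in \mathbb{Z}} |k|^{2s}\,\tilde{p}(k)\,\overline{\tilde{q}(k)} .
\]
% Since \tilde{p}(k) is (up to normalization) an expectation of e^{-ikX} under p,
% these quantities can be estimated by plugging in empirical Fourier coefficients.
```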

Distributed Gradient Descent in Bacterial Food Search

no code implementations 11 Apr 2016 Shashank Singh, Sabrina Rashid, Zhicheng Long, Saket Navlakha, Hanna Salman, Zoltan N. Oltvai, Ziv Bar-Joseph

Communication and coordination play a major role in the ability of bacterial cells to adapt to ever changing environments and conditions.

Quantitative Methods

Analysis of k-Nearest Neighbor Distances with Application to Entropy Estimation

no code implementations 28 Mar 2016 Shashank Singh, Barnabás Póczos

Estimating entropy and mutual information consistently is important for many machine learning applications.

Exponential Concentration of a Density Functional Estimator

no code implementations NeurIPS 2014 Shashank Singh, Barnabás Póczos

We analyze a plug-in estimator for a large class of integral functionals of one or more continuous probability densities.
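
Here is a hedged sketch of one instance of the plug-in construction described above: estimate the density with a kernel density estimator, then plug that estimate into the integral functional, here $F(p) = \int p(x)^2\,dx$. The kernel, bandwidth, grid, and choice of functional are all illustrative.

```python
# Kernel plug-in estimator of an integral functional of a density, F(p) = \int p^2.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
X = rng.normal(size=2000)                     # sample from the unknown density p

kde = gaussian_kde(X)                         # step 1: estimate the density
grid = np.linspace(X.min() - 3.0, X.max() + 3.0, 4000)
p_hat = kde(grid)
estimate = np.sum(p_hat ** 2) * (grid[1] - grid[0])   # step 2: plug p_hat into F

print(estimate)   # for a standard normal the true value is 1 / (2 * sqrt(pi)) ≈ 0.282
```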

Generalized Exponential Concentration Inequality for Rényi Divergence Estimation

no code implementations 28 Mar 2016 Shashank Singh, Barnabás Póczos

Estimating divergences in a consistent way is of great importance in many machine learning tasks.

BIG-bench Machine Learning
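
For reference, the Rényi divergence whose estimation the paper above concerns is defined (for $\alpha > 0$, $\alpha \neq 1$, and densities $p$, $q$) as:

```latex
% Rényi-alpha divergence between distributions P and Q with densities p and q:
\[
  D_{\alpha}(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1}
    \log \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx ,
\]
% which recovers the Kullback-Leibler divergence in the limit alpha -> 1.
```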
