Search Results for author: Pradeep K. Ravikumar

Found 38 papers, 2 papers with code

On Human-Aligned Risk Minimization

no code implementations NeurIPS 2019 Liu Leqi, Adarsh Prasad, Pradeep K. Ravikumar

The statistical decision theoretic foundations of modern machine learning have largely focused on the minimization of the expectation of some loss function for a given task.

Decision Making Fairness
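
For reference, the baseline criterion this abstract alludes to is expected-loss minimization, written here in standard notation (my rendering, not the paper's):

```latex
\min_{\theta}\; \mathbb{E}_{(X,Y)\sim P}\big[\ell\big(f_{\theta}(X),\, Y\big)\big]
```

The paper's titular question is whether this plain expectation is the functional of the loss distribution that best matches human risk preferences.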

On the (In)fidelity and Sensitivity of Explanations

1 code implementation NeurIPS 2019 Chih-Kuan Yeh, Cheng-Yu Hsieh, Arun Suggala, David I. Inouye, Pradeep K. Ravikumar

We analyze optimal explanations with respect to both these measures, and while the optimal explanation for sensitivity is a vacuous constant explanation, the optimal explanation for infidelity is a novel combination of two popular explanation methods.
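
A minimal sketch, in my own code and notation, of an infidelity-style measure: it scores how well the attributions, contracted against a random perturbation, predict the model's actual change in output. The Gaussian perturbation and sample count are illustrative choices, not the paper's.

```python
import numpy as np

def infidelity(f, x, attribution, n_samples=100, scale=0.1, seed=0):
    """Mean squared gap between the explanation's predicted output change
    and the model's actual output change under random perturbations."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_samples):
        I = rng.normal(0.0, scale, size=x.shape)        # random perturbation
        pred_change = I.ravel() @ attribution.ravel()   # explanation's prediction
        true_change = f(x) - f(x - I)                   # actual change (f returns a scalar)
        errs.append((pred_change - true_change) ** 2)
    return float(np.mean(errs))
```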

Optimal Analysis of Subset-Selection Based L_p Low-Rank Approximation

no code implementations NeurIPS 2019 Chen Dan, Hong Wang, Hongyang Zhang, Yuchen Zhou, Pradeep K. Ravikumar

We show that this algorithm has an approximation ratio of $O((k+1)^{1/p})$ for $1\le p\le 2$ and $O((k+1)^{1-1/p})$ for $p\ge 2$.

The Sample Complexity of Semi-Supervised Learning with Nonparametric Mixture Models

no code implementations NeurIPS 2018 Chen Dan, Liu Leqi, Bryon Aragam, Pradeep K. Ravikumar, Eric P. Xing

We study the sample complexity of semi-supervised learning (SSL) and introduce new assumptions based on the mismatch between a mixture model learned from unlabeled data and the true mixture model induced by the (unknown) class conditional distributions.

Binary Classification Classification +2

Connecting Optimization and Regularization Paths

no code implementations NeurIPS 2018 Arun Suggala, Adarsh Prasad, Pradeep K. Ravikumar

We study the implicit regularization properties of optimization techniques by explicitly connecting their optimization paths to the regularization paths of “corresponding” regularized problems.
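
A minimal sketch of the connection on least squares (my illustration, not the paper's general construction): gradient-descent iterates approximately track ridge solutions, with earlier stopping playing the role of heavier regularization.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + 0.1 * rng.normal(size=n)

eta, w = 1e-3, np.zeros(p)
for t in range(1, 4001):
    w -= eta * X.T @ (X @ w - y) / n          # gradient step on least squares
    if t % 1000 == 0:
        lam = 1.0 / (eta * t)                 # heuristic time/penalty pairing
        ridge = np.linalg.solve(X.T @ X / n + lam * np.eye(p), X.T @ y / n)
        print(t, np.linalg.norm(w - ridge))   # gap stays small along the path
```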

The Expxorcist: Nonparametric Graphical Models Via Conditional Exponential Densities

no code implementations NeurIPS 2017 Arun Suggala, Mladen Kolar, Pradeep K. Ravikumar

Non-parametric multivariate density estimation faces strong statistical and computational bottlenecks, and the more practical approaches impose near-parametric assumptions on the form of the density functions.

Density Estimation

On Separability of Loss Functions, and Revisiting Discriminative Vs Generative Models

no code implementations NeurIPS 2017 Adarsh Prasad, Alexandru Niculescu-Mizil, Pradeep K. Ravikumar

We revisit the classical analysis of generative vs. discriminative models for general exponential families and for high-dimensional settings.

Dual Decomposed Learning with Factorwise Oracle for Structural SVM of Large Output Domain

no code implementations NeurIPS 2016 Ian En-Hsu Yen, Xiangru Huang, Kai Zhong, Ruohan Zhang, Pradeep K. Ravikumar, Inderjit S. Dhillon

In this work, we show that, by decomposing the training of a Structural Support Vector Machine (SVM) into a series of multiclass SVM problems connected through messages, one can replace the expensive structured oracle with a Factorwise Maximization Oracle (FMO), which admits an efficient implementation with complexity sublinear in the size of the factor domain.

Consistent Multilabel Classification

no code implementations NeurIPS 2015 Oluwasanmi O. Koyejo, Nagarajan Natarajan, Pradeep K. Ravikumar, Inderjit S. Dhillon

In particular, we show that for multilabel metrics constructed as instance-, micro- and macro-averages, the population optimal classifier can be decomposed into binary classifiers based on the marginal instance-conditional distribution of each label, with a weak association between labels via the threshold.

Classification General Classification
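
A minimal sketch of the decomposition this result licenses (my code; the shared threshold delta would be tuned for the target metric): per-label binary classifiers fit on the marginals, coupled only through one common threshold.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_marginal_classifiers(X, Y):
    """Y: (n_samples, n_labels) binary indicator matrix."""
    return [LogisticRegression(max_iter=1000).fit(X, Y[:, j])
            for j in range(Y.shape[1])]

def predict_multilabel(models, X, delta=0.5):
    # delta is the weak coupling between labels: one threshold for all of them
    P = np.column_stack([m.predict_proba(X)[:, 1] for m in models])
    return (P >= delta).astype(int)
```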

Fast Classification Rates for High-dimensional Gaussian Generative Models

no code implementations NeurIPS 2015 Tianyang Li, Adarsh Prasad, Pradeep K. Ravikumar

We consider the problem of binary classification when the covariates, conditioned on each of the response values, follow multivariate Gaussian distributions.

Binary Classification Classification +3
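
A minimal sketch of the plug-in generative classifier whose rates the paper studies (my dense, low-dimensional rendering; the paper's concern is how sparse estimates of these quantities behave in high dimensions):

```python
import numpy as np

def fit_gaussian_generative(X, y):
    """Estimate class priors, means, and covariances for y in {0, 1}."""
    return {c: (np.mean(y == c), X[y == c].mean(0),
                np.cov(X[y == c], rowvar=False))
            for c in (0, 1)}

def predict(params, X):
    def log_joint(c):
        pi, mu, S = params[c]
        d = X - mu
        _, logdet = np.linalg.slogdet(S)
        maha = np.einsum('ij,jk,ik->i', d, np.linalg.inv(S), d)
        return np.log(pi) - 0.5 * (logdet + maha)     # log prior + Gaussian log-lik
    return (log_joint(1) > log_joint(0)).astype(int)  # plug-in Bayes rule
```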

Sparse Linear Programming via Primal and Dual Augmented Coordinate Descent

no code implementations NeurIPS 2015 Ian En-Hsu Yen, Kai Zhong, Cho-Jui Hsieh, Pradeep K. Ravikumar, Inderjit S. Dhillon

Over the past decades, Linear Programming (LP) has been widely used in different areas and is considered one of the mature technologies in numerical optimization.

Fixed-Length Poisson MRF: Adding Dependencies to the Multinomial

no code implementations NeurIPS 2015 David I. Inouye, Pradeep K. Ravikumar, Inderjit S. Dhillon

We show the effectiveness of our LPMRF distribution over Multinomial models by evaluating test-set perplexity on a dataset of abstracts and on Wikipedia.

Topic Models

Beyond Sub-Gaussian Measurements: High-Dimensional Structured Estimation with Sub-Exponential Designs

no code implementations NeurIPS 2015 Vidyashankar Sivakumar, Arindam Banerjee, Pradeep K. Ravikumar

In contrast, for the sub-exponential setting, we show that the sample complexity and the estimation error will depend on the exponential width of the corresponding sets, and the analysis holds for any norm.


Collaborative Filtering with Graph Information: Consistency and Scalable Methods

2 code implementations NeurIPS 2015 Nikhil Rao, Hsiang-Fu Yu, Pradeep K. Ravikumar, Inderjit S. Dhillon

Low rank matrix completion plays a fundamental role in collaborative filtering applications, the key idea being that the variables lie in a smaller subspace than the ambient space.

 Ranked #1 on Recommendation Systems on Flixster (using extra training data)

Collaborative Filtering Low-Rank Matrix Completion +1
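
A graph-regularized completion objective of the kind analyzed here, in my notation: R is the partially observed matrix, P_Ω keeps observed entries, and L_u, L_v are graph Laplacians carrying the side information.

```latex
\min_{U, V}\ \big\| P_{\Omega}\big(R - U V^{\top}\big) \big\|_F^2
\;+\; \lambda_u\, \mathrm{tr}\!\big(U^{\top} L_u U\big)
\;+\; \lambda_v\, \mathrm{tr}\!\big(V^{\top} L_v V\big)
```

The trace penalties pull the latent factors of graph-adjacent rows and columns toward each other.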

QUIC & DIRTY: A Quadratic Approximation Approach for Dirty Statistical Models

no code implementations NeurIPS 2014 Cho-Jui Hsieh, Inderjit S. Dhillon, Pradeep K. Ravikumar, Stephen Becker, Peder A. Olsen

In this paper, we develop a family of algorithms for optimizing “superposition-structured” or “dirty” statistical estimators for high-dimensional problems, which involve minimizing the sum of a smooth loss function and a hybrid regularizer.

Model Selection Multi-Task Learning +1

Capturing Semantically Meaningful Word Dependencies with an Admixture of Poisson MRFs

no code implementations NeurIPS 2014 David I. Inouye, Pradeep K. Ravikumar, Inderjit S. Dhillon

We develop a fast algorithm for the Admixture of Poisson MRFs (APM) topic model and propose a novel metric to directly evaluate this model.

Topic Models

A Representation Theory for Ranking Functions

no code implementations NeurIPS 2014 Harsh H. Pareek, Pradeep K. Ravikumar

This paper presents a representation theory for permutation-valued functions, which in their general form can also be called listwise ranking functions.

Learning-To-Rank

Constant Nullspace Strong Convexity and Fast Convergence of Proximal Methods under High-Dimensional Settings

no code implementations NeurIPS 2014 Ian En-Hsu Yen, Cho-Jui Hsieh, Pradeep K. Ravikumar, Inderjit S. Dhillon

State of the art statistical estimators for high-dimensional problems take the form of regularized, and hence non-smooth, convex programs.

Elementary Estimators for Graphical Models

no code implementations NeurIPS 2014 Eunho Yang, Aurelie C. Lozano, Pradeep K. Ravikumar

We propose a class of closed-form estimators for sparsity-structured graphical models, expressed as exponential family distributions, under high-dimensional settings.
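
A minimal sketch of what a closed-form estimator of this flavor can look like in the Gaussian case (my reading; thresholds and details hedged): threshold the sample covariance, invert it, and soft-threshold the result, with no iterative optimization.

```python
import numpy as np

def soft_threshold(A, t):
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def elementary_ggm(X, nu=0.1, lam=0.1):
    S = np.cov(X, rowvar=False)
    T = soft_threshold(S, nu)
    np.fill_diagonal(T, np.diag(S))        # leave the diagonal un-thresholded
    Theta = np.linalg.inv(T)               # assumes T is well-conditioned
    out = soft_threshold(Theta, lam)       # sparsify the precision estimate
    np.fill_diagonal(out, np.diag(Theta))
    return out
```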

Sparse Random Feature Algorithm as Coordinate Descent in Hilbert Space

no code implementations NeurIPS 2014 Ian En-Hsu Yen, Ting-Wei Lin, Shou-De Lin, Pradeep K. Ravikumar, Inderjit S. Dhillon

In this paper, we propose a Sparse Random Feature algorithm, which learns a sparse non-linear predictor by minimizing an $\ell_1$-regularized objective function over the Hilbert space induced by the kernel function.
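
A minimal sketch of the algorithm as described (my code, with random Fourier features for an RBF kernel standing in for the kernel-induced feature map; hyperparameters illustrative):

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_random_feature_fit(X, y, n_features=500, gamma=1.0, alpha=0.01,
                              seed=0):
    rng = np.random.default_rng(seed)
    # Random Fourier features approximating exp(-gamma * ||x - x'||^2)
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    Z = np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
    # l1 regularization zeroes out most random features
    model = Lasso(alpha=alpha).fit(Z, y)
    return model, (W, b)
```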

Dirty Statistical Models

no code implementations NeurIPS 2013 Eunho Yang, Pradeep K. Ravikumar

We provide a unified framework for the high-dimensional analysis of “superposition-structured” or “dirty” statistical models: where the model parameters are a “superposition” of structurally constrained parameters.

regression
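
In symbols, the generic “dirty” estimator (my notation, following the superposition idea in the abstract):

```latex
\widehat{\theta} \in \arg\min_{\theta_1, \ldots, \theta_K}\;
\mathcal{L}\Big( \textstyle\sum_{k=1}^{K} \theta_k \Big)
\;+\; \sum_{k=1}^{K} \lambda_k\, \mathcal{R}_k(\theta_k)
```

with each component carrying its own structural regularizer, for instance an elementwise-sparse part plus a low-rank part.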

Conditional Random Fields via Univariate Exponential Families

no code implementations NeurIPS 2013 Eunho Yang, Pradeep K. Ravikumar, Genevera I. Allen, Zhandong Liu

We thus introduce a novel subclass of CRFs, derived by requiring the node-wise conditional distributions of the response variables, conditioned on the rest of the responses and the covariates, to arise from univariate exponential families.
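
In my notation, the node-wise construction sets each conditional to a univariate exponential family whose natural parameter depends on the remaining responses and the covariates:

```latex
P\big(Y_s \mid Y_{\setminus s}, X\big)
\;\propto\;
\exp\Big( E_s\big(Y_{\setminus s}, X\big)\, B(Y_s) + C(Y_s) \Big)
```

with sufficient statistic B and base measure C; consistency of these conditionals then pins down the joint CRF.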

BIG & QUIC: Sparse Inverse Covariance Estimation for a Million Variables

no code implementations NeurIPS 2013 Cho-Jui Hsieh, Matyas A. Sustik, Inderjit S. Dhillon, Pradeep K. Ravikumar, Russell Poldrack

The l1-regularized Gaussian maximum likelihood estimator (MLE) has been shown to have strong statistical guarantees in recovering a sparse inverse covariance matrix even under high-dimensional settings.

Clustering
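
The estimator being scaled here is the graphical lasso (l1-penalized Gaussian MLE). A small-scale reference fit with scikit-learn, for orientation only; BIG & QUIC itself targets around a million variables:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
model = GraphicalLasso(alpha=0.1).fit(X)   # alpha is the l1 penalty weight
Theta = model.precision_                   # sparse inverse covariance estimate
```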

Learning with Noisy Labels

no code implementations NeurIPS 2013 Nagarajan Natarajan, Inderjit S. Dhillon, Pradeep K. Ravikumar, Ambuj Tewari

In this paper, we theoretically study the problem of binary classification in the presence of random classification noise: the learner, instead of seeing the true labels, sees labels that have been independently flipped with some small probability.

Binary Classification General Classification +1
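
A minimal sketch of the paper's unbiased-loss construction as I read it (constants hedged): with flip rates rho_pos = P(flip | y = +1) and rho_neg = P(flip | y = -1), the surrogate below has, in expectation over the noisy labels, the same value as the clean-label loss.

```python
def unbiased_loss(loss, t, y, rho_pos, rho_neg):
    """loss(t, y): any base loss on score t and label y in {-1, +1}."""
    rho_y = rho_pos if y == 1 else rho_neg
    rho_other = rho_neg if y == 1 else rho_pos
    return ((1.0 - rho_other) * loss(t, y) - rho_y * loss(t, -y)) / (
        1.0 - rho_pos - rho_neg)
```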

On Poisson Graphical Models

no code implementations NeurIPS 2013 Eunho Yang, Pradeep K. Ravikumar, Genevera I. Allen, Zhandong Liu

Undirected graphical models, such as Gaussian graphical models, Ising, and multinomial/categorical graphical models, are widely used in a variety of applications for modeling distributions over a large number of variables.


Large Scale Distributed Sparse Precision Estimation

no code implementations NeurIPS 2013 Huahua Wang, Arindam Banerjee, Cho-Jui Hsieh, Pradeep K. Ravikumar, Inderjit S. Dhillon

We consider the problem of sparse precision matrix estimation in high dimensions using the CLIME estimator, which has several desirable theoretical properties.
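
For orientation, the CLIME estimator in my notation: an l1-minimization under an elementwise constraint,

```latex
\widehat{\Theta} \in \arg\min_{\Theta}\ \|\Theta\|_{1}
\quad \text{subject to} \quad
\big\| \widehat{\Sigma}\,\Theta - I \big\|_{\infty} \le \lambda
```

which separates across the columns of Θ into independent linear programs, the property that makes large-scale distributed solving natural.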

Graphical Models via Generalized Linear Models

no code implementations NeurIPS 2012 Eunho Yang, Genevera Allen, Zhandong Liu, Pradeep K. Ravikumar

Our models allow one to estimate networks for a wide class of exponential distributions, such as the Poisson, negative binomial, and exponential, by fitting penalized GLMs to select the neighborhood for each node.
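
A minimal sketch of node-wise neighborhood selection (my code, shown for the binary case since scikit-learn ships an l1-penalized logistic GLM; Poisson or negative binomial nodes would swap in the corresponding penalized GLM):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def neighborhood_select(X, C=0.1):
    """X: (n_samples, p) binary node observations. Returns estimated edges."""
    p = X.shape[1]
    edges = set()
    for j in range(p):
        others = np.delete(np.arange(p), j)
        glm = LogisticRegression(penalty='l1', solver='liblinear', C=C)
        glm.fit(X[:, others], X[:, j])       # regress node j on the rest
        for k in others[np.abs(glm.coef_.ravel()) > 1e-8]:
            edges.add(tuple(sorted((j, int(k)))))
    return edges
```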

On Learning Discrete Graphical Models using Greedy Methods

no code implementations NeurIPS 2011 Ali Jalali, Christopher C. Johnson, Pradeep K. Ravikumar

In this paper, we address the problem of learning the structure of a pairwise graphical model from samples in a high-dimensional setting.

Nearest Neighbor based Greedy Coordinate Descent

no code implementations NeurIPS 2011 Inderjit S. Dhillon, Pradeep K. Ravikumar, Ambuj Tewari

In particular, we investigate the greedy coordinate descent algorithm, and note that performing the greedy step efficiently weakens the costly dependence on the problem size provided the solution is sparse.
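
A minimal sketch of the greedy step on least squares (my illustration; the paper's contribution is performing the argmax below via nearest-neighbor search rather than a full scan):

```python
import numpy as np

def greedy_cd(X, y, n_steps=100):
    n, p = X.shape
    w, r = np.zeros(p), -y.astype(float).copy()   # residual r = X @ w - y
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_steps):
        g = X.T @ r / n                   # coordinate-wise gradients
        j = int(np.argmax(np.abs(g)))     # greedy choice: steepest coordinate
        step = -n * g[j] / col_sq[j]      # exact line search along coordinate j
        w[j] += step
        r += step * X[:, j]
    return w
```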

Greedy Algorithms for Structurally Constrained High Dimensional Problems

no code implementations NeurIPS 2011 Ambuj Tewari, Pradeep K. Ravikumar, Inderjit S. Dhillon

A hallmark of modern machine learning is its ability to deal with high dimensional problems by exploiting structural assumptions that limit the degrees of freedom in the underlying model.


A Dirty Model for Multi-task Learning

no code implementations NeurIPS 2010 Ali Jalali, Sujay Sanghavi, Chao Ruan, Pradeep K. Ravikumar

However, these papers also caution that the performance of such block-regularized methods is highly dependent on the extent to which the features are shared across tasks.

Multi-Task Learning regression

Information-theoretic lower bounds on the oracle complexity of convex optimization

no code implementations NeurIPS 2009 Alekh Agarwal, Martin J. Wainwright, Peter L. Bartlett, Pradeep K. Ravikumar

The extensive use of convex optimization in machine learning and statistics makes such an understanding critical for characterizing the fundamental computational limits of learning and estimation.


A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers

no code implementations NeurIPS 2009 Sahand Negahban, Bin Yu, Martin J. Wainwright, Pradeep K. Ravikumar

The estimation of high-dimensional parametric models requires imposing some structure on the models, for instance that they be sparse, or that matrix structured parameters have low rank.
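
The unified object of study, in my notation: a regularized M-estimator

```latex
\widehat{\theta} \in \arg\min_{\theta}\
\Big\{ \mathcal{L}(\theta; Z_1^n) + \lambda_n\, \mathcal{R}(\theta) \Big\},
\qquad
\mathcal{R}(\alpha + \beta) = \mathcal{R}(\alpha) + \mathcal{R}(\beta)
\ \ \text{for } \alpha \in \mathcal{M},\ \beta \in \overline{\mathcal{M}}^{\perp},
```

where the decomposability of the regularizer over a suitable subspace pair is what drives the general error bounds.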
