no code implementations • NeurIPS 2019 • Liu Leqi, Adarsh Prasad, Pradeep K. Ravikumar
The statistical decision theoretic foundations of modern machine learning have largely focused on the minimization of the expectation of some loss function for a given task.
1 code implementation • NeurIPS 2019 • Chih-Kuan Yeh, Cheng-Yu Hsieh, Arun Suggala, David I. Inouye, Pradeep K. Ravikumar
We analyze optimal explanations with respect to both these measures, and while the optimal explanation for sensitivity is a vacuous constant explanation, the optimal explanation for infidelity is a novel combination of two popular explanation methods.
no code implementations • NeurIPS 2019 • Chen Dan, Hong Wang, Hongyang Zhang, Yuchen Zhou, Pradeep K. Ravikumar
We show that this algorithm has an approximation ratio of $O((k+1)^{1/p})$ for $1\le p\le 2$ and $O((k+1)^{1-1/p})$ for $p\ge 2$.
no code implementations • NeurIPS 2018 • Ian En-Hsu Yen, Wei-Cheng Lee, Kai Zhong, Sung-En Chang, Pradeep K. Ravikumar, Shou-De Lin
We consider a generalization of mixed regression where the response is an additive combination of several mixture components.
no code implementations • NeurIPS 2018 • Chen Dan, Liu Leqi, Bryon Aragam, Pradeep K. Ravikumar, Eric P. Xing
We study the sample complexity of semi-supervised learning (SSL) and introduce new assumptions based on the mismatch between a mixture model learned from unlabeled data and the true mixture model induced by the (unknown) class conditional distributions.
no code implementations • NeurIPS 2018 • Arun Suggala, Adarsh Prasad, Pradeep K. Ravikumar
We study the implicit regularization properties of optimization techniques by explicitly connecting their optimization paths to the regularization paths of “corresponding” regularized problems.
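The optimization-path/regularization-path connection can be illustrated with a toy NumPy experiment (my sketch, not the paper's construction): gradient descent on least squares started at zero produces iterates whose norms grow monotonically toward the unregularized solution, mirroring a ridge regularization path with a shrinking penalty.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))
b = rng.standard_normal(30)

# Gradient descent on 0.5*||Ax - b||^2 from x = 0: early iterates are
# strongly shrunk, like heavily ridge-penalized solutions, and the path
# relaxes toward the least-squares solution as iterations accumulate.
eta = 0.9 / np.linalg.norm(A, 2) ** 2   # safe step below 1/L
x = np.zeros(5)
norms = []
for _ in range(100):
    x -= eta * A.T @ (A @ x - b)
    norms.append(np.linalg.norm(x))
```

Stopping gradient descent early thus acts like choosing a larger ridge penalty, which is the kind of correspondence the paper makes precise.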
no code implementations • NeurIPS 2017 • Arun Suggala, Mladen Kolar, Pradeep K. Ravikumar
Non-parametric multivariate density estimation faces strong statistical and computational bottlenecks, and the more practical approaches impose near-parametric assumptions on the form of the density functions.
no code implementations • NeurIPS 2017 • Adarsh Prasad, Alexandru Niculescu-Mizil, Pradeep K. Ravikumar
We revisit the classical analysis of generative vs discriminative models for general exponential families, and high-dimensional settings.
no code implementations • NeurIPS 2016 • Ian En-Hsu Yen, Xiangru Huang, Kai Zhong, Ruohan Zhang, Pradeep K. Ravikumar, Inderjit S. Dhillon
In this work, we show that, by decomposing the training of a Structural Support Vector Machine (SVM) into a series of multiclass SVM problems connected through messages, one can replace the expensive structured oracle with a Factorwise Maximization Oracle (FMO) that admits efficient implementations with complexity sublinear in the factor domain.
no code implementations • NeurIPS 2015 • Oluwasanmi O. Koyejo, Nagarajan Natarajan, Pradeep K. Ravikumar, Inderjit S. Dhillon
In particular, we show that for multilabel metrics constructed as instance-, micro- and macro-averages, the population optimal classifier can be decomposed into binary classifiers based on the marginal instance-conditional distribution of each label, with a weak association between labels via the threshold.
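The decomposition result suggests a simple plug-in procedure: estimate each label's marginal instance-conditional probability, then threshold each label independently. A minimal sketch (the shared threshold and the probability estimates are illustrative placeholders, not the paper's metric-specific choices):

```python
import numpy as np

def plugin_multilabel_predict(prob, threshold=0.5):
    """Plug-in multilabel classifier: threshold each label's estimated
    marginal conditional probability P(y_j = 1 | x) at a shared value.
    `prob` is an (n_samples, n_labels) array of probability estimates."""
    return (prob >= threshold).astype(int)

# two instances, three labels
probs = np.array([[0.9, 0.2, 0.6],
                  [0.1, 0.7, 0.4]])
preds = plugin_multilabel_predict(probs)
```

In the paper's characterization, the metric being optimized determines the threshold; the binary per-label structure above is what the decomposition guarantees.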
no code implementations • NeurIPS 2015 • Tianyang Li, Adarsh Prasad, Pradeep K. Ravikumar
We consider the problem of binary classification when the covariates, conditioned on each of the response values, follow multivariate Gaussian distributions.
no code implementations • NeurIPS 2015 • Ian En-Hsu Yen, Kai Zhong, Cho-Jui Hsieh, Pradeep K. Ravikumar, Inderjit S. Dhillon
Over the past decades, Linear Programming (LP) has been widely used across many areas and is considered one of the most mature technologies in numerical optimization.
no code implementations • NeurIPS 2015 • David I. Inouye, Pradeep K. Ravikumar, Inderjit S. Dhillon
We show the effectiveness of our LPMRF distribution over Multinomial models by evaluating the test set perplexity on a dataset of abstracts and Wikipedia.
no code implementations • NeurIPS 2015 • Vidyashankar Sivakumar, Arindam Banerjee, Pradeep K. Ravikumar
In contrast, for the sub-exponential setting, we show that the sample complexity and the estimation error will depend on the exponential width of the corresponding sets, and the analysis holds for any norm.
no code implementations • NeurIPS 2015 • Eunho Yang, Aurelie C. Lozano, Pradeep K. Ravikumar
We propose a class of closed-form estimators for GLMs under high-dimensional sampling regimes.
2 code implementations • NeurIPS 2015 • Nikhil Rao, Hsiang-Fu Yu, Pradeep K. Ravikumar, Inderjit S. Dhillon
Low rank matrix completion plays a fundamental role in collaborative filtering applications, the key idea being that the variables lie in a smaller subspace than the ambient space.
Ranked #1 on Recommendation Systems on Flixster (using extra training data)
no code implementations • NeurIPS 2014 • Cho-Jui Hsieh, Inderjit S. Dhillon, Pradeep K. Ravikumar, Stephen Becker, Peder A. Olsen
In this paper, we develop a family of algorithms for optimizing “superposition-structured” or “dirty” statistical estimators for high-dimensional problems involving the minimization of the sum of a smooth loss function with a hybrid regularization.
no code implementations • NeurIPS 2014 • David I. Inouye, Pradeep K. Ravikumar, Inderjit S. Dhillon
We develop a fast algorithm for the Admixture of Poisson MRFs (APM) topic model and propose a novel metric to directly evaluate this model.
no code implementations • NeurIPS 2014 • Harsh H. Pareek, Pradeep K. Ravikumar
This paper presents a representation theory for permutation-valued functions, which in their general form can also be called listwise ranking functions.
no code implementations • NeurIPS 2014 • Ian En-Hsu Yen, Cho-Jui Hsieh, Pradeep K. Ravikumar, Inderjit S. Dhillon
State of the art statistical estimators for high-dimensional problems take the form of regularized, and hence non-smooth, convex programs.
no code implementations • NeurIPS 2014 • Eunho Yang, Aurelie C. Lozano, Pradeep K. Ravikumar
We propose a class of closed-form estimators for sparsity-structured graphical models, expressed as exponential family distributions, under high-dimensional settings.
no code implementations • NeurIPS 2014 • Oluwasanmi O. Koyejo, Nagarajan Natarajan, Pradeep K. Ravikumar, Inderjit S. Dhillon
We consider a fairly large family of performance metrics given by ratios of linear combinations of the four fundamental population quantities.
no code implementations • NeurIPS 2014 • Ian En-Hsu Yen, Ting-Wei Lin, Shou-De Lin, Pradeep K. Ravikumar, Inderjit S. Dhillon
In this paper, we propose a Sparse Random Feature algorithm, which learns a sparse non-linear predictor by minimizing an $\ell_1$-regularized objective function over the Hilbert space induced by the kernel function.
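A rough sketch of the idea, with random Fourier features standing in for the kernel-induced feature space and plain ISTA as the $\ell_1$ solver (the paper's actual algorithm, feature sampling scheme, and constants may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy 1-D regression data
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

# random Fourier features approximating an RBF kernel (bandwidth assumed 1.0)
D = 100
W = rng.standard_normal((1, D))          # random frequencies
b = rng.uniform(0.0, 2.0 * np.pi, D)     # random phases
Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)

# ISTA (proximal gradient) for (1/2n)*||Zw - y||^2 + lam*||w||_1
lam = 0.01
step = 0.9 * len(y) / np.linalg.norm(Z, 2) ** 2
w = np.zeros(D)
for _ in range(500):
    grad = Z.T @ (Z @ w - y) / len(y)
    w = w - step * grad
    w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft-threshold
```

The $\ell_1$ penalty drives most feature weights exactly to zero, so the learned predictor uses only a sparse subset of the random features.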
no code implementations • NeurIPS 2013 • Eunho Yang, Pradeep K. Ravikumar
We provide a unified framework for the high-dimensional analysis of “superposition-structured” or “dirty” statistical models: where the model parameters are a “superposition” of structurally constrained parameters.
no code implementations • NeurIPS 2013 • Eunho Yang, Pradeep K. Ravikumar, Genevera I. Allen, Zhandong Liu
We thus introduce a novel subclass of CRFs, derived by specifying the node-wise conditional distributions of the response variables, conditioned on the rest of the responses and the covariates, as arising from univariate exponential families.
no code implementations • NeurIPS 2013 • Cho-Jui Hsieh, Matyas A. Sustik, Inderjit S. Dhillon, Pradeep K. Ravikumar, Russell Poldrack
The l1-regularized Gaussian maximum likelihood estimator (MLE) has been shown to have strong statistical guarantees in recovering a sparse inverse covariance matrix even under high-dimensional settings.
no code implementations • NeurIPS 2013 • Nagarajan Natarajan, Inderjit S. Dhillon, Pradeep K. Ravikumar, Ambuj Tewari
In this paper, we theoretically study the problem of binary classification in the presence of random classification noise: the learner, instead of seeing the true labels, sees labels that have independently been flipped with some small probability.
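A sketch of the unbiased-estimator construction from this line of work (the notation and scaling are my rendering): given the two class-conditional flip rates, any loss can be corrected so that its expectation under the noisy labels equals the clean loss.

```python
def unbiased_loss(loss, rho_plus, rho_minus):
    """Given flip rates rho_plus = P(flip | y = +1) and
    rho_minus = P(flip | y = -1), return a corrected loss whose
    expectation under the noisy labels equals the clean loss.
    Assumes rho_plus + rho_minus < 1."""
    denom = 1.0 - rho_plus - rho_minus
    def corrected(t, y_noisy):
        if y_noisy == 1:
            return ((1.0 - rho_minus) * loss(t, 1) - rho_plus * loss(t, -1)) / denom
        return ((1.0 - rho_plus) * loss(t, -1) - rho_minus * loss(t, 1)) / denom
    return corrected
```

Averaging the corrected loss over the flip distribution recovers the clean loss exactly, which is what makes risk minimization on noisy data consistent.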
no code implementations • NeurIPS 2013 • Eunho Yang, Pradeep K. Ravikumar, Genevera I. Allen, Zhandong Liu
Undirected graphical models, such as Gaussian graphical models, Ising, and multinomial/categorical graphical models, are widely used in a variety of applications for modeling distributions over a large number of variables.
no code implementations • NeurIPS 2013 • Huahua Wang, Arindam Banerjee, Cho-Jui Hsieh, Pradeep K. Ravikumar, Inderjit S. Dhillon
We consider the problem of sparse precision matrix estimation in high dimensions using the CLIME estimator, which has several desirable theoretical properties.
no code implementations • NeurIPS 2012 • Eunho Yang, Genevera Allen, Zhandong Liu, Pradeep K. Ravikumar
Our models allow one to estimate networks for a wide class of exponential distributions, such as the Poisson, negative binomial, and exponential, by fitting penalized GLMs to select the neighborhood for each node.
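The node-wise recipe can be sketched in a few lines: for each node, fit an $\ell_1$-penalized GLM of that node on all the others and read the neighborhood off the nonzero coefficients. Shown below for the Bernoulli (Ising-like) member of the family with a plain proximal-gradient solver; the solver, step size, and penalty level are illustrative assumptions, and the paper's estimators swap in Poisson, negative binomial, or exponential links in the same recipe.

```python
import numpy as np

def ising_neighborhood(X, node, lam=0.05, step=0.5, iters=500):
    """Node-wise neighborhood selection sketch: l1-penalized logistic
    regression of one +/-1-valued node on the rest via proximal
    gradient (ISTA). Nonzero coefficients mark estimated neighbors."""
    y = (X[:, node] + 1) / 2.0          # map {-1,+1} -> {0,1} for the logistic loss
    Z = np.delete(X, node, axis=1)
    n, d = Z.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(Z @ w)))
        w -= step * (Z.T @ (p - y) / n)
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft-threshold
    return w
```

Running this for every node and symmetrizing the resulting edge sets (e.g. by union or intersection) yields the full estimated graph.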
no code implementations • NeurIPS 2012 • Cho-Jui Hsieh, Arindam Banerjee, Inderjit S. Dhillon, Pradeep K. Ravikumar
We derive a bound on the distance of the approximate solution to the true solution.
no code implementations • NeurIPS 2011 • Ali Jalali, Christopher C. Johnson, Pradeep K. Ravikumar
In this paper, we address the problem of learning the structure of a pairwise graphical model from samples in a high-dimensional setting.
no code implementations • NeurIPS 2011 • Inderjit S. Dhillon, Pradeep K. Ravikumar, Ambuj Tewari
In particular, we investigate the greedy coordinate descent algorithm, and note that performing the greedy step efficiently weakens the costly dependence on the problem size provided the solution is sparse.
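One instantiation of greedy coordinate descent, sketched with assumed details (a GS-r-style greedy rule applied to $\ell_1$-regularized least squares): each iteration updates only the coordinate whose exact one-dimensional minimizer moves the most. Note that this naive version recomputes the full gradient, so it does not exhibit the sublinear-time greedy step the paper analyzes; it only shows the update rule.

```python
import numpy as np

def greedy_cd_lasso(A, b, lam, iters=200):
    """Greedy coordinate descent for 0.5*||Ax - b||^2 + lam*||x||_1.
    Each iteration computes the exact coordinate-wise minimizer for
    every coordinate and updates only the one that would move the most."""
    n, d = A.shape
    col_sq = (A ** 2).sum(axis=0)
    x = np.zeros(d)
    r = -b                                # residual A @ x - b
    for _ in range(iters):
        grad = A.T @ r
        z = x - grad / col_sq             # unregularized coordinate minimizers
        new = np.sign(z) * np.maximum(np.abs(z) - lam / col_sq, 0.0)
        j = int(np.argmax(np.abs(new - x)))   # greedy coordinate choice
        r += A[:, j] * (new[j] - x[j])    # cheap residual update
        x[j] = new[j]
    return x
```

Because each step touches one coordinate, the residual can be maintained incrementally; making the argmax itself cheap is where the structural assumptions on sparsity come in.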
no code implementations • NeurIPS 2011 • Ambuj Tewari, Pradeep K. Ravikumar, Inderjit S. Dhillon
A hallmark of modern machine learning is its ability to deal with high dimensional problems by exploiting structural assumptions that limit the degrees of freedom in the underlying model.
no code implementations • NeurIPS 2010 • Ali Jalali, Sujay Sanghavi, Chao Ruan, Pradeep K. Ravikumar
However, these papers also caution that the performance of such block-regularized methods is highly dependent on the extent to which the features are shared across tasks.
no code implementations • NeurIPS 2009 • Alekh Agarwal, Martin J. Wainwright, Peter L. Bartlett, Pradeep K. Ravikumar
The extensive use of convex optimization in machine learning and statistics makes such an understanding critical for characterizing the fundamental computational limits of learning and estimation.
no code implementations • NeurIPS 2009 • Sahand Negahban, Bin Yu, Martin J. Wainwright, Pradeep K. Ravikumar
The estimation of high-dimensional parametric models requires imposing some structure on the models, for instance that they be sparse, or that matrix structured parameters have low rank.
no code implementations • NeurIPS 2008 • Vincent Q. Vu, Bin Yu, Thomas Naselaris, Kendrick Kay, Jack Gallant, Pradeep K. Ravikumar
We propose a novel hierarchical, nonlinear model that predicts brain activity in area V1 evoked by natural images.