no code implementations • ICML 2020 • Liu Leqi, Justin Khim, Adarsh Prasad, Pradeep Ravikumar
In this work, we study a novel notion of L-Risk based on the classical idea of rank-weighted learning.
no code implementations • ICML 2020 • Chen Dan, Yuting Wei, Pradeep Ravikumar
In this paper, we provide the first result on the optimal minimax guarantees for the excess risk of adversarially robust classification, under the Gaussian mixture model proposed by Schmidt et al. (2018).
no code implementations • 30 Jun 2023 • Tianyu Chen, Kevin Bello, Bryon Aragam, Pradeep Ravikumar
This paper focuses on identifying functional mechanism shifts in two or more related SCMs over the same set of variables, without estimating the entire DAG structure of each SCM.
no code implementations • 30 Jun 2023 • Chang Deng, Kevin Bello, Bryon Aragam, Pradeep Ravikumar
Recently, a new class of non-convex optimization problems motivated by the statistical problem of learning an acyclic directed graphical model from data has attracted significant interest.
no code implementations • 4 Jun 2023 • Simon Buchholz, Goutham Rajendran, Elan Rosenfeld, Bryon Aragam, Bernhard Schölkopf, Pradeep Ravikumar
We study the problem of learning causal representations from unknown, latent interventions in a general setting, where the latent distribution is Gaussian but the mixing function is completely general.
no code implementations • 1 Jun 2023 • Runtian Zhai, Bingbin Liu, Andrej Risteski, Zico Kolter, Pradeep Ravikumar
Our first main theorem provides, for an arbitrary encoder, near tight bounds for both the estimation error incurred by fitting the linear probe on top of the encoder, and the approximation error entailed by the fitness of the RKHS the encoder learns.
no code implementations • 31 May 2023 • Che-Ping Tsai, Jiong Zhang, Eli Chien, Hsiang-Fu Yu, Cho-Jui Hsieh, Pradeep Ravikumar
We introduce a novel class of sample-based explanations we term high-dimensional representers, that can be used to explain the predictions of a regularized high-dimensional model in terms of importance weights for each of the training samples.
1 code implementation • 26 May 2023 • Chang Deng, Kevin Bello, Bryon Aragam, Pradeep Ravikumar
In this work, we delve into the optimization challenges associated with this class of non-convex programs.
no code implementations • 25 Mar 2023 • Rattana Pukdee, Dylan Sam, J. Zico Kolter, Maria-Florina Balcan, Pradeep Ravikumar
While supervised learning assumes the presence of labeled data, we may have prior information about how models should behave.
no code implementations • 16 Feb 2023 • Wenbin Zhang, Juyong Kim, Zichong Wang, Pradeep Ravikumar, Jeremy Weiss
Algorithmic fairness, studying how to make machine learning (ML) algorithms fair, is an established area of ML.
no code implementations • 23 Oct 2022 • Maria-Florina Balcan, Rattana Pukdee, Pradeep Ravikumar, Hongyang Zhang
Adversarial training is a standard technique for training adversarially robust models.
1 code implementation • 7 Oct 2022 • Rattana Pukdee, Dylan Sam, Maria-Florina Balcan, Pradeep Ravikumar
Semi-supervised learning and weakly supervised learning are important paradigms that aim to reduce the growing demand for labeled data in current machine learning applications.
2 code implementations • 16 Sep 2022 • Kevin Bello, Bryon Aragam, Pradeep Ravikumar
From the optimization side, we drop the typically used augmented Lagrangian scheme and propose DAGMA (DAGs via M-matrices for Acyclicity), a method that resembles the central path for barrier methods.
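A minimal sketch of the log-det acyclicity function at the core of DAGMA, in NumPy (my own illustration under the M-matrix reading of the paper, not the authors' implementation; the helper name and the default value of s are mine):

```python
import numpy as np

def h_logdet(W, s=1.0):
    """Sketch of the log-det acyclicity function:
    h(W) = -log det(s*I - W∘W) + d*log(s),
    which is zero exactly when W is the weighted adjacency matrix of a DAG,
    provided s*I - W∘W is an M-matrix (e.g., s exceeds the spectral radius of W∘W)."""
    d = W.shape[0]
    M = s * np.eye(d) - W * W  # W∘W: elementwise square, entrywise non-negative
    sign, logabsdet = np.linalg.slogdet(M)
    if sign <= 0:
        raise ValueError("s*I - W∘W is not an M-matrix for this s; increase s.")
    return float(-logabsdet + d * np.log(s))
```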
no code implementations • 31 Aug 2022 • Andrew Bai, Chih-Kuan Yeh, Pradeep Ravikumar, Neil Y. C. Lin, Cho-Jui Hsieh
We show that for a general (potentially non-linear) concept, we can mathematically evaluate how a small change in the concept affects the model's prediction, which leads to an extension of gradient-based interpretation to the concept space.
no code implementations • 20 Jun 2022 • Bohdan Kivva, Goutham Rajendran, Pradeep Ravikumar, Bryon Aragam
We prove identifiability of a broad class of deep latent variable models that (a) have universal approximation capabilities and (b) are the decoders of variational autoencoders that are commonly used in practice.
1 code implementation • 7 Jun 2022 • Dinghuai Zhang, Hongyang Zhang, Aaron Courville, Yoshua Bengio, Pradeep Ravikumar, Arun Sai Suggala
Consequently, an emerging line of work has focused on learning an ensemble of neural networks to defend against adversarial attacks.
1 code implementation • 2 Mar 2022 • Che-Ping Tsai, Chih-Kuan Yeh, Pradeep Ravikumar
We show that by additionally requiring the faithful interaction indices to satisfy interaction-extensions of the standard individual Shapley axioms (dummy, symmetry, linearity, and efficiency), we obtain a unique Faithful Shapley Interaction index, which we denote Faith-Shap, as a natural generalization of the Shapley value to interactions.
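For reference, the individual Shapley value that these interaction axioms generalize can be written, for a set function $v$ on players $N$, as (a standard formula, not specific to this paper):

$$\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr).$$

Faith-Shap extends this attribution from individual players to interactions while retaining the faithfulness and axiomatic properties described above.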
no code implementations • 25 Feb 2022 • Chih-Kuan Yeh, Been Kim, Pradeep Ravikumar
We start by introducing concept explanations, including the class of Concept Activation Vectors (CAVs), which characterize concepts using vectors in appropriate spaces of neural activations, and then discuss properties of useful concepts and approaches to measuring the usefulness of concept vectors.
1 code implementation • 24 Feb 2022 • Chih-Kuan Yeh, Ankur Taly, Mukund Sundararajan, Frederick Liu, Pradeep Ravikumar
However, we observe that since the activations connected to the last layer of weights contain "shared logic", the data influence calculated via the last-layer weights is prone to a "cancellation effect", where the influences of different examples have large magnitudes that contradict each other.
no code implementations • 24 Feb 2022 • Chih-Kuan Yeh, Kuan-Yun Lee, Frederick Liu, Pradeep Ravikumar
We formalize, in a set of axioms, the desiderata of value functions that respect both the model and the data manifold and are robust to perturbations in off-manifold regions. We show that there exists a unique value function satisfying these axioms, which we term the Joint Baseline value function, with the resulting Shapley value termed Joint Baseline Shapley (JBshap), and we validate the effectiveness of JBshap in experiments.
no code implementations • 18 Feb 2022 • Bingbin Liu, Daniel Hsu, Pradeep Ravikumar, Andrej Risteski
This lens is undoubtedly very interesting, but suffers from the problem that there isn't a "canonical" set of downstream tasks to focus on -- in practice, this problem is usually resolved by competing on the benchmark dataset du jour.
1 code implementation • 14 Feb 2022 • Elan Rosenfeld, Pradeep Ravikumar, Andrej Risteski
Towards this end, we introduce Domain-Adjusted Regression (DARE), a convex objective for learning a linear predictor that is provably robust under a new model of distribution shift.
1 code implementation • 28 Jan 2022 • Runtian Zhai, Chen Dan, Zico Kolter, Pradeep Ravikumar
Together, our results show that a broad category of approaches, which we term GRW approaches, is not able to achieve distributionally robust generalization.
1 code implementation • NeurIPS 2021 • Runtian Zhai, Chen Dan, Arun Sai Suggala, Zico Kolter, Pradeep Ravikumar
To learn such randomized classifiers, we propose the Boosted CVaR Classification framework which is motivated by a direct relationship between CVaR and a classical boosting algorithm called LPBoost.
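As a small, self-contained illustration of the quantity being controlled (the helper below is my own, not the paper's code), the empirical CVaR at level alpha is simply the average of the worst alpha-fraction of losses:

```python
import numpy as np

def empirical_cvar(losses, alpha):
    """Empirical CVaR at level alpha: the mean of the worst alpha-fraction
    of losses (exact when alpha * len(losses) is an integer)."""
    losses = np.sort(np.asarray(losses, dtype=float))[::-1]  # descending order
    k = max(1, int(np.ceil(alpha * len(losses))))
    return float(losses[:k].mean())

# e.g., empirical_cvar([0.1, 0.2, 0.9, 1.5], alpha=0.5) averages the two largest losses
```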
no code implementations • ICLR 2022 • Bingbin Liu, Elan Rosenfeld, Pradeep Ravikumar, Andrej Risteski
Noise-contrastive estimation (NCE) is a statistically consistent method for learning unnormalized probabilistic models.
1 code implementation • ICLR 2022 • So Yeon Min, Devendra Singh Chaplot, Pradeep Ravikumar, Yonatan Bisk, Ruslan Salakhutdinov
In contrast, we propose a modular method with structured representations that (1) builds a semantic map of the scene and (2) performs exploration with a semantic search policy, to achieve the natural language goal.
no code implementations • 25 Aug 2021 • Che-Ping Tsai, Adarsh Prasad, Sivaraman Balakrishnan, Pradeep Ravikumar
We consider the task of heavy-tailed statistical estimation given streaming $p$-dimensional samples.
1 code implementation • NeurIPS 2021 • Bohdan Kivva, Goutham Rajendran, Pradeep Ravikumar, Bryon Aragam
We study the problem of reconstructing a causal graphical model from data in the presence of latent variables.
no code implementations • ACL 2021 • Juyong Kim, Pradeep Ravikumar, Joshua Ainslie, Santiago Ontañón
Compositional generalization is the ability to generalize systematically to a new data distribution by combining known components.
1 code implementation • 11 Jun 2021 • Runtian Zhai, Chen Dan, J. Zico Kolter, Pradeep Ravikumar
Many machine learning tasks involve subpopulation shift where the testing data distribution is a subpopulation of the training distribution.
no code implementations • 15 Apr 2021 • Zeyu Zhou, Ziyu Gong, Pradeep Ravikumar, David I. Inouye
Existing flow-based approaches estimate multiple flows independently, which is equivalent to learning multiple full generative models.
no code implementations • 3 Mar 2021 • Bingbin Liu, Pradeep Ravikumar, Andrej Risteski
Contrastive learning is a family of self-supervised methods where a model is trained to solve a classification task constructed from unlabeled data.
no code implementations • 25 Feb 2021 • Elan Rosenfeld, Pradeep Ravikumar, Andrej Risteski
A popular assumption for out-of-distribution generalization is that the training data comprises sub-datasets, each drawn from a distinct distribution; the goal is then to "interpolate" these distributions and "extrapolate" beyond them -- this objective is broadly known as domain generalization.
no code implementations • 20 Feb 2021 • Saurabh Garg, Joshua Zhanson, Emilio Parisotto, Adarsh Prasad, J. Zico Kolter, Zachary C. Lipton, Sivaraman Balakrishnan, Ruslan Salakhutdinov, Pradeep Ravikumar
In this paper, we present a detailed empirical study to characterize the heavy-tailed nature of the gradients of the PPO surrogate reward function.
no code implementations • NeurIPS 2021 • Dhruv Malik, Yuanzhi Li, Pradeep Ravikumar
Agents trained by reinforcement learning (RL) often fail to generalize beyond the environment they were trained in, even when presented with new scenarios that seem similar to the training environment.
no code implementations • 19 Dec 2020 • Han Zhao, Chen Dan, Bryon Aragam, Tommi S. Jaakkola, Geoffrey J. Gordon, Pradeep Ravikumar
A wide range of machine learning applications such as privacy-preserving learning, algorithmic fairness, and domain adaptation/generalization among others, involve learning invariant representations of the data that aim to achieve two competing goals: (a) maximize information or accuracy with respect to a target response, and (b) maximize invariance or independence with respect to a set of protected features (e.g., for fairness, privacy, etc.).
no code implementations • NeurIPS 2020 • Adarsh Prasad, Vishwak Srinivasan, Sivaraman Balakrishnan, Pradeep Ravikumar
We study the problem of learning Ising models in a setting where some of the samples from the underlying distribution can be arbitrarily corrupted.
no code implementations • NeurIPS 2020 • Arun Suggala, Bingbin Liu, Pradeep Ravikumar
Using thorough empirical evaluation, we show that our learning algorithms have superior performance over traditional additive boosting algorithms, as well as existing greedy learning techniques for DNNs.
no code implementations • ICLR 2021 • Elan Rosenfeld, Pradeep Ravikumar, Andrej Risteski
We furthermore present the very first results in the non-linear regime: we demonstrate that IRM can fail catastrophically unless the test data are sufficiently similar to the training distribution -- this is precisely the issue that it was intended to solve.
no code implementations • 29 Jun 2020 • Chen Dan, Yuting Wei, Pradeep Ravikumar
In this paper, we provide the first result on the optimal minimax guarantees for the excess risk of adversarially robust classification, under the Gaussian mixture model proposed by Schmidt et al. (2018).
no code implementations • 19 Jun 2020 • Kartik Gupta, Arun Sai Suggala, Adarsh Prasad, Praneeth Netrapalli, Pradeep Ravikumar
We view the problem of designing minimax estimators as finding a mixed strategy Nash equilibrium of a zero-sum game.
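In symbols, the problem being recast as a game is the classical minimax estimation problem (standard decision-theoretic notation, mine rather than the paper's):

$$\inf_{\hat\theta}\;\sup_{\theta \in \Theta}\;\mathbb{E}_{X \sim P_\theta}\bigl[L\bigl(\hat\theta(X), \theta\bigr)\bigr],$$

where the statistician plays a (possibly randomized) estimator $\hat\theta$ and nature plays a parameter or prior; mixed-strategy Nash equilibria of this zero-sum game correspond to minimax estimators and least-favorable priors.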
no code implementations • 14 Jun 2020 • Sijie He, Xinyan Li, Timothy DelSole, Pradeep Ravikumar, Arindam Banerjee
Sub-seasonal climate forecasting (SSF) focuses on predicting key climate variables such as temperature and precipitation in the 2-week to 2-month time scales.
no code implementations • ICLR 2021 • Cheng-Yu Hsieh, Chih-Kuan Yeh, Xuanqing Liu, Pradeep Ravikumar, Seungyeon Kim, Sanjiv Kumar, Cho-Jui Hsieh
In this paper, we establish a novel set of evaluation criteria for such feature based explanations by robustness analysis.
1 code implementation • ICML 2020 • Ziyu Xu, Chen Dan, Justin Khim, Pradeep Ravikumar
We define a robust risk that minimizes risk over a set of weightings and show excess risk bounds for this problem.
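One generic way to write such a robust risk (a sketch under the worst-case-over-weightings reading; the notation is mine) is

$$R_{\mathcal{W}}(\theta) \;=\; \max_{w \in \mathcal{W}}\;\sum_{i=1}^{n} w_i\,\ell\bigl(\theta; x_i, y_i\bigr),$$

which is then minimized over the model parameters $\theta$, with the excess risk bounds quantifying how close the learned $\theta$ comes to the best achievable robust risk.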
1 code implementation • ICLR 2020 • Biswajit Paria, Chih-Kuan Yeh, Ian E. H. Yen, Ning Xu, Pradeep Ravikumar, Barnabás Póczos
Deep representation learning has become one of the most widely adopted approaches for visual search, recommendation, and identification.
no code implementations • ICML 2020 • Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, J. Zico Kolter
Machine learning algorithms are known to be susceptible to data poisoning attacks, where an adversary manipulates the training data to degrade performance of the resulting classifier.
2 code implementations • ICLR 2020 • Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, Li-Wei Wang
Adversarial training is one of the most popular ways to learn robust models but is usually attack-dependent and time costly.
no code implementations • NeurIPS 2019 • Fan Yang, Liu Leqi, Yifan Wu, Zachary C. Lipton, Pradeep Ravikumar, William W. Cohen, Tom Mitchell
The ability to infer latent psychological traits from human behavior is key to developing personalized human-interacting machine learning systems.
2 code implementations • 2 Dec 2019 • David I. Inouye, Liu Leqi, Joon Sik Kim, Bryon Aragam, Pradeep Ravikumar
To address these drawbacks, we formalize a method for automating the selection of interesting PDPs and extend PDPs beyond showing single features to show the model response along arbitrary directions, for example in raw feature space or a latent space arising from some generative model.
no code implementations • 30 Oct 2019 • Chen Dan, Hong Wang, Hongyang Zhang, Yuchen Zhou, Pradeep Ravikumar
We show that this algorithm has an approximation ratio of $O((k+1)^{1/p})$ for $1\le p\le 2$ and $O((k+1)^{1-1/p})$ for $p\ge 2$.
2 code implementations • NeurIPS 2020 • Chih-Kuan Yeh, Been Kim, Sercan O. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
Next, we propose a concept discovery method that aims to infer a complete set of concepts that are additionally encouraged to be interpretable, which addresses the limitations of existing methods on concept explanations.
2 code implementations • 29 Sep 2019 • Xun Zheng, Chen Dan, Bryon Aragam, Pradeep Ravikumar, Eric P. Xing
We develop a framework for learning sparse nonparametric directed acyclic graphs (DAGs) from data.
no code implementations • 25 Sep 2019 • Chih-Kuan Yeh, Been Kim, Sercan Arik, Chun-Liang Li, Pradeep Ravikumar, Tomas Pfister
Next, we propose a concept discovery method that considers two additional constraints to encourage the interpretability of the discovered concepts.
no code implementations • 25 Sep 2019 • Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, J. Zico Kolter
This paper considers label-flipping attacks, a type of data poisoning attack where an adversary relabels a small number of examples in a training set in order to degrade the performance of the resulting classifier.
no code implementations • 1 Jul 2019 • Adarsh Prasad, Sivaraman Balakrishnan, Pradeep Ravikumar
Building on this connection, we provide a simple variant of recent computationally-efficient algorithms for mean estimation in Huber's model, which, given our connection, entails that the same efficient sample-pruning based estimators are simultaneously robust to heavy-tailed noise and Huber contamination.
no code implementations • ICLR 2019 • Chih-Kuan Yeh, Ian E. H. Yen, Hong-You Chen, Chun-Pei Yang, Shou-De Lin, Pradeep Ravikumar
State-of-the-art deep neural networks (DNNs) typically have tens of millions of parameters, which might not fit into the upper levels of the memory hierarchy, thus increasing the inference time and energy consumption significantly, and prohibiting their use on edge devices such as mobile phones.
no code implementations • 19 Mar 2019 • Arun Sai Suggala, Kush Bhatia, Pradeep Ravikumar, Prateek Jain
We provide a nearly linear time estimator which consistently estimates the true regression vector, even with $1-o(1)$ fraction of corruptions.
2 code implementations • 27 Jan 2019 • Chih-Kuan Yeh, Cheng-Yu Hsieh, Arun Sai Suggala, David I. Inouye, Pradeep Ravikumar
We analyze optimal explanations with respect to both these measures, and while the optimal explanation for sensitivity is a vacuous constant explanation, the optimal explanation for infidelity is a novel combination of two popular explanation methods.
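A rough sketch of how the infidelity measure can be estimated in practice, via Monte Carlo with Gaussian perturbations (the interface, defaults, and perturbation choice here are my own, not the paper's code):

```python
import numpy as np

def infidelity(f, x, phi, num_samples=1000, sigma=0.1, seed=None):
    """Monte Carlo estimate of explanation infidelity:
    E_I[(I^T phi - (f(x) - f(x - I)))^2],
    where f maps a 1-D array to a scalar, phi is the feature attribution for x,
    and I is a random perturbation (here, Gaussian noise)."""
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(num_samples):
        I = rng.normal(scale=sigma, size=x.shape)
        errors.append((I @ phi - (f(x) - f(x - I))) ** 2)
    return float(np.mean(errors))
```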
no code implementations • 20 Jan 2019 • Umang Bhatt, Pradeep Ravikumar, Jose M. F. Moura
Current approaches for explaining machine learning models fall into two distinct classes: antecedent event influence and value attribution.
1 code implementation • NeurIPS 2018 • Chih-Kuan Yeh, Joon Sik Kim, Ian E. H. Yen, Pradeep Ravikumar
We propose to explain the predictions of a deep neural network, by pointing to the set of what we call representer points in the training set, for a given test point prediction.
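Roughly, the decomposition behind representer points (my paraphrase of the construction; the notation below is mine) writes the pre-activation prediction for a test point $x_t$ as a sum of contributions from training points:

$$\Phi(x_t, \Theta) \;=\; \sum_{i=1}^{n} \alpha_i\, f_i^{\top} f_t, \qquad \alpha_i \;=\; -\frac{1}{2\lambda n}\,\frac{\partial L(x_i, y_i, \Theta)}{\partial \Phi(x_i, \Theta)},$$

where $f_i$ and $f_t$ are last-layer features, $\lambda$ is the weight-decay strength on the last layer, and each term $\alpha_i f_i^{\top} f_t$ is the representer value measuring the influence of training point $x_i$ on the prediction at $x_t$.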
1 code implementation • EMNLP 2018 • Lingfei Wu, Ian E. H. Yen, Kun Xu, Fangli Xu, Avinash Balakrishnan, Pin-Yu Chen, Pradeep Ravikumar, Michael J. Witbrock
While the celebrated Word2Vec technique yields semantically rich representations for individual words, there has been relatively less success in extending it to generate unsupervised sentence or document embeddings.
no code implementations • 10 Oct 2018 • Sung-En Chang, Xun Zheng, Ian E. H. Yen, Pradeep Ravikumar, Rose Yu
Tensor decomposition has been extensively used as a tool for exploratory analysis.
no code implementations • NeurIPS 2018 • Chen Dan, Liu Leqi, Bryon Aragam, Pradeep Ravikumar, Eric P. Xing
We study the sample complexity of semi-supervised learning (SSL) and introduce new assumptions based on the mismatch between a mixture model learned from unlabeled data and the true mixture model induced by the (unknown) class conditional distributions.
no code implementations • ICML 2018 • Ian En-Hsu Yen, Satyen Kale, Felix Yu, Daniel Holtmann-Rice, Sanjiv Kumar, Pradeep Ravikumar
For problems with large output spaces, evaluation of the loss function and its gradient are expensive, typically taking linear time in the size of the output space.
1 code implementation • ICML 2018 • David Inouye, Pradeep Ravikumar
Unlike Gaussianization, our destructive transformation has the elegant property that the density function is equal to the absolute value of the Jacobian determinant.
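Concretely, if the learned destructive transformation $D$ maps the data to a uniform distribution on the unit hypercube, the change-of-variables formula gives (a standard identity, stated here under that uniform-base assumption):

$$p(x) \;=\; p_{\mathrm{unif}}\bigl(D(x)\bigr)\,\bigl|\det \nabla_x D(x)\bigr| \;=\; \bigl|\det \nabla_x D(x)\bigr|,$$

which is the sense in which the density equals the absolute value of the Jacobian determinant.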
no code implementations • 7 Jun 2018 • Arun Sai Suggala, Adarsh Prasad, Vaishnavh Nagarajan, Pradeep Ravikumar
Based on the modified definition, we show that there is no trade-off between adversarial and standard accuracies; there exist classifiers that are robust and achieve high standard accuracy.
no code implementations • ICML 2018 • Bowei Yan, Oluwasanmi Koyejo, Kai Zhong, Pradeep Ravikumar
Complex performance measures, beyond the popular measure of accuracy, are increasingly being used in the context of binary classification.
no code implementations • 26 May 2018 • Simon S. Du, Yining Wang, Sivaraman Balakrishnan, Pradeep Ravikumar, Aarti Singh
We first show that a simple local binning median step can effectively remove the adversarial noise, and that this median estimator is minimax optimal up to absolute constants over the Hölder function class with smoothness parameters smaller than or equal to 1.
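A minimal sketch of the local binning median idea in one dimension on [0, 1] (my own simplified version; the function name and interface are mine):

```python
import numpy as np

def binned_median(x, y, num_bins):
    """Partition [0, 1] into equal-width bins and return the median response in
    each bin; the median step is what makes the estimator robust to a fraction
    of adversarially corrupted y values."""
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    bin_idx = np.clip(np.digitize(x, edges) - 1, 0, num_bins - 1)
    centers = (edges[:-1] + edges[1:]) / 2
    medians = np.array([np.median(y[bin_idx == b]) if np.any(bin_idx == b) else np.nan
                        for b in range(num_bins)])
    return centers, medians
```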
4 code implementations • NeurIPS 2018 • Xun Zheng, Bryon Aragam, Pradeep Ravikumar, Eric P. Xing
This is achieved by a novel characterization of acyclicity that is not only smooth but also exact.
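The smooth and exact acyclicity characterization referred to here can be evaluated in a few lines (a sketch of the trace-of-matrix-exponential form used in this line of work; not the authors' released code):

```python
import numpy as np
from scipy.linalg import expm

def h_acyclic(W):
    """Smooth acyclicity measure: h(W) = tr(exp(W∘W)) - d, where W∘W is the
    elementwise square of the weighted adjacency matrix; h(W) = 0 exactly
    when the graph encoded by W has no directed cycles."""
    d = W.shape[0]
    return float(np.trace(expm(W * W)) - d)
```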
no code implementations • 19 Feb 2018 • Adarsh Prasad, Arun Sai Suggala, Sivaraman Balakrishnan, Pradeep Ravikumar
We provide a new computationally-efficient class of estimators for risk minimization.
no code implementations • 14 Feb 2018 • Lingfei Wu, Ian En-Hsu Yen, Fangli Xu, Pradeep Ravikumar, Michael Witbrock
For many machine learning problem settings, particularly with structured inputs such as sequences or sets of objects, a distance measure between inputs can be specified more naturally than a feature representation.
no code implementations • 12 Feb 2018 • Bryon Aragam, Chen Dan, Eric P. Xing, Pradeep Ravikumar
Motivated by problems in data clustering, we establish general conditions under which families of nonparametric mixture models are identifiable, by introducing a novel framework involving clustering overfitted parametric (i.e., misspecified) mixture models.
no code implementations • 20 Sep 2017 • Ritesh Noothigattu, Snehalkumar 'Neil' S. Gaikwad, Edmond Awad, Sohan Dsouza, Iyad Rahwan, Pradeep Ravikumar, Ariel D. Procaccia
We present a general approach to automating ethical decisions, drawing on machine learning and computational social choice.
no code implementations • ICML 2017 • Qi Lei, Ian En-Hsu Yen, Chao-yuan Wu, Inderjit S. Dhillon, Pradeep Ravikumar
We consider the popular problem of sparse empirical risk minimization with linear predictors and a large number of both features and observations.
no code implementations • ICML 2017 • Ian En-Hsu Yen, Wei-Cheng Lee, Sung-En Chang, Arun Sai Suggala, Shou-De Lin, Pradeep Ravikumar
The latent feature model (LFM), proposed in (Griffiths & Ghahramani, 2005), but possibly with earlier origins, is a generalization of a mixture model, where each instance is generated not from a single latent class but from a combination of latent features.
no code implementations • ICML 2017 • Arun Sai Suggala, Eunho Yang, Pradeep Ravikumar
While there has been some work on tractable approximations, these do not come with strong statistical guarantees, and moreover are relatively computationally expensive.
no code implementations • 23 Oct 2016 • Bowei Yan, Oluwasanmi Koyejo, Kai Zhong, Pradeep Ravikumar
The proposed framework is general, as it applies to both batch and online learning, and to both linear and non-linear models.
1 code implementation • 31 Aug 2016 • David I. Inouye, Eunho Yang, Genevera I. Allen, Pradeep Ravikumar
The Poisson distribution has been widely studied and used for modeling univariate count-valued data.
no code implementations • 5 Aug 2016 • Rashish Tandon, Si Si, Pradeep Ravikumar, Inderjit Dhillon
In this paper, we investigate a divide and conquer approach to Kernel Ridge Regression (KRR).
1 code implementation • 2 Jun 2016 • David I. Inouye, Pradeep Ravikumar, Inderjit S. Dhillon
As in the recent work on square root graphical (SQR) models [Inouye et al. 2016], which was restricted to pairwise dependencies, we give the conditions on the parameters that are needed for normalization using the radial conditionals, similar to the pairwise case.
1 code implementation • ICML 2016 • Ian En-Hsu Yen, Xiangru Huang, Pradeep Ravikumar, Kai Zhong, Inderjit S. Dhillon
In this work, we show that a margin-maximizing loss with an l1 penalty, in the case of Extreme Classification, yields an extremely sparse solution in both the primal and the dual, without sacrificing the expressive power of the predictor.
no code implementations • 11 Mar 2016 • David I. Inouye, Pradeep Ravikumar, Inderjit S. Dhillon
With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix---a condition akin to the positive definiteness of the Gaussian covariance matrix.
no code implementations • 15 Sep 2015 • Suriya Gunasekar, Pradeep Ravikumar, Joydeep Ghosh
We consider the matrix completion problem of recovering a structured matrix from noisy and partial measurements.
1 code implementation • 19 May 2015 • Wesley Tansey, Oscar Hernan Madrid Padilla, Arun Sai Suggala, Pradeep Ravikumar
Specifically, VS-MRFs are the joint graphical model distributions where the node-conditional distributions belong to generic exponential families with general vector space domains.
no code implementations • 7 May 2015 • Nagarajan Natarajan, Oluwasanmi Koyejo, Pradeep Ravikumar, Inderjit S. Dhillon
We provide a general theoretical analysis of expected out-of-sample utility, also referred to as decision-theoretic classification, for non-decomposable binary classification metrics such as F-measure and Jaccard coefficient.
no code implementations • NeurIPS 2014 • Karthikeyan Shanmugam, Rashish Tandon, Alexandros G. Dimakis, Pradeep Ravikumar
We provide a general framework for computing lower-bounds on the sample complexity of recovering the underlying graphs of Ising models, given i.i.d. samples.
no code implementations • 2 Nov 2014 • Eunho Yang, Pradeep Ravikumar, Genevera I. Allen, Yulia Baker, Ying-Wooi Wan, Zhandong Liu
"Mixed Data" comprising a large number of heterogeneous variables (e. g. count, binary, continuous, skewed continuous, among other data types) are prevalent in varied areas such as genomics and proteomics, imaging genetics, national security, social networking, and Internet advertising.
no code implementations • NeurIPS 2014 • Kai Zhong, Ian E. H. Yen, Inderjit S. Dhillon, Pradeep Ravikumar
We consider the class of optimization problems arising from computationally intensive L1-regularized M-estimators, where the function or gradient values are very expensive to compute.
no code implementations • NeurIPS 2011 • Cho-Jui Hsieh, Matyas A. Sustik, Inderjit S. Dhillon, Pradeep Ravikumar
The L1-regularized Gaussian maximum likelihood estimator (MLE) has been shown to have strong statistical guarantees in recovering a sparse inverse covariance matrix, or alternatively the underlying graph structure of a Gaussian Markov Random Field, from very limited samples.
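For context, the estimator in question is the L1-regularized Gaussian MLE (graphical lasso) objective; a minimal way to fit it, using scikit-learn's generic solver rather than the second-order method developed in the paper, is:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Solves: min_{Theta ≻ 0}  tr(S·Theta) - log det(Theta) + alpha * ||Theta||_1 (off-diagonal)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))            # 200 samples, 10 variables (toy data)
model = GraphicalLasso(alpha=0.1).fit(X)
precision = model.precision_              # estimated sparse inverse covariance matrix
```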
no code implementations • 17 Jan 2013 • Eunho Yang, Pradeep Ravikumar, Genevera I. Allen, Zhandong Liu
Undirected graphical models, or Markov networks, are a popular class of statistical models, used in a wide variety of applications.