Search Results for author: Pradeep Ravikumar

Found 91 papers, 27 papers with code

Optimal Statistical Guarantees for Adversarially Robust Gaussian Classification

no code implementations ICML 2020 Chen Dan, Yuting Wei, Pradeep Ravikumar

In this paper, we provide the first result on the optimal minimax guarantees for the excess risk of adversarially robust classification, under the Gaussian mixture model proposed by Schmidt et al. (2018).

Adversarial Robustness Classification +2

iSCAN: Identifying Causal Mechanism Shifts among Nonlinear Additive Noise Models

no code implementations 30 Jun 2023 Tianyu Chen, Kevin Bello, Bryon Aragam, Pradeep Ravikumar

This paper focuses on identifying functional mechanism shifts in two or more related SCMs over the same set of variables, without estimating the entire DAG structure of each SCM.

Global Optimality in Bivariate Gradient-based DAG Learning

no code implementations 30 Jun 2023 Chang Deng, Kevin Bello, Bryon Aragam, Pradeep Ravikumar

Recently, a new class of non-convex optimization problems motivated by the statistical problem of learning an acyclic directed graphical model from data has attracted significant interest.

Learning Linear Causal Representations from Interventions under General Nonlinear Mixing

no code implementations 4 Jun 2023 Simon Buchholz, Goutham Rajendran, Elan Rosenfeld, Bryon Aragam, Bernhard Schölkopf, Pradeep Ravikumar

We study the problem of learning causal representations from unknown, latent interventions in a general setting, where the latent distribution is Gaussian but the mixing function is completely general.

Understanding Augmentation-based Self-Supervised Representation Learning via RKHS Approximation

no code implementations 1 Jun 2023 Runtian Zhai, Bingbin Liu, Andrej Risteski, Zico Kolter, Pradeep Ravikumar

Our first main theorem provides, for an arbitrary encoder, near-tight bounds on both the estimation error incurred by fitting a linear probe on top of the encoder, and the approximation error determined by how well the RKHS the encoder learns fits the target.

Contrastive Learning Data Augmentation +5

Representer Point Selection for Explaining Regularized High-dimensional Models

no code implementations 31 May 2023 Che-Ping Tsai, Jiong Zhang, Eli Chien, Hsiang-Fu Yu, Cho-Jui Hsieh, Pradeep Ravikumar

We introduce a novel class of sample-based explanations we term high-dimensional representers, which can be used to explain the predictions of a regularized high-dimensional model in terms of importance weights for each of the training samples.

Binary Classification Collaborative Filtering +1

Optimizing NOTEARS Objectives via Topological Swaps

1 code implementation 26 May 2023 Chang Deng, Kevin Bello, Bryon Aragam, Pradeep Ravikumar

In this work, we delve into the optimization challenges associated with this class of non-convex programs.

Learning with Explanation Constraints

no code implementations 25 Mar 2023 Rattana Pukdee, Dylan Sam, J. Zico Kolter, Maria-Florina Balcan, Pradeep Ravikumar

While supervised learning assumes the presence of labeled data, we may have prior information about how models should behave.

Individual Fairness Guarantee in Learning with Censorship

no code implementations 16 Feb 2023 Wenbin Zhang, Juyong Kim, Zichong Wang, Pradeep Ravikumar, Jeremy Weiss

Algorithmic fairness, studying how to make machine learning (ML) algorithms fair, is an established area of ML.


Label Propagation with Weak Supervision

1 code implementation 7 Oct 2022 Rattana Pukdee, Dylan Sam, Maria-Florina Balcan, Pradeep Ravikumar

Semi-supervised learning and weakly supervised learning are important paradigms that aim to reduce the growing demand for labeled data in current machine learning applications.

Weakly Supervised Classification Weakly-supervised Learning

DAGMA: Learning DAGs via M-matrices and a Log-Determinant Acyclicity Characterization

2 code implementations 16 Sep 2022 Kevin Bello, Bryon Aragam, Pradeep Ravikumar

From the optimization side, we drop the typically used augmented Lagrangian scheme and propose DAGMA (DAGs via M-matrices for Acyclicity), a method that resembles the central path for barrier methods.

Causal Discovery
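The log-determinant acyclicity characterization named in the title can be sketched in a few lines of numpy. This is an illustrative sketch only, fixing the scale parameter s = 1; it is not the DAGMA optimization algorithm itself:

```python
import numpy as np

def h_logdet(W, s=1.0):
    """Log-determinant acyclicity function: h(W) = -log det(sI - W∘W) + d log s.
    h(W) = 0 exactly when W is the weighted adjacency matrix of a DAG
    (for s exceeding the spectral radius of W∘W)."""
    d = W.shape[0]
    M = s * np.eye(d) - W * W  # Hadamard square keeps entries nonnegative
    return -np.linalg.slogdet(M)[1] + d * np.log(s)

# Acyclic (strictly upper-triangular) weights: h is zero
W_dag = np.array([[0.0, 0.8, 0.0],
                  [0.0, 0.0, 0.5],
                  [0.0, 0.0, 0.0]])
# A 2-cycle between nodes 0 and 1: h is strictly positive
W_cyc = np.array([[0.0, 0.5, 0.0],
                  [0.5, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
print(h_logdet(W_dag))  # ~0.0
print(h_logdet(W_cyc))  # > 0
```

Because h is smooth and vanishes exactly on DAGs, it can serve as a differentiable acyclicity penalty in gradient-based structure learning.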

Concept Gradient: Concept-based Interpretation Without Linear Assumption

no code implementations 31 Aug 2022 Andrew Bai, Chih-Kuan Yeh, Pradeep Ravikumar, Neil Y. C. Lin, Cho-Jui Hsieh

We show that for a general (potentially non-linear) concept, we can mathematically evaluate how a small change in a concept affects the model's prediction, which leads to an extension of gradient-based interpretation to the concept space.

Identifiability of deep generative models without auxiliary information

no code implementations 20 Jun 2022 Bohdan Kivva, Goutham Rajendran, Pradeep Ravikumar, Bryon Aragam

We prove identifiability of a broad class of deep latent variable models that (a) have universal approximation capabilities and (b) are the decoders of variational autoencoders that are commonly used in practice.

Building Robust Ensembles via Margin Boosting

1 code implementation 7 Jun 2022 Dinghuai Zhang, Hongyang Zhang, Aaron Courville, Yoshua Bengio, Pradeep Ravikumar, Arun Sai Suggala

Consequently, an emerging line of work has focused on learning an ensemble of neural networks to defend against adversarial attacks.

Adversarial Robustness

Faith-Shap: The Faithful Shapley Interaction Index

1 code implementation 2 Mar 2022 Che-Ping Tsai, Chih-Kuan Yeh, Pradeep Ravikumar

We show that by additionally requiring the faithful interaction indices to satisfy interaction-extensions of the standard individual Shapley axioms (dummy, symmetry, linearity, and efficiency), we obtain a unique Faithful Shapley Interaction index, which we denote Faith-Shap, as a natural generalization of the Shapley value to interactions.
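Faith-Shap generalizes the classical Shapley value to interactions; the standard value and the axioms it satisfies (dummy, symmetry, linearity, efficiency) can be illustrated on a toy cooperative game. This is a generic sketch of the classical Shapley value, not the paper's interaction index:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values: phi_i = sum over S ⊆ N\\{i} of
    |S|!(n-|S|-1)!/n! * (v(S ∪ {i}) - v(S))."""
    n = len(players)
    phi = {}
    for i in players:
        rest = [p for p in players if p != i]
        total = 0.0
        for r in range(n):
            for S in combinations(rest, r):
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (v(frozenset(S) | {i}) - v(frozenset(S)))
        phi[i] = total
    return phi

# Additive game: by the dummy/linearity axioms, phi_i equals player i's weight
weights = {"a": 1.0, "b": 2.0, "c": 4.0}
v = lambda S: sum(weights[p] for p in S)
print(shapley_values(list(weights), v))  # each phi_i ≈ its weight
```

Efficiency also holds: the values sum to v of the grand coalition.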

Human-Centered Concept Explanations for Neural Networks

no code implementations 25 Feb 2022 Chih-Kuan Yeh, Been Kim, Pradeep Ravikumar

We start by introducing concept explanations, including the class of Concept Activation Vectors (CAVs), which characterize concepts using vectors in appropriate spaces of neural activations; we then discuss different properties of useful concepts and approaches to measuring the usefulness of concept vectors.

First is Better Than Last for Language Data Influence

1 code implementation 24 Feb 2022 Chih-Kuan Yeh, Ankur Taly, Mukund Sundararajan, Frederick Liu, Pradeep Ravikumar

However, we observe that since the activations connected to the last layer of weights contain "shared logic", the data influence calculated via the last-layer weights is prone to a "cancellation effect", where the data influences of different examples have large magnitudes that contradict each other.

Threading the Needle of On and Off-Manifold Value Functions for Shapley Explanations

no code implementations 24 Feb 2022 Chih-Kuan Yeh, Kuan-Yun Lee, Frederick Liu, Pradeep Ravikumar

We formalize, in a set of axioms, the desiderata of value functions that respect both the model and the data manifold and are robust to perturbations in off-manifold regions. We show that there exists a unique value function that satisfies these axioms, which we term the Joint Baseline value function, with the resulting Shapley value termed Joint Baseline Shapley (JBshap), and we validate the effectiveness of JBshap in experiments.

Explainable Artificial Intelligence (XAI) Feature Importance

Masked prediction tasks: a parameter identifiability view

no code implementations 18 Feb 2022 Bingbin Liu, Daniel Hsu, Pradeep Ravikumar, Andrej Risteski

This lens is undoubtedly very interesting, but suffers from the problem that there isn't a "canonical" set of downstream tasks to focus on -- in practice, this problem is usually resolved by competing on the benchmark dataset du jour.

Self-Supervised Learning

Domain-Adjusted Regression or: ERM May Already Learn Features Sufficient for Out-of-Distribution Generalization

1 code implementation 14 Feb 2022 Elan Rosenfeld, Pradeep Ravikumar, Andrej Risteski

Towards this end, we introduce Domain-Adjusted Regression (DARE), a convex objective for learning a linear predictor that is provably robust under a new model of distribution shift.

Domain Generalization Out-of-Distribution Generalization +1

Understanding Why Generalized Reweighting Does Not Improve Over ERM

1 code implementation 28 Jan 2022 Runtian Zhai, Chen Dan, Zico Kolter, Pradeep Ravikumar

Together, our results show that a broad category of what we term GRW approaches are not able to achieve distributionally robust generalization.

Boosted CVaR Classification

1 code implementation NeurIPS 2021 Runtian Zhai, Chen Dan, Arun Sai Suggala, Zico Kolter, Pradeep Ravikumar

To learn such randomized classifiers, we propose the Boosted CVaR Classification framework which is motivated by a direct relationship between CVaR and a classical boosting algorithm called LPBoost.

Classification Decision Making +1
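The CVaR objective the framework controls is simply the average of the worst α-fraction of losses. A hedged empirical sketch (tie-handling at non-integer αn is simplified here):

```python
import numpy as np

def empirical_cvar(losses, alpha):
    """Empirical CVaR_alpha: the mean of the worst alpha-fraction of losses,
    the tail risk that a CVaR classifier is asked to control."""
    losses = np.sort(np.asarray(losses, dtype=float))[::-1]  # worst first
    k = max(1, int(np.ceil(alpha * len(losses))))
    return losses[:k].mean()

losses = np.arange(1, 11, dtype=float)  # losses 1..10
print(empirical_cvar(losses, alpha=0.2))  # 9.5 (mean of the two largest)
print(empirical_cvar(losses, alpha=1.0))  # 5.5 (plain average risk)
```

As α shrinks, CVaR interpolates from the average loss to the worst-case loss, which is why minimizing it targets the hardest examples.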

Analyzing and Improving the Optimization Landscape of Noise-Contrastive Estimation

no code implementations ICLR 2022 Bingbin Liu, Elan Rosenfeld, Pradeep Ravikumar, Andrej Risteski

Noise-contrastive estimation (NCE) is a statistically consistent method for learning unnormalized probabilistic models.

FILM: Following Instructions in Language with Modular Methods

1 code implementation ICLR 2022 So Yeon Min, Devendra Singh Chaplot, Pradeep Ravikumar, Yonatan Bisk, Ruslan Salakhutdinov

In contrast, we propose a modular method with structured representations that (1) builds a semantic map of the scene and (2) performs exploration with a semantic search policy, to achieve the natural language goal.

Imitation Learning Instruction Following

Heavy-tailed Streaming Statistical Estimation

no code implementations 25 Aug 2021 Che-Ping Tsai, Adarsh Prasad, Sivaraman Balakrishnan, Pradeep Ravikumar

We consider the task of heavy-tailed statistical estimation given streaming $p$-dimensional samples.

regression Stochastic Optimization
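One classical way to make a streaming estimate robust to heavy tails is to clip each stochastic-gradient step. The sketch below is a generic clipped-SGD mean estimator, not the paper's estimator; the clip level and step sizes are illustrative choices:

```python
import numpy as np

def streaming_clipped_mean(stream, clip=10.0):
    """One-pass mean estimate: each sample moves the estimate by a clipped
    step, so a single extreme draw cannot move the estimate far."""
    est = 0.0
    for t, x in enumerate(stream, start=1):
        g = np.clip(x - est, -clip, clip)  # clipped gradient of (x - est)^2 / 2
        est += g / t                        # Robbins-Monro step size 1/t
    return est

rng = np.random.default_rng(0)
heavy = rng.standard_t(df=2.5, size=20000) + 3.0  # heavy-tailed, mean 3
print(streaming_clipped_mean(heavy))  # close to 3 despite extreme samples
```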

Learning latent causal graphs via mixture oracles

1 code implementation NeurIPS 2021 Bohdan Kivva, Goutham Rajendran, Pradeep Ravikumar, Bryon Aragam

We study the problem of reconstructing a causal graphical model from data in the presence of latent variables.

Improving Compositional Generalization in Classification Tasks via Structure Annotations

no code implementations ACL 2021 Juyong Kim, Pradeep Ravikumar, Joshua Ainslie, Santiago Ontañón

Compositional generalization is the ability to generalize systematically to a new data distribution by combining known components.


DORO: Distributional and Outlier Robust Optimization

1 code implementation 11 Jun 2021 Runtian Zhai, Chen Dan, J. Zico Kolter, Pradeep Ravikumar

Many machine learning tasks involve subpopulation shift where the testing data distribution is a subpopulation of the training distribution.

Open-Ended Question Answering

Iterative Alignment Flows

no code implementations 15 Apr 2021 Zeyu Zhou, Ziyu Gong, Pradeep Ravikumar, David I. Inouye

Existing flow-based approaches estimate multiple flows independently, which is equivalent to learning multiple full generative models.

Unsupervised Domain Adaptation

Contrastive learning of strong-mixing continuous-time stochastic processes

no code implementations 3 Mar 2021 Bingbin Liu, Pradeep Ravikumar, Andrej Risteski

Contrastive learning is a family of self-supervised methods where a model is trained to solve a classification task constructed from unlabeled data.

Contrastive Learning Time Series +1

An Online Learning Approach to Interpolation and Extrapolation in Domain Generalization

no code implementations 25 Feb 2021 Elan Rosenfeld, Pradeep Ravikumar, Andrej Risteski

A popular assumption for out-of-distribution generalization is that the training data comprises sub-datasets, each drawn from a distinct distribution; the goal is then to "interpolate" these distributions and "extrapolate" beyond them -- this objective is broadly known as domain generalization.

Domain Generalization Out-of-Distribution Generalization

On Proximal Policy Optimization's Heavy-tailed Gradients

no code implementations 20 Feb 2021 Saurabh Garg, Joshua Zhanson, Emilio Parisotto, Adarsh Prasad, J. Zico Kolter, Zachary C. Lipton, Sivaraman Balakrishnan, Ruslan Salakhutdinov, Pradeep Ravikumar

In this paper, we present a detailed empirical study to characterize the heavy-tailed nature of the gradients of the PPO surrogate reward function.

Continuous Control

When Is Generalizable Reinforcement Learning Tractable?

no code implementations NeurIPS 2021 Dhruv Malik, Yuanzhi Li, Pradeep Ravikumar

Agents trained by reinforcement learning (RL) often fail to generalize beyond the environment they were trained in, even when presented with new scenarios that seem similar to the training environment.

reinforcement-learning Reinforcement Learning (RL) +1

Fundamental Limits and Tradeoffs in Invariant Representation Learning

no code implementations 19 Dec 2020 Han Zhao, Chen Dan, Bryon Aragam, Tommi S. Jaakkola, Geoffrey J. Gordon, Pradeep Ravikumar

A wide range of machine learning applications such as privacy-preserving learning, algorithmic fairness, and domain adaptation/generalization among others, involve learning invariant representations of the data that aim to achieve two competing goals: (a) maximize information or accuracy with respect to a target response, and (b) maximize invariance or independence with respect to a set of protected features (e.g., for fairness, privacy, etc.).

Domain Adaptation Fairness +4

On Learning Ising Models under Huber's Contamination Model

no code implementations NeurIPS 2020 Adarsh Prasad, Vishwak Srinivasan, Sivaraman Balakrishnan, Pradeep Ravikumar

We study the problem of learning Ising models in a setting where some of the samples from the underlying distribution can be arbitrarily corrupted.

Generalized Boosting

no code implementations NeurIPS 2020 Arun Suggala, Bingbin Liu, Pradeep Ravikumar

Using thorough empirical evaluation, we show that our learning algorithms have superior performance over traditional additive boosting algorithms, as well as existing greedy learning techniques for DNNs.

Additive models Classification +2

The Risks of Invariant Risk Minimization

no code implementations ICLR 2021 Elan Rosenfeld, Pradeep Ravikumar, Andrej Risteski

We furthermore present the very first results in the non-linear regime: we demonstrate that IRM can fail catastrophically unless the test data are sufficiently similar to the training distribution; this is precisely the issue that it was intended to solve.

Out-of-Distribution Generalization

Sharp Statistical Guarantees for Adversarially Robust Gaussian Classification

no code implementations 29 Jun 2020 Chen Dan, Yuting Wei, Pradeep Ravikumar

In this paper, we provide the first result on the optimal minimax guarantees for the excess risk of adversarially robust classification, under the Gaussian mixture model proposed by Schmidt et al. (2018).

Adversarial Robustness Classification +2

Learning Minimax Estimators via Online Learning

no code implementations 19 Jun 2020 Kartik Gupta, Arun Sai Suggala, Adarsh Prasad, Praneeth Netrapalli, Pradeep Ravikumar

We view the problem of designing minimax estimators as finding a mixed strategy Nash equilibrium of a zero-sum game.
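The game-theoretic viewpoint can be illustrated by solving a small zero-sum matrix game with a no-regret method; the multiplicative-weights loop below is a generic stand-in for the paper's online-learning machinery, not its actual algorithm:

```python
import numpy as np

def solve_zero_sum(A, T=5000, eta=0.05):
    """Approximate mixed Nash equilibrium of a zero-sum game with payoff
    matrix A (row player minimizes x^T A y, column player maximizes):
    multiplicative weights for the row player against best responses,
    returning the time-averaged strategies."""
    m, n = A.shape
    w = np.ones(m)
    x_avg = np.zeros(m)
    y_avg = np.zeros(n)
    for _ in range(T):
        x = w / w.sum()
        j = np.argmax(x @ A)         # column player's best response
        w *= np.exp(-eta * A[:, j])  # row player downweights costly rows
        x_avg += x
        y_avg[j] += 1.0
    return x_avg / T, y_avg / T

# Matching pennies: value 0, equilibrium is uniform play for both sides
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, y = solve_zero_sum(A)
print(x, y, x @ A @ y)  # both near (0.5, 0.5); game value near 0
```

The averaged strategies of no-regret dynamics converge to an approximate equilibrium, which is the mechanism the entry's estimator-design reduction exploits.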

Sub-Seasonal Climate Forecasting via Machine Learning: Challenges, Analysis, and Advances

no code implementations 14 Jun 2020 Sijie He, Xinyan Li, Timothy DelSole, Pradeep Ravikumar, Arindam Banerjee

Sub-seasonal climate forecasting (SSF) focuses on predicting key climate variables such as temperature and precipitation in the 2-week to 2-month time scales.

BIG-bench Machine Learning Feature Importance +1

Minimizing FLOPs to Learn Efficient Sparse Representations

1 code implementation ICLR 2020 Biswajit Paria, Chih-Kuan Yeh, Ian E. H. Yen, Ning Xu, Pradeep Ravikumar, Barnabás Póczos

Deep representation learning has become one of the most widely adopted approaches for visual search, recommendation, and identification.

Quantization Representation Learning +1

Certified Robustness to Label-Flipping Attacks via Randomized Smoothing

no code implementations ICML 2020 Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, J. Zico Kolter

Machine learning algorithms are known to be susceptible to data poisoning attacks, where an adversary manipulates the training data to degrade performance of the resulting classifier.

Data Poisoning General Classification +1

MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius

2 code implementations ICLR 2020 Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, Li-Wei Wang

Adversarial training is one of the most popular ways to learn robust models but is usually attack-dependent and time costly.

Game Design for Eliciting Distinguishable Behavior

no code implementations NeurIPS 2019 Fan Yang, Liu Leqi, Yifan Wu, Zachary C. Lipton, Pradeep Ravikumar, William W. Cohen, Tom Mitchell

The ability to infer latent psychological traits from human behavior is key to developing personalized human-interacting machine learning systems.

Automated Dependence Plots

2 code implementations 2 Dec 2019 David I. Inouye, Liu Leqi, Joon Sik Kim, Bryon Aragam, Pradeep Ravikumar

To address these drawbacks, we formalize a method for automating the selection of interesting PDPs and extend PDPs beyond showing single features to show the model response along arbitrary directions, for example in raw feature space or a latent space arising from some generative model.

Model Selection Selection bias

Optimal Analysis of Subset-Selection Based L_p Low Rank Approximation

no code implementations 30 Oct 2019 Chen Dan, Hong Wang, Hongyang Zhang, Yuchen Zhou, Pradeep Ravikumar

We show that this algorithm has an approximation ratio of $O((k+1)^{1/p})$ for $1\le p\le 2$ and $O((k+1)^{1-1/p})$ for $p\ge 2$.

On Completeness-aware Concept-Based Explanations in Deep Neural Networks

2 code implementations NeurIPS 2020 Chih-Kuan Yeh, Been Kim, Sercan O. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar

Next, we propose a concept discovery method that aims to infer a complete set of concepts that are additionally encouraged to be interpretable, which addresses the limitations of existing methods on concept explanations.

Learning Sparse Nonparametric DAGs

2 code implementations 29 Sep 2019 Xun Zheng, Chen Dan, Bryon Aragam, Pradeep Ravikumar, Eric P. Xing

We develop a framework for learning sparse nonparametric directed acyclic graphs (DAGs) from data.

Causal Discovery

On Concept-Based Explanations in Deep Neural Networks

no code implementations 25 Sep 2019 Chih-Kuan Yeh, Been Kim, Sercan Arik, Chun-Liang Li, Pradeep Ravikumar, Tomas Pfister

Next, we propose a concept discovery method that considers two additional constraints to encourage the interpretability of the discovered concepts.

Certified Robustness to Adversarial Label-Flipping Attacks via Randomized Smoothing

no code implementations 25 Sep 2019 Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, J. Zico Kolter

This paper considers label-flipping attacks, a type of data poisoning attack where an adversary relabels a small number of examples in a training set in order to degrade the performance of the resulting classifier.

Binary Classification Data Poisoning

A Unified Approach to Robust Mean Estimation

no code implementations 1 Jul 2019 Adarsh Prasad, Sivaraman Balakrishnan, Pradeep Ravikumar

Building on this connection, we provide a simple variant of recent computationally-efficient algorithms for mean estimation in Huber's model, which given our connection entails that the same efficient sample-pruning based estimator is simultaneously robust to heavy-tailed noise and Huber contamination.
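A classical member of this family of robust mean estimators is median-of-means, sketched below as a generic illustration; the paper's sample-pruning estimators are different:

```python
import numpy as np

def median_of_means(x, n_blocks):
    """Split the sample into blocks, average each block, take the median:
    robust to heavy tails and to a minority of contaminated blocks."""
    rng = np.random.default_rng(0)
    blocks = np.array_split(rng.permutation(x), n_blocks)
    return np.median([b.mean() for b in blocks])

rng = np.random.default_rng(42)
x = rng.standard_normal(1000) + 5.0   # true mean 5
x[:10] = 1e6                          # 1% gross corruption
print(median_of_means(x, n_blocks=50))  # close to 5, unlike x.mean()
```

Since at most 10 of the 50 blocks can contain a corrupted point, the median block mean is always a clean block's mean.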


no code implementations ICLR 2019 Chih-Kuan Yeh, Ian E. H. Yen, Hong-You Chen, Chun-Pei Yang, Shou-De Lin, Pradeep Ravikumar

State-of-the-art deep neural networks (DNNs) typically have tens of millions of parameters, which might not fit into the upper levels of the memory hierarchy, thus increasing the inference time and energy consumption significantly, and prohibiting their use on edge devices such as mobile phones.

Adaptive Hard Thresholding for Near-optimal Consistent Robust Regression

no code implementations 19 Mar 2019 Arun Sai Suggala, Kush Bhatia, Pradeep Ravikumar, Prateek Jain

We provide a nearly linear time estimator which consistently estimates the true regression vector, even with a $1-o(1)$ fraction of corruptions.
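The flavor of hard-thresholding-based robust regression can be sketched with a simple alternating trimmed-least-squares loop. This is an illustration of the general idea, not the paper's near-linear-time algorithm:

```python
import numpy as np

def trimmed_ls(X, y, n_out, n_iter=10):
    """Alternating scheme in the spirit of hard thresholding: fit least
    squares, drop the n_out largest residuals, refit on the rest."""
    keep = np.arange(len(y))
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        w, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        r = np.abs(y - X @ w)
        keep = np.argsort(r)[: len(y) - n_out]  # hard-threshold the residuals
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
y[:20] += 15.0                        # 10% adversarially shifted responses
w_hat = trimmed_ls(X, y, n_out=20)
print(np.max(np.abs(w_hat - w_true)))  # ~0: corruptions are pruned away
```

On noiseless inliers, once the trimmed set excludes every corrupted point, the least-squares refit is exact.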


On the (In)fidelity and Sensitivity for Explanations

2 code implementations 27 Jan 2019 Chih-Kuan Yeh, Cheng-Yu Hsieh, Arun Sai Suggala, David I. Inouye, Pradeep Ravikumar

We analyze optimal explanations with respect to both these measures, and while the optimal explanation for sensitivity is a vacuous constant explanation, the optimal explanation for infidelity is a novel combination of two popular explanation methods.

Towards Aggregating Weighted Feature Attributions

no code implementations 20 Jan 2019 Umang Bhatt, Pradeep Ravikumar, Jose M. F. Moura

Current approaches for explaining machine learning models fall into two distinct classes: antecedent event influence and value attribution.

Representer Point Selection for Explaining Deep Neural Networks

1 code implementation NeurIPS 2018 Chih-Kuan Yeh, Joon Sik Kim, Ian E. H. Yen, Pradeep Ravikumar

We propose to explain the predictions of a deep neural network, by pointing to the set of what we call representer points in the training set, for a given test point prediction.

Word Mover's Embedding: From Word2Vec to Document Embedding

1 code implementation EMNLP 2018 Lingfei Wu, Ian E. H. Yen, Kun Xu, Fangli Xu, Avinash Balakrishnan, Pin-Yu Chen, Pradeep Ravikumar, Michael J. Witbrock

While the celebrated Word2Vec technique yields semantically rich representations for individual words, there has been relatively less success in extending it to generate unsupervised sentence or document embeddings.

Document Embedding General Classification +5

Sample Complexity of Nonparametric Semi-Supervised Learning

no code implementations NeurIPS 2018 Chen Dan, Liu Leqi, Bryon Aragam, Pradeep Ravikumar, Eric P. Xing

We study the sample complexity of semi-supervised learning (SSL) and introduce new assumptions based on the mismatch between a mixture model learned from unlabeled data and the true mixture model induced by the (unknown) class conditional distributions.

Binary Classification Classification +2

Loss Decomposition for Fast Learning in Large Output Spaces

no code implementations ICML 2018 Ian En-Hsu Yen, Satyen Kale, Felix Yu, Daniel Holtmann-Rice, Sanjiv Kumar, Pradeep Ravikumar

For problems with large output spaces, evaluation of the loss function and its gradient are expensive, typically taking linear time in the size of the output space.

Word Embeddings

Deep Density Destructors

1 code implementation ICML 2018 David Inouye, Pradeep Ravikumar

Unlike Gaussianization, our destructive transformation has the elegant property that the density function is equal to the absolute value of the Jacobian determinant.

Density Estimation
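The quoted property, density equal to the absolute Jacobian determinant, is easy to verify in one dimension, where the destructive transform is just the CDF (which maps samples to Uniform[0, 1]):

```python
import numpy as np

# 1-D illustration of the destructor property p(x) = |det J_D(x)|:
# the CDF of Exp(1) "destroys" its density, and its derivative
# (the 1x1 Jacobian) recovers the density exactly.
cdf = lambda x: 1.0 - np.exp(-x)      # destructive transform D
pdf = lambda x: np.exp(-x)            # density of Exp(1)

x = np.linspace(0.1, 5.0, 50)
eps = 1e-6
jac = (cdf(x + eps) - cdf(x - eps)) / (2 * eps)  # numerical |D'(x)|
print(np.max(np.abs(jac - pdf(x))))  # ~0: density equals the Jacobian
```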

Revisiting Adversarial Risk

no code implementations 7 Jun 2018 Arun Sai Suggala, Adarsh Prasad, Vaishnavh Nagarajan, Pradeep Ravikumar

Based on the modified definition, we show that there is no trade-off between adversarial and standard accuracies; there exist classifiers that are robust and achieve high standard accuracy.

Image Classification

Binary Classification with Karmic, Threshold-Quasi-Concave Metrics

no code implementations ICML 2018 Bowei Yan, Oluwasanmi Koyejo, Kai Zhong, Pradeep Ravikumar

Complex performance measures, beyond the popular measure of accuracy, are increasingly being used in the context of binary classification.

Binary Classification Classification +1

Robust Nonparametric Regression under Huber's $ε$-contamination Model

no code implementations 26 May 2018 Simon S. Du, Yining Wang, Sivaraman Balakrishnan, Pradeep Ravikumar, Aarti Singh

We first show that a simple local binning median step can effectively remove the adversarial noise, and that this median estimator is minimax optimal up to absolute constants over the Hölder function class with smoothness parameters smaller than or equal to 1.
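The local binning median step can be sketched directly: partition the covariate space into bins and output the median response in each bin, which a small fraction of arbitrary corruptions cannot move. A generic illustration with hypothetical bin counts:

```python
import numpy as np

def binned_median(x, y, n_bins):
    """Median of responses in each bin of [0, 1]: a robust local estimate
    of f(x) that survives a minority of arbitrarily corrupted y-values."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    return np.array([np.median(y[idx == b]) for b in range(n_bins)])

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 400)
y = x**2                       # noiseless responses from f(x) = x^2
y[:40] = 100.0                 # 10% of points arbitrarily corrupted
centers = np.linspace(0.05, 0.95, 10)
est = binned_median(x, y, n_bins=10)
print(np.max(np.abs(est - centers**2)))  # small: medians ignore the outliers
```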


Robust Estimation via Robust Gradient Estimation

no code implementations 19 Feb 2018 Adarsh Prasad, Arun Sai Suggala, Sivaraman Balakrishnan, Pradeep Ravikumar

We provide a new computationally-efficient class of estimators for risk minimization.


D2KE: From Distance to Kernel and Embedding

no code implementations 14 Feb 2018 Lingfei Wu, Ian En-Hsu Yen, Fangli Xu, Pradeep Ravikumar, Michael Witbrock

For many machine learning problem settings, particularly with structured inputs such as sequences or sets of objects, a distance measure between inputs can be specified more naturally than a feature representation.

Time Series Analysis

Identifiability of Nonparametric Mixture Models and Bayes Optimal Clustering

no code implementations 12 Feb 2018 Bryon Aragam, Chen Dan, Eric P. Xing, Pradeep Ravikumar

Motivated by problems in data clustering, we establish general conditions under which families of nonparametric mixture models are identifiable, by introducing a novel framework involving clustering overfitted parametric (i.e., misspecified) mixture models.

Clustering Nonparametric Clustering

Doubly Greedy Primal-Dual Coordinate Descent for Sparse Empirical Risk Minimization

no code implementations ICML 2017 Qi Lei, Ian En-Hsu Yen, Chao-yuan Wu, Inderjit S. Dhillon, Pradeep Ravikumar

We consider the popular problem of sparse empirical risk minimization with linear predictors and a large number of both features and observations.

Latent Feature Lasso

no code implementations ICML 2017 Ian En-Hsu Yen, Wei-Cheng Lee, Sung-En Chang, Arun Sai Suggala, Shou-De Lin, Pradeep Ravikumar

The latent feature model (LFM), proposed in (Griffiths & Ghahramani, 2005), but possibly with earlier origins, is a generalization of a mixture model, where each instance is generated not from a single latent class but from a combination of latent features.

Ordinal Graphical Models: A Tale of Two Approaches

no code implementations ICML 2017 Arun Sai Suggala, Eunho Yang, Pradeep Ravikumar

While there has been some work on tractable approximations, these do not come with strong statistical guarantees, and moreover are relatively computationally expensive.

Vocal Bursts Valence Prediction

Online Classification with Complex Metrics

no code implementations 23 Oct 2016 Bowei Yan, Oluwasanmi Koyejo, Kai Zhong, Pradeep Ravikumar

The proposed framework is general, as it applies to both batch and online learning, and to both linear and non-linear models.

Binary Classification Classification +1

A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution

1 code implementation 31 Aug 2016 David I. Inouye, Eunho Yang, Genevera I. Allen, Pradeep Ravikumar

The Poisson distribution has been widely studied and used for modeling univariate count-valued data.

Kernel Ridge Regression via Partitioning

no code implementations 5 Aug 2016 Rashish Tandon, Si Si, Pradeep Ravikumar, Inderjit Dhillon

In this paper, we investigate a divide and conquer approach to Kernel Ridge Regression (KRR).

Clustering Generalization Bounds +1
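The divide-and-conquer recipe is: split the data, solve KRR on each part, and average the partition predictions. A hedged numpy sketch; the kernel bandwidth and ridge parameter are illustrative choices, not the paper's:

```python
import numpy as np

def rbf(X1, X2, gamma=10.0):
    """Gaussian (RBF) kernel matrix between two sets of points."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def dc_krr_predict(X, y, X_test, n_parts=4, lam=1e-3):
    """Divide-and-conquer KRR: fit ridge-regularized kernel regression on
    each random partition and average the partition predictions."""
    rng = np.random.default_rng(0)
    perm = rng.permutation(len(X))
    preds = []
    for part in np.array_split(perm, n_parts):
        Xp, yp = X[part], y[part]
        alpha = np.linalg.solve(rbf(Xp, Xp) + lam * np.eye(len(part)), yp)
        preds.append(rbf(X_test, Xp) @ alpha)
    return np.mean(preds, axis=0)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (400, 1))
y = np.sin(2 * np.pi * X[:, 0])
X_test = np.linspace(0.1, 0.9, 9)[:, None]
err = np.max(np.abs(dc_krr_predict(X, y, X_test) - np.sin(2 * np.pi * X_test[:, 0])))
print(err)  # modest error despite each part solving a much smaller system
```

Each partition solves a linear system quarter the size, which is the source of the computational savings.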

Generalized Root Models: Beyond Pairwise Graphical Models for Univariate Exponential Families

1 code implementation 2 Jun 2016 David I. Inouye, Pradeep Ravikumar, Inderjit S. Dhillon

As in the recent work on square root graphical (SQR) models [Inouye et al. 2016], which was restricted to pairwise dependencies, we give the conditions on the parameters that are needed for normalization, using radial conditionals similar to the pairwise case [Inouye et al. 2016].

PD-Sparse : A Primal and Dual Sparse Approach to Extreme Multiclass and Multilabel Classification

1 code implementation ICML 2016 Ian En-Hsu Yen, Xiangru Huang, Pradeep Ravikumar, Kai Zhong, Inderjit S. Dhillon

In this work, we show that a margin-maximizing loss with l1 penalty, in the case of Extreme Classification, yields an extremely sparse solution both in primal and in dual without sacrificing the expressive power of the predictor.

General Classification Text Classification

Square Root Graphical Models: Multivariate Generalizations of Univariate Exponential Families that Permit Positive Dependencies

no code implementations 11 Mar 2016 David I. Inouye, Pradeep Ravikumar, Inderjit S. Dhillon

With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix, a condition akin to the positive definiteness of the Gaussian covariance matrix.

Exponential Family Matrix Completion under Structural Constraints

no code implementations 15 Sep 2015 Suriya Gunasekar, Pradeep Ravikumar, Joydeep Ghosh

We consider the matrix completion problem of recovering a structured matrix from noisy and partial measurements.

Matrix Completion

Vector-Space Markov Random Fields via Exponential Families

1 code implementation 19 May 2015 Wesley Tansey, Oscar Hernan Madrid Padilla, Arun Sai Suggala, Pradeep Ravikumar

Specifically, VS-MRFs are the joint graphical model distributions where the node-conditional distributions belong to generic exponential families with general vector space domains.

Optimal Decision-Theoretic Classification Using Non-Decomposable Performance Metrics

no code implementations 7 May 2015 Nagarajan Natarajan, Oluwasanmi Koyejo, Pradeep Ravikumar, Inderjit S. Dhillon

We provide a general theoretical analysis of expected out-of-sample utility, also referred to as decision-theoretic classification, for non-decomposable binary classification metrics such as F-measure and Jaccard coefficient.

Binary Classification Classification +1
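A plug-in approach to decision-theoretic classification thresholds estimated class probabilities to maximize the target metric. A toy sketch for F-measure (the entry provides the general theory; this specific search procedure is only an illustration):

```python
import numpy as np

def f1(y_true, y_pred):
    """F1 score computed from true/false positives and false negatives."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def best_f1_threshold(p, y):
    """Plug-in rule for a non-decomposable metric: threshold the estimated
    class probabilities, searching the finite set of observed values."""
    return max(np.unique(p), key=lambda t: f1(y, (p >= t).astype(int)))

p = np.array([0.1, 0.2, 0.4, 0.45, 0.6, 0.8, 0.9])
y = np.array([0,   0,   0,   1,    1,   1,   1  ])
t = best_f1_threshold(p, y)
print(t, f1(y, (p >= t).astype(int)))  # threshold 0.45 achieves F1 = 1.0
```

For F-measure, the Bayes-optimal rule is known to be a threshold on the class probability, which is why a one-dimensional search suffices here.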

On the Information Theoretic Limits of Learning Ising Models

no code implementations NeurIPS 2014 Karthikeyan Shanmugam, Rashish Tandon, Alexandros G. Dimakis, Pradeep Ravikumar

We provide a general framework for computing lower-bounds on the sample complexity of recovering the underlying graphs of Ising models, given i.i.d. samples.

A General Framework for Mixed Graphical Models

no code implementations 2 Nov 2014 Eunho Yang, Pradeep Ravikumar, Genevera I. Allen, Yulia Baker, Ying-Wooi Wan, Zhandong Liu

"Mixed Data" comprising a large number of heterogeneous variables (e.g., count, binary, continuous, skewed continuous, among other data types) are prevalent in varied areas such as genomics and proteomics, imaging genetics, national security, social networking, and Internet advertising.

Proximal Quasi-Newton for Computationally Intensive L1-regularized M-estimators

no code implementations NeurIPS 2014 Kai Zhong, Ian E. H. Yen, Inderjit S. Dhillon, Pradeep Ravikumar

We consider the class of optimization problems arising from computationally intensive L1-regularized M-estimators, where the function or gradient values are very expensive to compute.

General Classification Structured Prediction

Sparse Inverse Covariance Matrix Estimation Using Quadratic Approximation

no code implementations NeurIPS 2011 Cho-Jui Hsieh, Matyas A. Sustik, Inderjit S. Dhillon, Pradeep Ravikumar

The L1-regularized Gaussian maximum likelihood estimator (MLE) has been shown to have strong statistical guarantees in recovering a sparse inverse covariance matrix, or alternatively the underlying graph structure of a Gaussian Markov Random Field, from very limited samples.

On Graphical Models via Univariate Exponential Family Distributions

no code implementations 17 Jan 2013 Eunho Yang, Pradeep Ravikumar, Genevera I. Allen, Zhandong Liu

Undirected graphical models, or Markov networks, are a popular class of statistical models, used in a wide variety of applications.
