no code implementations • 4 Nov 2024 • Aahlad Puli, Nhi Nguyen, Rajesh Ranganath

This definition implies, in contrast to encoding explanations, that non-encoding explanations contain all the informative inputs used to produce the explanation, giving them a "what you see is what you get" property, which makes them transparent and simple to use.

1 code implementation • 1 Nov 2024 • Adriel Saporta, Aahlad Puli, Mark Goldstein, Rajesh Ranganath

To develop Symile's objective, we derive a lower bound on total correlation, and show that Symile representations for any set of modalities form a sufficient statistic for predicting the remaining modalities.

no code implementations • 10 Jul 2024 • Raghav Singhal, Mark Goldstein, Rajesh Ranganath

The diffusion processes that are tractable center on linear processes with a Gaussian stationary distribution.

1 code implementation • 19 Jun 2024 • Lily H. Zhang, Rajesh Ranganath, Arya Tafvizi

Generative models of language exhibit impressive capabilities but still place non-negligible probability mass over undesirable outputs.

no code implementations • 6 Jun 2024 • Chen-Yu Yen, Raghav Singhal, Umang Sharma, Rajesh Ranganath, Sumit Chopra, Lerrel Pinto

Traditionally, to accelerate an MR scan, the method of choice is image reconstruction from under-sampled k-space data.

no code implementations • 29 May 2024 • Angelica Chen, Sadhika Malladi, Lily H. Zhang, Xinyi Chen, Qiuyi Zhang, Rajesh Ranganath, Kyunghyun Cho

Preference learning algorithms (e.g., RLHF and DPO) are frequently used to steer LLMs to produce generations that are more preferred by humans, but our understanding of their inner workings is still limited.

no code implementations • 28 Feb 2024 • Laura Manduchi, Kushagra Pandey, Robert Bamler, Ryan Cotterell, Sina Däubener, Sophie Fellenz, Asja Fischer, Thomas Gärtner, Matthias Kirchler, Marius Kloft, Yingzhen Li, Christoph Lippert, Gerard de Melo, Eric Nalisnick, Björn Ommer, Rajesh Ranganath, Maja Rudolph, Karen Ullrich, Guy Van Den Broeck, Julia E Vogt, Yixin Wang, Florian Wenzel, Frank Wood, Stephan Mandt, Vincent Fortuin

The field of deep generative modeling has grown rapidly and consistently over the years.

no code implementations • 16 Jan 2024 • Abhijith Gandrakota, Lily Zhang, Aahlad Puli, Kyle Cranmer, Jennifer Ngadiuba, Rajesh Ranganath, Nhan Tran

Anomaly, or out-of-distribution, detection is a promising tool for aiding discoveries of new particles or processes in particle physics.

no code implementations • 2 Dec 2023 • Wouter A. C. van Amsterdam, Nan van Geloven, Jesse H. Krijthe, Rajesh Ranganath, Giovanni Ciná

These models are harmful self-fulfilling prophecies: their deployment harms a group of patients but the worse outcome of these patients does not invalidate the predictive power of the model.

1 code implementation • 21 Nov 2023 • Boyang Yu, Aakash Kaku, Kangning Liu, Avinash Parnandi, Emily Fokas, Anita Venkatesan, Natasha Pandit, Rajesh Ranganath, Heidi Schambra, Carlos Fernandez-Granda

We applied the COBRA score to address a key limitation of current clinical evaluation of upper-body impairment in stroke patients.

1 code implementation • 5 Oct 2023 • Michael S. Albergo, Mark Goldstein, Nicholas M. Boffi, Rajesh Ranganath, Eric Vanden-Eijnden

In this work, using the framework of stochastic interpolants, we formalize how to \textit{couple} the base and the target densities, whereby samples from the base are computed conditionally given samples from the target in a way that is different from (but does not preclude) incorporating information about class labels or continuous embeddings.
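
The coupling idea above can be illustrated with a minimal sketch (all names and the specific Gaussian coupling here are illustrative choices, not the paper's construction): a linear interpolant connects base and target samples, and drawing the base point conditionally on the target, rather than independently, shortens the paths the interpolant has to traverse.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_interpolant(x0, x1, t):
    """One simple interpolant path: x_t = (1 - t) * x0 + t * x1."""
    return (1.0 - t) * x0 + t * x1

# Target samples (stand-in for a data batch).
x1 = rng.normal(loc=3.0, scale=1.0, size=(1024, 2))

# Uncoupled base: drawn independently of the target.
x0_indep = rng.normal(size=(1024, 2))

# Coupled base: drawn conditionally given the target sample -- here a
# Gaussian centered on each target point, one simple choice of coupling.
x0_coupled = x1 + 0.1 * rng.normal(size=(1024, 2))

# Midpoint of each path; the coupling keeps base-to-target paths short.
xt = linear_interpolant(x0_coupled, x1, 0.5)
len_indep = np.linalg.norm(x1 - x0_indep, axis=1).mean()
len_coupled = np.linalg.norm(x1 - x0_coupled, axis=1).mean()
```

With the coupled base, the average base-to-target distance is far smaller than with the independent base, which is the practical benefit a coupling is after.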

no code implementations • 24 Aug 2023 • Aahlad Puli, Lily Zhang, Yoav Wald, Rajesh Ranganath

However, even when the stable feature determines the label in the training distribution and the shortcut does not provide any additional information, like in perception tasks, default-ERM still exhibits shortcut learning.

1 code implementation • 8 Aug 2023 • Rhys Compton, Lily Zhang, Aahlad Puli, Rajesh Ranganath

In machine learning, incorporating more data is often seen as a reliable strategy for improving model performance; this work challenges that notion by demonstrating that adding external datasets can, in many cases, hurt the resulting model's performance.

1 code implementation • 1 Jun 2023 • Shi-ang Qi, Neeraj Kumar, Mahtab Farrokh, Weijie Sun, Li-Hao Kuan, Rajesh Ranganath, Ricardo Henao, Russell Greiner

One straightforward metric to evaluate a survival prediction model is based on the Mean Absolute Error (MAE) -- the average of the absolute difference between the time predicted by the model and the true event time, over all subjects.
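
The MAE described above is straightforward for subjects whose event was observed; a minimal sketch (function name and censoring handling are illustrative assumptions, not the paper's estimator):

```python
import numpy as np

def survival_mae(pred_times, event_times, event_observed):
    """Mean absolute error over subjects whose event was observed.

    Censored subjects are simply excluded here; handling censoring
    properly is exactly the complication that motivates the paper's
    MAE variants.
    """
    pred = np.asarray(pred_times, dtype=float)
    true = np.asarray(event_times, dtype=float)
    mask = np.asarray(event_observed, dtype=bool)
    return np.abs(pred[mask] - true[mask]).mean()

# |12 - 10| and |30 - 25| average to 3.5; the censored third subject is skipped.
mae = survival_mae([12.0, 30.0, 7.0], [10.0, 25.0, 9.0], [True, True, False])
```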

no code implementations • 22 Mar 2023 • Yuxuan Hu, Albert Lui, Mark Goldstein, Mukund Sudarshan, Andrea Tinsay, Cindy Tsui, Samuel Maidman, John Medamana, Neil Jethani, Aahlad Puli, Vuthy Nguy, Yindalon Aphinyanaphongs, Nicholas Kiefer, Nathaniel Smilowitz, James Horowitz, Tania Ahuja, Glenn I Fishman, Judith Hochman, Stuart Katz, Samuel Bernard, Rajesh Ranganath

We developed a deep learning-based risk stratification tool, called CShock, for patients admitted into the cardiac ICU with acute decompensated heart failure and/or myocardial infarction to predict onset of cardiogenic shock.

no code implementations • 24 Feb 2023 • Neil Jethani, Adriel Saporta, Rajesh Ranganath

Feature attribution methods identify which features of an input most influence a model's output.

1 code implementation • 18 Feb 2023 • Nihal Murali, Aahlad Puli, Ke Yu, Rajesh Ranganath, Kayhan Batmanghelich

(3) We empirically show that the harmful spurious features can be detected by observing the learning dynamics of the DNN's early layers.

no code implementations • 14 Feb 2023 • Raghav Singhal, Mark Goldstein, Rajesh Ranganath

For example, extending the inference process with auxiliary variables leads to improved sample quality.

no code implementations • 8 Feb 2023 • Lily H. Zhang, Rajesh Ranganath

The detection of shared-nuisance out-of-distribution (SN-OOD) inputs is particularly relevant in real-world applications, as anomalies and in-distribution inputs tend to be captured in the same settings during deployment.

no code implementations • 27 Jan 2023 • Raghav Singhal, Mukund Sudarshan, Anish Mahishi, Sri Kaushik, Luke Ginocchio, Angela Tong, Hersh Chandarana, Daniel K. Sodickson, Rajesh Ranganath, Sumit Chopra

We hypothesise that the disease classification task can be solved using a very small tailored subset of k-space data, compared to image reconstruction.

no code implementations • 4 Oct 2022 • Aahlad Puli, Nitish Joshi, Yoav Wald, He He, Rajesh Ranganath

In prediction tasks, there exist features that are related to the label in the same way across different settings for that task; these are semantic features or semantics.

no code implementations • 15 Sep 2022 • Wouter A. C. van Amsterdam, Pim A. de Jong, Joost J. C. Verhoeff, Tim Leiner, Rajesh Ranganath

In cancer research there is much interest in building and validating models that predict outcomes to support treatment decisions.

1 code implementation • 23 Aug 2022 • Xintian Han, Mark Goldstein, Rajesh Ranganath

Survival MDN applies an invertible positive function to the output of Mixture Density Networks (MDNs).

no code implementations • 18 Aug 2022 • Mukund Sudarshan, Aahlad Manas Puli, Wesley Tansey, Rajesh Ranganath

DIET tests the marginal independence of two random variables: $F(x \mid z)$ and $F(y \mid z)$ where $F(\cdot \mid z)$ is a conditional cumulative distribution function (CDF).
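
The CDF-transform idea can be sketched on toy data (a simplified illustration, not the paper's test: the conditioning variable is assumed discrete so the conditional CDFs can be estimated by within-group ranks, and plain correlation stands in for a proper marginal-independence statistic):

```python
import numpy as np

rng = np.random.default_rng(1)

def within_group_cdf(v, z):
    """Empirical conditional CDF F(v | z) for discrete z: within each
    group z = k, rank each value among its group (rank / group size)."""
    out = np.empty_like(v, dtype=float)
    for k in np.unique(z):
        idx = np.where(z == k)[0]
        ranks = np.argsort(np.argsort(v[idx])) + 1
        out[idx] = ranks / len(idx)
    return out

# Toy data: x and y both depend on discrete z but are conditionally independent.
n = 4000
z = rng.integers(0, 3, size=n)
x = z + rng.normal(size=n)
y = 2 * z + rng.normal(size=n)

u = within_group_cdf(x, z)   # F(x | z)
v = within_group_cdf(y, z)   # F(y | z)

# Raw x and y correlate strongly through z, while the CDF-transformed
# variables are approximately uncorrelated under conditional independence.
r_raw = np.corrcoef(x, y)[0, 1]
r_cdf = np.corrcoef(u, v)[0, 1]
```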

1 code implementation • 23 Jun 2022 • Lily H. Zhang, Veronica Tozzo, John M. Higgins, Rajesh Ranganath

However, we show that existing permutation invariant architectures, Deep Sets and Set Transformer, can suffer from vanishing or exploding gradients when they are deep.

no code implementations • 5 May 2022 • Neil Jethani, Aahlad Puli, Hao Zhang, Leonid Garber, Lior Jankelson, Yindalon Aphinyanaphongs, Rajesh Ranganath

We found ECG-based assessment outperforms the ADA Risk test, achieving a higher area under the curve (0.80 vs. 0.68) and positive predictive value (13% vs. 9%) -- 2.6 times the prevalence of diabetes in the cohort.

no code implementations • 2 Dec 2021 • David Brandfonbrener, William F. Whitney, Rajesh Ranganath, Joan Bruna

We introduce quantile filtered imitation learning (QFIL), a novel policy improvement operator designed for offline reinforcement learning.

1 code implementation • 1 Dec 2021 • Mark Goldstein, Jörn-Henrik Jacobsen, Olina Chau, Adriel Saporta, Aahlad Puli, Rajesh Ranganath, Andrew C. Miller

Enforcing such independencies requires nuisances to be observed during training.

1 code implementation • NeurIPS 2021 • Xintian Han, Mark Goldstein, Aahlad Puli, Thomas Wies, Adler J Perotte, Rajesh Ranganath

When the loss is proper, we show that the games always have the true failure and censoring distributions as a stationary point.

5 code implementations • ICLR 2022 • Neil Jethani, Mukund Sudarshan, Ian Covert, Su-In Lee, Rajesh Ranganath

Shapley values are widely used to explain black-box models, but they are costly to calculate because they require many model evaluations.
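
The cost referred to above comes from the many model evaluations needed per attribution; a standard permutation-sampling estimator (a generic sketch of Monte Carlo Shapley estimation, not the paper's method) makes the evaluation count explicit:

```python
import numpy as np

rng = np.random.default_rng(0)

def shapley_permutation(model, x, baseline, n_perm=200):
    """Monte Carlo Shapley values via random feature orderings.

    Each permutation adds features to the baseline one at a time and
    credits each feature with the resulting change in model output --
    so every permutation costs d + 1 model evaluations.
    """
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_perm):
        order = rng.permutation(d)
        current = baseline.copy()
        prev = model(current)
        for j in order:
            current[j] = x[j]
            new = model(current)
            phi[j] += new - prev
            prev = new
    return phi / n_perm

# A linear model has closed-form Shapley values: w_j * (x_j - baseline_j).
w = np.array([1.0, -2.0, 0.5])
model = lambda v: float(w @ v)
x = np.array([1.0, 1.0, 2.0])
baseline = np.zeros(3)

phi = shapley_permutation(model, x, baseline)
# For this linear model the estimate is exact: [1.0, -2.0, 1.0]
```

By the efficiency property, the attributions always sum to `model(x) - model(baseline)`; for the linear model every ordering gives identical marginal contributions, so the Monte Carlo estimate has no variance.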

no code implementations • 14 Jul 2021 • Lily H. Zhang, Mark Goldstein, Rajesh Ranganath

Deep generative models (DGMs) seem a natural fit for detecting out-of-distribution (OOD) inputs, but such models have been shown to assign higher probabilities or densities to OOD images than images from the training distribution.

1 code implementation • ICLR 2022 • Aahlad Puli, Lily H. Zhang, Eric K. Oermann, Rajesh Ranganath

NURD finds a representation from this set that is most informative of the label under the nuisance-randomized distribution, and we prove that this representation achieves the highest performance regardless of the nuisance-label relationship.

1 code implementation • NeurIPS 2021 • David Brandfonbrener, William F. Whitney, Rajesh Ranganath, Joan Bruna

In addition, we hypothesize that the strong performance of the one-step algorithm is due to a combination of favorable structure in the environment and behavior policy.

1 code implementation • 2 Mar 2021 • Neil Jethani, Mukund Sudarshan, Yindalon Aphinyanaphongs, Rajesh Ranganath

While the need for interpretable machine learning has been established, many common approaches are slow, lack fidelity, or hard to evaluate.

no code implementations • NeurIPS 2020 • Aahlad Puli, Adler J. Perotte, Rajesh Ranganath

Causal inference relies on two fundamental assumptions: ignorability and positivity.

1 code implementation • NeurIPS 2020 • Mark Goldstein, Xintian Han, Aahlad Puli, Adler J. Perotte, Rajesh Ranganath

A survival model's calibration can be measured using, for instance, distributional calibration (D-CALIBRATION) [Haider et al., 2020] which computes the squared difference between the observed and predicted number of events within different time intervals.
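
Following the description above, a minimal sketch of this kind of calibration score (a simplified illustration following the sentence's phrasing, not the exact D-Calibration statistic, and ignoring censoring):

```python
import numpy as np

def d_calibration(event_times, predicted_cdf, bins):
    """Sum of squared differences between observed and model-predicted
    event counts per time interval.

    `predicted_cdf(t)` is the model's probability of an event by time t,
    averaged over subjects; censoring is ignored in this sketch.
    """
    times = np.asarray(event_times, dtype=float)
    n = len(times)
    score = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        observed = np.sum((times >= lo) & (times < hi))
        predicted = n * (predicted_cdf(hi) - predicted_cdf(lo))
        score += (observed - predicted) ** 2
    return score

# Perfectly calibrated toy case: events spread uniformly on [0, 1) and a
# uniform predicted CDF give a score of (numerically) zero.
times = np.arange(0.05, 1.0, 0.1)            # 10 evenly spread event times
edges = np.linspace(0.0, 1.0, 6)             # five equal time intervals
score = d_calibration(times, lambda t: min(t, 1.0), edges)
```

A miscalibrated predictor (e.g., a CDF of t squared on the same data) yields a strictly positive score.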

no code implementations • NeurIPS 2020 • Aahlad Manas Puli, Rajesh Ranganath

Causal effect estimation relies on separating the variation in the outcome into parts due to the treatment and due to the confounders.

no code implementations • 23 Sep 2020 • Irene Y. Chen, Shalmali Joshi, Marzyeh Ghassemi, Rajesh Ranganath

Machine learning can be used to make sense of healthcare data.

1 code implementation • NeurIPS 2020 • Mukund Sudarshan, Wesley Tansey, Rajesh Ranganath

Predictive modeling often uses black box machine learning methods, such as deep neural networks, to achieve state-of-the-art performance.

1 code implementation • 27 Jun 2020 • David Brandfonbrener, William F. Whitney, Rajesh Ranganath, Joan Bruna

We show that this discrepancy is due to the \emph{action-stability} of their objectives.

no code implementations • 9 Jan 2020 • Amelia J. Averitt, Natnicha Vanitchanant, Rajesh Ranganath, Adler J. Perotte

Effect estimates, such as the average treatment effect (ATE), are then estimated as expectations under the reweighted or matched distribution, P.
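
The "expectation under the reweighted distribution" can be made concrete with the standard inverse-propensity-weighted (IPW) ATE estimator (a textbook illustration on synthetic data with known propensities, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

def ipw_ate(y, t, propensity):
    """ATE as an expectation under the inverse-propensity-reweighted
    distribution: treated and control means, each reweighted by 1/e(x)."""
    y, t, e = (np.asarray(a, dtype=float) for a in (y, t, propensity))
    return np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))

# Toy data: a confounder x drives both treatment and outcome; true ATE = 2.
n = 20000
x = rng.normal(size=n)
e = 1.0 / (1.0 + np.exp(-0.5 * x))   # true propensity P(T = 1 | x)
t = rng.binomial(1, e)
y = x + 2.0 * t + rng.normal(size=n)

naive = y[t == 1].mean() - y[t == 0].mean()   # confounded: overshoots 2
ate = ipw_ate(y, t, e)                        # reweighted: close to 2
```

The naive difference in means is biased upward because treated subjects have larger x; reweighting by the propensity removes that bias.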

1 code implementation • NeurIPS 2019 • Dieterich Lawson, George Tucker, Bo Dai, Rajesh Ranganath

Motivated by this, we consider the sampler-induced distribution as the model of interest and maximize the likelihood of this model.

no code implementations • 25 Sep 2019 • Mukund Sudarshan, Aahlad Manas Puli, Lakshmi Subramanian, Sriram Sankararaman, Rajesh Ranganath

We show that f-divergences provide a broad class of proper test statistics.

no code implementations • 25 Sep 2019 • Mark Goldstein*, Xintian Han*, Rajesh Ranganath

GATO is constructed so that part of its hidden state does not have vanishing gradients, regardless of sequence length.

no code implementations • 2 Aug 2019 • Gemma E. Moran, David M. Blei, Rajesh Ranganath

However, PPCs use the data twice -- both to calculate the posterior predictive and to evaluate it -- which can lead to overconfident assessments of the quality of a model.

no code implementations • 8 Jul 2019 • Aahlad Manas Puli, Rajesh Ranganath

Causal effect estimation relies on separating the variation in the outcome into parts due to the treatment and due to the confounders.

no code implementations • 2 Jul 2019 • Matthew B. A. McDermott, Shirly Wang, Nikki Marinsek, Rajesh Ranganath, Marzyeh Ghassemi, Luca Foschini

Machine learning algorithms designed to characterize, monitor, and intervene on human health (ML4H) are expected to perform safely and reliably when operating at scale, potentially outside strict human supervision.

no code implementations • 13 May 2019 • Xintian Han, Yuxuan Hu, Luca Foschini, Larry Chinitz, Lior Jankelson, Rajesh Ranganath

For this model, we utilized a new technique to generate smoothed examples to produce signals that are 1) indistinguishable to cardiologists from the original examples and 2) incorrectly classified by the neural network.

2 code implementations • 10 Apr 2019 • Kexin Huang, Jaan Altosaar, Rajesh Ranganath

Clinical notes contain information about patients that goes beyond structured data like lab values and medications.

no code implementations • 9 Apr 2019 • Raghav Singhal, Xintian Han, Saad Lahlou, Rajesh Ranganath

We introduce kernelized complete conditional Stein discrepancies (KCC-SDs).

no code implementations • ICLR Workshop DeepGenStruct 2019 • Dieterich Lawson, George Tucker, Bo Dai, Rajesh Ranganath

The success of enriching the variational family with auxiliary latent variables motivates applying the same techniques to the generative model.

no code implementations • 25 Mar 2019 • Zenna Tavares, Xin Zhang, Edgar Minaysan, Javier Burroni, Rajesh Ranganath, Armando Solar Lezama

The need to condition on distributional properties such as expectation, variance, and entropy arises in algorithmic fairness, model simplification, robustness, and many other areas.

no code implementations • 8 Mar 2019 • Fredrik D. Johansson, David Sontag, Rajesh Ranganath

In this work, we give generalization bounds for unsupervised domain adaptation that hold for any representation function by acknowledging the cost of non-invertibility.

1 code implementation • 7 Mar 2019 • Da Tang, Rajesh Ranganath

Unlike traditional natural gradients for variational inference, this natural gradient accounts for the relationship between model parameters and variational parameters.

no code implementations • 16 Jan 2019 • Zenna Tavares, Javier Burroni, Edgar Minaysan, Armando Solar Lezama, Rajesh Ranganath

We develop a likelihood free inference procedure for conditioning a probabilistic model on a predicate.

no code implementations • 1 Jun 2018 • Marzyeh Ghassemi, Tristan Naumann, Peter Schulam, Andrew L. Beam, Irene Y. Chen, Rajesh Ranganath

Modern electronic health records (EHRs) provide data to answer clinically meaningful questions.

no code implementations • 21 May 2018 • Rajesh Ranganath, Adler Perotte

Together, these assumptions lead to a confounder estimator regularized by mutual information.

no code implementations • ICML 2018 • Adji B. Dieng, Rajesh Ranganath, Jaan Altosaar, David M. Blei

On the Penn Treebank, the method with Noisin more quickly reaches state-of-the-art performance.

no code implementations • ICLR 2018 • Adji B. Dieng, Jaan Altosaar, Rajesh Ranganath, David M. Blei

We develop a noise-based regularization method for RNNs.

no code implementations • NeurIPS 2017 • Adji Bousso Dieng, Dustin Tran, Rajesh Ranganath, John Paisley, David Blei

In this paper we propose CHIVI, a black-box variational inference algorithm that minimizes $D_{\chi}(p || q)$, the $\chi$-divergence from $p$ to $q$.
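
For context (this is the standard definition of the order-2 χ-divergence, stated here as background rather than taken from the listing, with $z$ denoting the latent variables):

```latex
D_{\chi^2}(p \,\|\, q) \;=\; \mathbb{E}_{q(z)}\!\left[\left(\frac{p(z \mid x)}{q(z)}\right)^{2}\right] - 1
```

Because the expectation is taken under $q$, minimizing this divergence penalizes $q$ for placing too little mass where $p$ is large, favoring mass-covering approximations, in contrast to the mode-seeking behavior of minimizing $\mathrm{KL}(q \,\|\, p)$.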

1 code implementation • 31 May 2017 • Christian A. Naesseth, Scott W. Linderman, Rajesh Ranganath, David M. Blei

The success of variational approaches depends on (i) formulating a flexible parametric family of distributions, and (ii) optimizing the parameters to find the member of this family that most closely approximates the exact posterior.

1 code implementation • 24 May 2017 • Jaan Altosaar, Rajesh Ranganath, David M. Blei

Consequently, PVI is less sensitive to initialization and optimization quirks and finds better local optima.

no code implementations • NeurIPS 2017 • Dustin Tran, Rajesh Ranganath, David M. Blei

Implicit probabilistic models are a flexible class of models defined by a simulation process for data.

no code implementations • 1 Nov 2016 • Adji B. Dieng, Dustin Tran, Rajesh Ranganath, John Paisley, David M. Blei

In this paper we propose CHIVI, a black-box variational inference algorithm that minimizes $D_{\chi}(p || q)$, the $\chi$-divergence from $p$ to $q$.

no code implementations • NeurIPS 2016 • Rajesh Ranganath, Jaan Altosaar, Dustin Tran, David M. Blei

Though this divergence has been widely used, the resultant posterior approximation can suffer from undesirable statistical properties.

no code implementations • 6 Aug 2016 • Rajesh Ranganath, Adler Perotte, Noémie Elhadad, David Blei

The electronic health record (EHR) provides an unprecedented opportunity to build actionable tools to support physicians at the point of care.

4 code implementations • 2 Mar 2016 • Alp Kucukelbir, Dustin Tran, Rajesh Ranganath, Andrew Gelman, David M. Blei

Probabilistic modeling is iterative.

no code implementations • NeurIPS 2015 • James Mcinerney, Rajesh Ranganath, David Blei

Many modern data analysis problems involve inferences from streaming data.

no code implementations • 20 Nov 2015 • Dustin Tran, Rajesh Ranganath, David M. Blei

Variational inference is a powerful tool for approximate inference, and it has been recently applied for representation learning with deep generative models.

1 code implementation • 7 Nov 2015 • Rajesh Ranganath, Dustin Tran, David M. Blei

We study HVMs on a variety of deep discrete latent variable models.

no code implementations • 15 Sep 2015 • Laurent Charlin, Rajesh Ranganath, James McInerney, David M. Blei

Models for recommender systems use latent factors to explain the preferences and behaviors of users with respect to a set of items (e.g., movies, books, academic papers).

2 code implementations • 19 Jul 2015 • James McInerney, Rajesh Ranganath, David M. Blei

Many modern data analysis problems involve inferences from streaming data.

no code implementations • 2 Jul 2015 • Rajesh Ranganath, David Blei

We develop correlated random measures, random measures where the atom weights can exhibit a flexible pattern of dependence, and use them to develop powerful hierarchical Bayesian nonparametric models.

no code implementations • NeurIPS 2015 • Alp Kucukelbir, Rajesh Ranganath, Andrew Gelman, David M. Blei

With ADVI we can use variational inference on any model we write in Stan.

no code implementations • 10 Nov 2014 • Rajesh Ranganath, Linpeng Tang, Laurent Charlin, David M. Blei

We describe \textit{deep exponential families} (DEFs), a class of latent variable models that are inspired by the hidden structures used in deep neural networks.

no code implementations • 7 Nov 2014 • Stephan Mandt, James McInerney, Farhan Abrol, Rajesh Ranganath, David Blei

Lastly, we develop local variational tempering, which assigns a latent temperature to each data point; this allows for dynamic annealing that varies across data.

2 code implementations • 31 Dec 2013 • Rajesh Ranganath, Sean Gerrish, David M. Blei

We evaluate our method against the corresponding black box sampling based methods.
