no code implementations • 15 Mar 2025 • Wonwoong Cho, Yan-Ying Chen, Matthew Klenk, David I. Inouye, Yanxia Zhang
To address this, we introduce the Attribute (Att) Adapter, a novel plug-and-play module designed to enable fine-grained, multi-attribute control in pretrained diffusion models.
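For intuition, here is a minimal sketch of how a plug-and-play attribute adapter can condition a frozen model: a small trainable MLP maps continuous attribute values to extra context tokens appended to the frozen text encoder's output. The names, shapes, and token-concatenation design below are illustrative assumptions, not the paper's actual Att-Adapter architecture.

```python
# Illustrative sketch only: a hypothetical plug-and-play adapter that maps a
# vector of continuous attribute intensities to a few pseudo-text tokens for a
# frozen cross-attention context. Shapes and design are assumptions.
import torch
import torch.nn as nn

class AttributeAdapter(nn.Module):
    def __init__(self, num_attrs: int, ctx_dim: int, num_tokens: int = 4):
        super().__init__()
        self.num_tokens = num_tokens
        # Small MLP: attribute vector -> num_tokens adapter tokens.
        self.mlp = nn.Sequential(
            nn.Linear(num_attrs, ctx_dim),
            nn.GELU(),
            nn.Linear(ctx_dim, num_tokens * ctx_dim),
        )

    def forward(self, attrs: torch.Tensor, text_ctx: torch.Tensor) -> torch.Tensor:
        # attrs: (B, num_attrs) continuous attribute values, e.g. in [0, 1].
        # text_ctx: (B, T, ctx_dim) context from the frozen text encoder.
        b = attrs.shape[0]
        attr_tokens = self.mlp(attrs).view(b, self.num_tokens, -1)
        # Only the adapter is trained; the base diffusion model stays frozen.
        return torch.cat([text_ctx, attr_tokens], dim=1)

adapter = AttributeAdapter(num_attrs=3, ctx_dim=768)
ctx = adapter(torch.rand(2, 3), torch.randn(2, 77, 768))  # (2, 81, 768)
```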
no code implementations • 20 Nov 2024 • Mai Elkady, Thu Bui, Bruno Ribeiro, David I. Inouye
There has been growing excitement that implicit graph generative models could be used to design or discover new molecules for medicine or materials design.
1 code implementation • 3 Sep 2024 • Zeyu Zhou, Tianci Liu, Ruqi Bai, Jing Gao, Murat Kocaoglu, David I. Inouye
To fill this gap, we provide a model-agnostic theoretical study of the inherent trade-off between counterfactual fairness (CF) and predictive performance.
no code implementations • 6 Mar 2024 • Avi Amalanshu, Yash Sirvi, David I. Inouye
Vertical Federated Learning (VFL) is an emerging distributed machine learning paradigm for collaborative learning between clients who have disjoint features of common entities.
no code implementations • CVPR 2023 • Sean Kulinski, Nicholas R. Waytowich, James Z. Hare, David I. Inouye
Spatial reasoning tasks in multi-agent environments such as event prediction, agent type identification, or missing data imputation are important for multiple applications (e.g., autonomous surveillance over sensor networks and subtasks for reinforcement learning (RL)).
no code implementations • 27 Dec 2023 • Surojit Ganguli, Zeyu Zhou, Christopher G. Brinton, David I. Inouye
Vertical Federated Learning (VFL) is a class of federated learning (FL) where each client shares the same set of samples but owns only a subset of the features.
no code implementations • 30 Oct 2023 • Ziyu Gong, Ben Usman, Han Zhao, David I. Inouye
Distribution matching can be used to learn invariant representations with applications in fairness and robustness.
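In practice, distribution matching is often implemented by penalizing a sample-based divergence between groups' representations during training; one common choice (an assumption here, not necessarily this paper's estimator) is the RBF-kernel maximum mean discrepancy:

```python
# Minimal sketch: squared RBF-kernel MMD between two sets of representations.
# Penalizing this during training pushes the two groups toward a matched
# (invariant) distribution. Kernel choice and bandwidth are illustrative.
import numpy as np

def rbf_mmd2(x: np.ndarray, y: np.ndarray, sigma: float = 1.0) -> float:
    """Squared MMD between samples x (n, d) and y (m, d)."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
print(rbf_mmd2(rng.normal(0, 1, (200, 5)), rng.normal(0.5, 1, (200, 5))))
```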
1 code implementation • 11 Jul 2023 • Ruqi Bai, Saurabh Bagchi, David I. Inouye
We then apply our methodology to evaluate 14 federated domain generalization (DG) methods, which include centralized DG methods adapted to the FL context, FL methods that handle client heterogeneity, and methods designed specifically for Federated DG.
1 code implementation • 20 Jun 2023 • Zeyu Zhou, Ruqi Bai, Sean Kulinski, Murat Kocaoglu, David I. Inouye
Answering counterfactual queries has important applications such as explainability, robustness, and fairness but is challenging when the causal variables are unobserved and the observations are non-linear mixtures of these latent variables, such as pixels in images.
no code implementations • 28 Feb 2023 • Wonwoong Cho, Hareesh Ravi, Midhun Harikumar, Vinh Khuc, Krishna Kumar Singh, Jingwan Lu, David I. Inouye, Ajinkya Kale
We propose timestep-dependent weight scheduling for content and style features to further improve performance.
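A minimal sketch of what timestep-dependent weight scheduling can look like; the linear schedule below is an illustrative assumption, not the paper's actual schedule:

```python
# Illustrative sketch: blend content and style features with weights that vary
# with the diffusion timestep t in [0, T]. The linear schedule is assumed.
import numpy as np

def blend_features(content: np.ndarray, style: np.ndarray, t: int, T: int) -> np.ndarray:
    w_content = t / T           # emphasize content at high-noise (early) steps
    w_style = 1.0 - w_content   # emphasize style as denoising proceeds (t -> 0)
    return w_content * content + w_style * style

mixed = blend_features(np.ones(4), np.zeros(4), t=800, T=1000)  # 0.8*content + 0.2*style
```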
1 code implementation • 19 Oct 2022 • Sean Kulinski, David I. Inouye
We derive our interpretable mappings from a relaxation of optimal transport, where the candidate mappings are restricted to a set of interpretable mappings.
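As a toy stand-in for this idea, one can restrict the candidate maps to per-feature shifts; under squared transport cost the best shift is just the mean difference, which is directly readable as "which features moved, and by how much." This sketch is not the paper's actual estimator:

```python
# Toy stand-in for a relaxed optimal-transport fit: restrict candidate maps to
# per-feature shifts T(x) = x + delta. Under squared cost the optimal delta is
# the mean difference, an immediately interpretable "shift report".
import numpy as np

def fit_shift_map(src: np.ndarray, tgt: np.ndarray) -> np.ndarray:
    return tgt.mean(axis=0) - src.mean(axis=0)

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, (500, 3))
tgt = rng.normal([2.0, 0.0, -1.0], 1.0, (500, 3))
print(fit_shift_map(src, tgt))  # ~[2, 0, -1]: which features moved, and how far
```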
1 code implementation • 5 Jul 2022 • Wonwoong Cho, Ziyu Gong, David I. Inouye
Unsupervised distribution alignment estimates a transformation that maps two or more source distributions to a shared aligned distribution given only samples from each distribution.
no code implementations • ICML Workshop INNF 2021 • Mai Elkady, Jim Lim, David I. Inouye
While normalizing flows for continuous data have been extensively researched, flows for discrete data have only recently been explored.
no code implementations • 29 Sep 2021 • Ruqi Bai, David I. Inouye, Saurabh Bagchi
We show that ensemble methods can improve adversarial robustness to multiple attacks if the ensemble is adversarially diverse, which is defined by two properties: 1) the sub-models are adversarially robust themselves, and yet 2) adversarial attacks do not transfer easily between sub-models.
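The second property can be checked empirically by measuring how often adversarial examples crafted against one sub-model also fool another. A minimal sketch with placeholder models and attack (all assumed to take and return NumPy arrays; any white-box attack such as PGD could be plugged in):

```python
# Minimal sketch: estimate how often adversarial examples crafted against one
# sub-model transfer to (also fool) another. Low transfer between individually
# robust sub-models is the "adversarial diversity" condition described above.
def transfer_rate(model_a, model_b, attack, x, y) -> float:
    x_adv = attack(model_a, x, y)               # adversarial examples for A
    fooled_a = model_a(x_adv).argmax(1) != y    # attacks that succeed on A...
    fooled_b = model_b(x_adv).argmax(1) != y    # ...and also succeed on B
    n_success = max(int(fooled_a.sum()), 1)
    return float((fooled_a & fooled_b).sum()) / n_success
```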
2 code implementations • NeurIPS 2020 • Sean Kulinski, Saurabh Bagchi, David I. Inouye
While previous distribution shift detection approaches can identify if a shift has occurred, these approaches cannot localize which specific features have caused a distribution shift -- a critical step in diagnosing or fixing any underlying issue.
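A simple baseline for this kind of feature-level localization (not necessarily the paper's method) is a per-feature two-sample test with a multiple-testing correction:

```python
# Baseline sketch (not the paper's method): localize a shift by running a
# two-sample Kolmogorov-Smirnov test per feature and flagging features whose
# Bonferroni-corrected p-value falls below alpha.
import numpy as np
from scipy.stats import ks_2samp

def localize_shift(ref: np.ndarray, cur: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    d = ref.shape[1]
    pvals = np.array([ks_2samp(ref[:, j], cur[:, j]).pvalue for j in range(d)])
    return np.where(pvals < alpha / d)[0]   # indices of shifted features

rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, (1000, 4))
cur = rng.normal(0.0, 1.0, (1000, 4))
cur[:, 2] += 1.5                            # inject a shift in feature 2 only
print(localize_shift(ref, cur))             # -> [2]
```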
no code implementations • ICML Workshop INNF 2021 • Wonwoong Cho, Ziyu Gong, David I. Inouye
Unsupervised dataset alignment estimates a transformation that maps two or more source domains to a shared aligned domain given only the domain datasets.
no code implementations • 15 Apr 2021 • Zeyu Zhou, Ziyu Gong, Pradeep Ravikumar, David I. Inouye
Existing flow-based approaches estimate multiple flows independently, which is equivalent to learning multiple full generative models.
2 code implementations • ICLR 2021 • Rui Wang, Xiaoqian Wang, David I. Inouye
This intrinsic explanation approach enables layer-wise explanations, explanation regularization of the model during training, and fast explanation computation at test time.
no code implementations • 24 Dec 2020 • Ruqi Bai, Saurabh Bagchi, David I. Inouye
We propose a new way of achieving such understanding through a recent development: invertible neural models with Lipschitz-continuous mappings from input to output.
2 code implementations • 2 Dec 2019 • David I. Inouye, Liu Leqi, Joon Sik Kim, Bryon Aragam, Pradeep Ravikumar
To address these drawbacks, we formalize a method for automating the selection of interesting PDPs and extend PDPs beyond showing single features to show the model response along arbitrary directions, for example in raw feature space or a latent space arising from some generative model.
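Generalizing a partial dependence plot from one feature to an arbitrary direction amounts to averaging the model's response as every point is translated along that direction (a one-hot direction recovers the ordinary PDP); the model and direction below are illustrative placeholders:

```python
# Sketch: a partial-dependence curve along an arbitrary direction v in input
# space, generalizing the single-feature PDP (v = a one-hot vector).
import numpy as np

def directional_pdp(model, X: np.ndarray, v: np.ndarray, ts: np.ndarray) -> np.ndarray:
    v = v / np.linalg.norm(v)
    # Average model response when every point is translated by t * v.
    return np.array([model(X + t * v).mean() for t in ts])

model = lambda X: X[:, 0] + 0.5 * X[:, 1] ** 2       # toy model
X = np.random.default_rng(0).normal(size=(200, 2))
curve = directional_pdp(model, X, v=np.array([1.0, 1.0]), ts=np.linspace(-2, 2, 21))
```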
1 code implementation • NeurIPS 2019 • Chih-Kuan Yeh, Cheng-Yu Hsieh, Arun Suggala, David I. Inouye, Pradeep K. Ravikumar
We analyze optimal explanations with respect to both these measures, and while the optimal explanation for sensitivity is a vacuous constant explanation, the optimal explanation for infidelity is a novel combination of two popular explanation methods.
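For concreteness, the infidelity measure is commonly stated as INFD(e, f, x) = E_I[(I·e - (f(x) - f(x - I)))^2] for a random perturbation I; here is a sketch with Gaussian perturbations (the perturbation distribution is an illustrative choice):

```python
# Sketch of the infidelity measure: how well the explanation e predicts the
# model's output change under random perturbations I. Gaussian I is assumed.
import numpy as np

def infidelity(f, x: np.ndarray, expl: np.ndarray, n: int = 1000, scale: float = 0.1) -> float:
    rng = np.random.default_rng(0)
    I = rng.normal(0.0, scale, size=(n, x.shape[0]))   # random perturbations
    drop = np.array([f(x) - f(x - i) for i in I])      # actual output change
    pred = I @ expl                                    # change predicted by e
    return float(((pred - drop) ** 2).mean())

f = lambda z: 3 * z[0] - 2 * z[1]                      # toy linear model
x = np.array([1.0, 2.0])
print(infidelity(f, x, expl=np.array([3.0, -2.0])))    # ~0: gradient is optimal here
```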
1 code implementation • 31 Aug 2016 • David I. Inouye, Eunho Yang, Genevera I. Allen, Pradeep Ravikumar
The Poisson distribution has been widely studied and used for modeling univariate count-valued data.
1 code implementation • 2 Jun 2016 • David I. Inouye, Pradeep Ravikumar, Inderjit S. Dhillon
As in recent work on square root graphical (SQR) models [Inouye et al. 2016], which was restricted to pairwise dependencies, we give the conditions on the parameters needed for normalization using the radial conditionals, similar to the pairwise case [Inouye et al. 2016].
no code implementations • 11 Mar 2016 • David I. Inouye, Pradeep Ravikumar, Inderjit S. Dhillon
With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix, a condition akin to the positive definiteness of the Gaussian covariance matrix.
no code implementations • NeurIPS 2015 • David I. Inouye, Pradeep K. Ravikumar, Inderjit S. Dhillon
We show the effectiveness of our LPMRF distribution over Multinomial models by evaluating test-set perplexity on a dataset of abstracts and on Wikipedia.
no code implementations • NeurIPS 2014 • David I. Inouye, Pradeep K. Ravikumar, Inderjit S. Dhillon
We develop a fast algorithm for the Admixture of Poisson MRFs (APM) topic model and propose a novel metric to directly evaluate this model.