Search Results for author: David I. Inouye

Found 24 papers, 11 papers with code

Decoupled Vertical Federated Learning for Practical Training on Vertically Partitioned Data

no code implementations 6 Mar 2024 Avi Amalanshu, Yash Sirvi, David I. Inouye

Vertical Federated Learning (VFL) is an emergent distributed machine learning paradigm wherein owners of disjoint features of a common set of entities collaborate to learn a global model without sharing data.

Vertical Federated Learning
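
The vertical partitioning described above can be sketched as follows — a toy illustration with hypothetical clients `client_a`/`client_b` and a simple additive fusion, not the paper's actual protocol:

```python
import numpy as np

rng = np.random.default_rng(0)

# Full feature matrix for 6 shared entities with 5 features.
X = rng.normal(size=(6, 5))

# Vertical partition: each client owns a disjoint subset of the columns,
# but all clients share the same sample (row) space.
client_a = X[:, :2]   # client A holds features 0-1
client_b = X[:, 2:]   # client B holds features 2-4

# Each client computes a local embedding; a server fuses them without
# ever seeing the other client's raw features.
W_a = rng.normal(size=(2, 3))
W_b = rng.normal(size=(3, 3))
h_a = client_a @ W_a
h_b = client_b @ W_b
fused = h_a + h_b     # server-side fusion of local embeddings
```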

StarCraftImage: A Dataset For Prototyping Spatial Reasoning Methods For Multi-Agent Environments

no code implementations CVPR 2023 Sean Kulinski, Nicholas R. Waytowich, James Z. Hare, David I. Inouye

Spatial reasoning tasks in multi-agent environments such as event prediction, agent type identification, or missing data imputation are important for multiple applications (e.g., autonomous surveillance over sensor networks and subtasks for reinforcement learning (RL)).

Imputation · Reinforcement Learning (RL) · +2

Fault-Tolerant Vertical Federated Learning on Dynamic Networks

no code implementations 27 Dec 2023 Surojit Ganguli, Zeyu Zhou, Christopher G. Brinton, David I. Inouye

Vertical Federated Learning (VFL) is a class of FL where each client shares the same sample space but only holds a subset of the features.

Vertical Federated Learning

Towards Practical Non-Adversarial Distribution Alignment via Variational Bounds

no code implementations 30 Oct 2023 Ziyu Gong, Ben Usman, Han Zhao, David I. Inouye

Distribution alignment can be used to learn invariant representations with applications in fairness and robustness.

Fairness · Representation Learning

Benchmarking Algorithms for Federated Domain Generalization

1 code implementation 11 Jul 2023 Ruqi Bai, Saurabh Bagchi, David I. Inouye

We then apply our methodology to evaluate 14 Federated DG methods, which include centralized DG methods adapted to the FL context, FL methods that handle client heterogeneity, and methods designed specifically for Federated DG.

Benchmarking · Domain Generalization · +1

Towards Characterizing Domain Counterfactuals For Invertible Latent Causal Models

1 code implementation 20 Jun 2023 Zeyu Zhou, Ruqi Bai, Sean Kulinski, Murat Kocaoglu, David I. Inouye

Answering counterfactual queries has important applications such as explainability, robustness, and fairness but is challenging when the causal variables are unobserved and the observations are non-linear mixtures of these latent variables, such as pixels in images.

Causal Discovery · counterfactual · +1

Towards Enhanced Controllability of Diffusion Models

no code implementations 28 Feb 2023 Wonwoong Cho, Hareesh Ravi, Midhun Harikumar, Vinh Khuc, Krishna Kumar Singh, Jingwan Lu, David I. Inouye, Ajinkya Kale

We rely on the inductive bias of the progressive denoising process of diffusion models to encode pose/layout information in the spatial structure mask and semantic/style information in the style code.

Denoising · Image Manipulation · +3

Towards Explaining Distribution Shifts

1 code implementation 19 Oct 2022 Sean Kulinski, David I. Inouye

We derive our interpretable mappings from a relaxation of optimal transport, where the candidate mappings are restricted to a set of interpretable mappings.

Cooperative Distribution Alignment via JSD Upper Bound

1 code implementation 5 Jul 2022 Wonwoong Cho, Ziyu Gong, David I. Inouye

Unsupervised distribution alignment estimates a transformation that maps two or more source distributions to a shared aligned distribution given only samples from each distribution.

Unsupervised Domain Adaptation
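
As a toy stand-in for the learned flows in this line of work, per-domain whitening already maps two Gaussian-like domains onto a shared standardized distribution (a sketch of the alignment idea only, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(1)

def whiten(samples):
    """Affine map sending samples to zero mean and identity covariance."""
    mu = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False)
    # Inverse matrix square root via eigendecomposition.
    vals, vecs = np.linalg.eigh(cov)
    inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return (samples - mu) @ inv_sqrt

# Two source "domains" with different means and scales.
domain_1 = rng.normal(loc=3.0, scale=2.0, size=(2000, 2))
domain_2 = rng.normal(loc=-1.0, scale=0.5, size=(2000, 2))

# Both domains now (approximately) share one aligned distribution.
aligned_1 = whiten(domain_1)
aligned_2 = whiten(domain_2)
```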

Discrete Tree Flows via Tree-Structured Permutations

no code implementations ICML Workshop INNF 2021 Mai Elkady, Jim Lim, David I. Inouye

While normalizing flows for continuous data have been extensively researched, flows for discrete data have only recently been explored.

Resilience to Multiple Attacks via Adversarially Trained MIMO Ensembles

no code implementations 29 Sep 2021 Ruqi Bai, David I. Inouye, Saurabh Bagchi

We show that ensemble methods can improve adversarial robustness to multiple attacks if the ensemble is \emph{adversarially diverse}, which is defined by two properties: 1) the sub-models are adversarially robust themselves and yet 2) adversarial attacks do not transfer easily between sub-models.

Adversarial Robustness

Feature Shift Detection: Localizing Which Features Have Shifted via Conditional Distribution Tests

2 code implementations NeurIPS 2020 Sean Kulinski, Saurabh Bagchi, David I. Inouye

While previous distribution shift detection approaches can identify if a shift has occurred, these approaches cannot localize which specific features have caused a distribution shift -- a critical step in diagnosing or fixing any underlying issue.

Time Series · Time Series Analysis
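
The localization idea can be illustrated with a crude per-feature marginal statistic — a stand-in for the paper's conditional distribution tests; the threshold of 5.0 and the injected shift are arbitrary choices for this sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Reference data and a "deployment" batch where only feature 2 shifted.
X_ref = rng.normal(size=(5000, 4))
X_new = rng.normal(size=(5000, 4))
X_new[:, 2] += 3.0  # inject a mean shift into a single feature

def feature_shift_scores(a, b):
    """Per-feature standardized mean difference (a crude marginal test)."""
    diff = a.mean(axis=0) - b.mean(axis=0)
    pooled = np.sqrt(a.var(axis=0) / len(a) + b.var(axis=0) / len(b))
    return np.abs(diff) / pooled

scores = feature_shift_scores(X_ref, X_new)
shifted = np.where(scores > 5.0)[0]  # flag features with large z-scores
```

Unlike a global two-sample test, the per-feature scores point at which column caused the shift.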

Why be adversarial? Let's cooperate!: Cooperative Dataset Alignment via JSD Upper Bound

no code implementations ICML Workshop INNF 2021 Wonwoong Cho, Ziyu Gong, David I. Inouye

Unsupervised dataset alignment estimates a transformation that maps two or more source domains to a shared aligned domain given only the domain datasets.

Unsupervised Domain Adaptation

Iterative Alignment Flows

no code implementations 15 Apr 2021 Zeyu Zhou, Ziyu Gong, Pradeep Ravikumar, David I. Inouye

Existing flow-based approaches estimate multiple flows independently, which is equivalent to learning multiple full generative models.

Unsupervised Domain Adaptation

Shapley Explanation Networks

2 code implementations ICLR 2021 Rui Wang, Xiaoqian Wang, David I. Inouye

This intrinsic explanation approach enables layer-wise explanations, explanation regularization of the model during training, and fast explanation computation at test time.
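
SHAPNets build Shapley-style attributions into the network itself; for context, here is a brute-force sketch of classical Shapley values on a toy model (the zero-baseline replacement scheme is one common convention for illustration, not the paper's construction):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values; absent features are set to the baseline."""
    n = len(x)

    def value(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for s in combinations(others, k):
                # Standard Shapley weight for a coalition of size k.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (value(set(s) | {i}) - value(set(s)))
        phis.append(phi)
    return phis

model = lambda z: 2 * z[0] + 3 * z[1] * z[2]  # toy nonlinear model
phi = shapley_values(model, x=[1.0, 2.0, 1.0], baseline=[0.0, 0.0, 0.0])
# Efficiency: attributions sum to f(x) - f(baseline) = 8 - 0 = 8.
```

The interaction term is split evenly between features 1 and 2 (3 each), while the linear term goes entirely to feature 0 (2); exact enumeration like this is exponential in the number of features, which is part of what motivates intrinsic approaches.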

Exploring Adversarial Examples via Invertible Neural Networks

no code implementations 24 Dec 2020 Ruqi Bai, Saurabh Bagchi, David I. Inouye

We propose a new way of achieving such understanding through a recent development, namely, invertible neural models with Lipschitz continuous mapping functions from the input to the output.

Automated Dependence Plots

2 code implementations 2 Dec 2019 David I. Inouye, Liu Leqi, Joon Sik Kim, Bryon Aragam, Pradeep Ravikumar

To address these drawbacks, we formalize a method for automating the selection of interesting PDPs and extend PDPs beyond showing single features to show the model response along arbitrary directions, for example in raw feature space or a latent space arising from some generative model.

Model Selection · Selection bias
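
The extension beyond single features can be sketched as a dependence plot along an arbitrary unit direction `v`; `directional_dependence` is a hypothetical helper for illustration, not the paper's API:

```python
import numpy as np

def directional_dependence(model, X, v, ts):
    """Average model response as the data is shifted along direction v."""
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    return np.array([model(X + t * v).mean() for t in ts])

# Toy model and data; a single feature's PDP is the special case v = e_j.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2))
model = lambda Z: Z[:, 0] ** 2 + Z[:, 1]

ts = np.linspace(-2, 2, 5)
curve = directional_dependence(model, X, v=[1.0, 0.0], ts=ts)
# The response grows roughly quadratically along the first axis.
```

A latent-space direction from a generative model can be plugged in the same way by decoding `X + t * v` before calling the model.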

On the (In)fidelity and Sensitivity of Explanations

1 code implementation NeurIPS 2019 Chih-Kuan Yeh, Cheng-Yu Hsieh, Arun Suggala, David I. Inouye, Pradeep K. Ravikumar

We analyze optimal explanations with respect to both these measures, and while the optimal explanation for sensitivity is a vacuous constant explanation, the optimal explanation for infidelity is a novel combination of two popular explanation methods.

On the (In)fidelity and Sensitivity for Explanations

2 code implementations 27 Jan 2019 Chih-Kuan Yeh, Cheng-Yu Hsieh, Arun Sai Suggala, David I. Inouye, Pradeep Ravikumar

We analyze optimal explanations with respect to both these measures, and while the optimal explanation for sensitivity is a vacuous constant explanation, the optimal explanation for infidelity is a novel combination of two popular explanation methods.

A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution

1 code implementation 31 Aug 2016 David I. Inouye, Eunho Yang, Genevera I. Allen, Pradeep Ravikumar

The Poisson distribution has been widely studied and used for modeling univariate count-valued data.
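
For context, the univariate building block the review starts from: independent per-dimension Poisson fits via maximum likelihood. This sketch deliberately ignores the cross-dimension dependencies that the reviewed multivariate constructions add:

```python
import numpy as np

rng = np.random.default_rng(4)

# Word-count-style data: 1000 documents, 3 "words" with different rates.
true_rates = np.array([0.5, 2.0, 10.0])
counts = rng.poisson(lam=true_rates, size=(1000, 3))

# The MLE of an independent Poisson model is just the per-column mean.
lam_hat = counts.mean(axis=0)

def log_likelihood(x, lam):
    """Poisson log-likelihood, up to the constant -sum(log(x!))."""
    return float(np.sum(x * np.log(lam) - lam))

ll = log_likelihood(counts, lam_hat)
```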

Generalized Root Models: Beyond Pairwise Graphical Models for Univariate Exponential Families

1 code implementation 2 Jun 2016 David I. Inouye, Pradeep Ravikumar, Inderjit S. Dhillon

As in the recent work on square root graphical (SQR) models [Inouye et al. 2016], which was restricted to pairwise dependencies, we give the conditions on the parameters that are needed for normalization, using radial conditionals similar to the pairwise case [Inouye et al. 2016].

Square Root Graphical Models: Multivariate Generalizations of Univariate Exponential Families that Permit Positive Dependencies

no code implementations 11 Mar 2016 David I. Inouye, Pradeep Ravikumar, Inderjit S. Dhillon

With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix---a condition akin to the positive definiteness of the Gaussian covariance matrix.

Fixed-Length Poisson MRF: Adding Dependencies to the Multinomial

no code implementations NeurIPS 2015 David I. Inouye, Pradeep K. Ravikumar, Inderjit S. Dhillon

We show the effectiveness of our LPMRF distribution over Multinomial models by evaluating the test set perplexity on a dataset of abstracts and Wikipedia.

Topic Models

Capturing Semantically Meaningful Word Dependencies with an Admixture of Poisson MRFs

no code implementations NeurIPS 2014 David I. Inouye, Pradeep K. Ravikumar, Inderjit S. Dhillon

We develop a fast algorithm for the Admixture of Poisson MRFs (APM) topic model and propose a novel metric to directly evaluate this model.

Topic Models
