Search Results for author: Elliot Creager

Found 17 papers, 8 papers with code

Online Algorithmic Recourse by Collective Action

no code implementations • 29 Dec 2023 • Elliot Creager, Richard Zemel

Research on algorithmic recourse typically considers how an individual can reasonably change an unfavorable automated decision when interacting with a fixed decision-making system.

Decision Making

Out of the Ordinary: Spectrally Adapting Regression for Covariate Shift

no code implementations • 29 Dec 2023 • Benjamin Eyre, Elliot Creager, David Madras, Vardan Papyan, Richard Zemel

Designing deep neural network classifiers that perform robustly on distributions differing from the available training data is an active area of machine learning research.

Out-of-Distribution Generalization, Regression

Robust Machine Learning by Transforming and Augmenting Imperfect Training Data

no code implementations • 19 Dec 2023 • Elliot Creager

Here we observe that, insofar as standard training methods tend to learn spurious features, this propensity can be leveraged to search for partitions of the training data across which those features are inconsistently predictive, ultimately promoting learning algorithms that are invariant to spurious features.

Counterfactual, Data Augmentation +1
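
The abstract above describes, at a high level, using an ERM-trained model's appetite for spurious features to find data partitions on which those features are inconsistently predictive, and then learning across the partitions. The thesis's actual algorithms are not spelled out here, so the following is only a minimal toy sketch of that general idea: it partitions synthetic training data by a reference model's per-example loss and upweights the inferred minority slice. The synthetic data, function names, thresholds, and reweighting step are all illustrative assumptions, not the author's method.

```python
# Minimal toy sketch (NOT the thesis's algorithm): infer data partitions
# from a reference ERM model's per-example losses, then upweight the
# partition where the spurious shortcut fails.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy data: feature 0 is causal for the label, feature 1 is spuriously
# correlated with it on 90% of the training examples.
n = 2000
y = torch.randint(0, 2, (n,)).float()
causal = y + 0.5 * torch.randn(n)
agree = (torch.rand(n) < 0.9).float()
spurious = agree * y + (1 - agree) * (1 - y) + 0.5 * torch.randn(n)
X = torch.stack([causal, spurious], dim=1)


def train_logreg(X, y, weights=None, steps=300):
    """Weighted logistic regression trained with Adam."""
    w = torch.zeros(X.shape[1], requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([w, b], lr=0.1)
    for _ in range(steps):
        loss = F.binary_cross_entropy_with_logits(X @ w + b, y, reduction="none")
        loss = (loss * weights).mean() if weights is not None else loss.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach(), b.detach()


# 1) A reference model trained by plain ERM tends to lean on the spurious feature.
w_ref, b_ref = train_logreg(X, y)

# 2) Partition the training data by the reference model's per-example loss:
#    the high-loss slice concentrates examples where the shortcut fails.
with torch.no_grad():
    ref_loss = F.binary_cross_entropy_with_logits(X @ w_ref + b_ref, y, reduction="none")
minority = ref_loss > ref_loss.quantile(0.8)

# 3) Upweight the inferred minority partition so relying on the shortcut stops
#    paying off (a crude stand-in for invariant-learning objectives).
weights = 1.0 + 9.0 * minority.float()
w_rob, _ = train_logreg(X, y, weights=weights)

print("ERM weights (causal, spurious):       ", [round(v, 2) for v in w_ref.tolist()])
print("reweighted weights (causal, spurious):", [round(v, 2) for v in w_rob.tolist()])
```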

Remembering to Be Fair: Non-Markovian Fairness in Sequential Decision Making

no code implementations • 8 Dec 2023 • Parand A. Alamdari, Toryn Q. Klassen, Elliot Creager, Sheila A. McIlraith

In this paper we investigate the notion of fairness in the context of sequential decision making where multiple stakeholders can be affected by the outcomes of decisions.

Decision Making, Fairness

SURFSUP: Learning Fluid Simulation for Novel Surfaces

no code implementations • ICCV 2023 • Arjun Mani, Ishaan Preetam Chandratreya, Elliot Creager, Carl Vondrick, Richard Zemel

Modeling the mechanics of fluid in complex scenes is vital to applications in design, graphics, and robotics.

MoCoDA: Model-based Counterfactual Data Augmentation

1 code implementation • 20 Oct 2022 • Silviu Pitis, Elliot Creager, Ajay Mandlekar, Animesh Garg

To this end, we show that (1) known local structure in the environment transitions is sufficient for an exponential reduction in the sample complexity of training a dynamics model, and (2) a locally factored dynamics model provably generalizes out-of-distribution to unseen states and actions.

Counterfactual, Data Augmentation +2
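
The abstract above sketches why locally factored dynamics help: if subprocesses do not interact, sub-transitions from different observed samples can be recombined into new transitions that are still consistent with the true dynamics. The toy sketch below illustrates only that recombination idea; it is not the paper's MoCoDA pipeline (which trains a factored dynamics model), and the two-component environment, function names, and sample sizes are invented for illustration.

```python
# Toy sketch of counterfactual data augmentation under locally factored
# dynamics (illustrative only, not the paper's implementation).
import numpy as np

rng = np.random.default_rng(0)


def step(state, rng):
    """Toy locally factored dynamics: the two state components evolve
    independently, so p(s' | s) = p(a' | a) * p(b' | b)."""
    a, b = state
    return np.array([0.9 * a + rng.normal(0.0, 0.01),
                     b + 0.1 + rng.normal(0.0, 0.01)])


# Collect a small batch of real transitions (s, s').
states = rng.normal(size=(32, 2))
next_states = np.stack([step(s, rng) for s in states])


def counterfactual_augment(states, next_states, n_aug, rng):
    """Stitch the 'a' sub-transition of one sample onto the 'b' sub-transition
    of another. Because the components never interact, the stitched transition
    is still consistent with the true dynamics."""
    i = rng.integers(0, len(states), size=n_aug)
    j = rng.integers(0, len(states), size=n_aug)
    aug_states = np.stack([states[i, 0], states[j, 1]], axis=1)
    aug_next_states = np.stack([next_states[i, 0], next_states[j, 1]], axis=1)
    return aug_states, aug_next_states


aug_s, aug_s_next = counterfactual_augment(states, next_states, n_aug=128, rng=rng)
print("real transitions:", len(states), "augmented transitions:", len(aug_s))
```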

Fairness and Robustness in Invariant Learning: A Case Study in Toxicity Classification

1 code implementation • 12 Nov 2020 • Robert Adragna, Elliot Creager, David Madras, Richard Zemel

Robustness is of central importance in machine learning and has given rise to the fields of domain generalization and invariant learning, which are concerned with improving performance on a test distribution distinct from but related to the training distribution.

BIG-bench Machine Learning, Causal Discovery +3

Environment Inference for Invariant Learning

1 code implementation • 14 Oct 2020 • Elliot Creager, Jörn-Henrik Jacobsen, Richard Zemel

Learning models that gracefully handle distribution shifts is central to research on domain generalization, robust optimization, and fairness.

Domain Generalization, Fairness +1

Exchanging Lessons Between Algorithmic Fairness and Domain Generalization

no code implementations • 28 Sep 2020 • Elliot Creager, Joern-Henrik Jacobsen, Richard Zemel

Developing learning approaches that are not overly sensitive to the training distribution is central to research on domain- or out-of-distribution generalization, robust optimization and fairness.

Domain Generalization, Fairness +1

Optimizing Long-term Social Welfare in Recommender Systems: A Constrained Matching Approach

no code implementations • ICML 2020 • Martin Mladenov, Elliot Creager, Omer Ben-Porat, Kevin Swersky, Richard Zemel, Craig Boutilier

We develop several scalable techniques to solve the matching problem, and also draw connections to various notions of user regret and fairness, arguing that these outcomes are fairer in a utilitarian sense.

Fairness, Recommendation Systems

Counterfactual Data Augmentation using Locally Factored Dynamics

1 code implementation • NeurIPS 2020 • Silviu Pitis, Elliot Creager, Animesh Garg

Many dynamic processes, including common scenarios in robotic control and reinforcement learning (RL), involve a set of interacting subprocesses.

Counterfactual, Data Augmentation +5

Causal Modeling for Fairness in Dynamical Systems

1 code implementation • ICML 2020 • Elliot Creager, David Madras, Toniann Pitassi, Richard Zemel

In many application areas (lending, education, and online recommenders, for example), fairness and equity concerns emerge when a machine learning system interacts with a dynamically changing environment to produce both immediate and long-term effects for individuals and demographic groups.

Fairness

Flexibly Fair Representation Learning by Disentanglement

no code implementations • 6 Jun 2019 • Elliot Creager, David Madras, Jörn-Henrik Jacobsen, Marissa A. Weis, Kevin Swersky, Toniann Pitassi, Richard Zemel

We consider the problem of learning representations that achieve group and subgroup fairness with respect to multiple sensitive attributes.

Disentanglement, Fairness +1

Fairness Through Causal Awareness: Learning Latent-Variable Models for Biased Data

no code implementations • 7 Sep 2018 • David Madras, Elliot Creager, Toniann Pitassi, Richard Zemel

Building on prior work in deep learning and generative modeling, we describe how to learn the parameters of this causal model from observational data alone, even in the presence of unobserved confounders.

Attribute, Fairness +1

Explaining Image Classifiers by Counterfactual Generation

1 code implementation • ICLR 2019 • Chun-Hao Chang, Elliot Creager, Anna Goldenberg, David Duvenaud

We can rephrase this question to ask: which parts of the image, if they were not seen by the classifier, would most change its decision?

Counterfactual, Image Classification
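
The question in the abstract above, which image regions would most change the classifier's decision if the classifier could not see them, can be illustrated with a much cruder stand-in than the paper's generative infilling: occlude each patch with a constant fill and record how much the target-class probability drops. The sketch below does exactly that with an untrained toy classifier; the network, patch size, and mean-fill choice are illustrative assumptions, not the paper's method.

```python
# Occlusion-based stand-in for the counterfactual "what if this region were
# not seen?" question (illustrative only; the paper uses generative infilling).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Untrained toy classifier, used only to demonstrate the mechanics.
classifier = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
classifier.eval()

image = torch.rand(1, 3, 32, 32)  # stand-in for a real input image


def occlusion_saliency(model, image, target, patch=8):
    """Slide a patch over the image, replace it with the image mean, and
    record how much the target-class probability drops when that region
    is hidden from the classifier."""
    _, _, h, w = image.shape
    base_prob = model(image).softmax(dim=1)[0, target].item()
    fill = image.mean()
    saliency = torch.zeros(h // patch, w // patch)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.clone()
            occluded[:, :, i:i + patch, j:j + patch] = fill
            prob = model(occluded).softmax(dim=1)[0, target].item()
            saliency[i // patch, j // patch] = base_prob - prob
    return saliency  # larger values = regions the decision depends on more


with torch.no_grad():
    target = classifier(image).argmax(dim=1).item()
    print(occlusion_saliency(classifier, image, target, patch=8))
```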
