5 code implementations • NeurIPS 2019 • Claudia Shi, David M. Blei, Victor Veitch
We propose two adaptations based on insights from the statistical literature on the estimation of treatment effects.
Ranked #2 on Causal Inference on IHDP
4 code implementations • 29 May 2019 • Victor Veitch, Dhanya Sridhar, David M. Blei
To address this challenge, we develop causally sufficient embeddings, low-dimensional document representations that preserve sufficient information for causal identification and allow for efficient estimation of causal effects.
1 code implementation • NAACL 2021 • Reid Pryzant, Dallas Card, Dan Jurafsky, Victor Veitch, Dhanya Sridhar
Second, in practice, we only have access to noisy proxies for the linguistic properties of interest -- e.g., predictions from classifiers and lexicons.
3 code implementations • NeurIPS 2019 • Victor Veitch, Yixin Wang, David M. Blei
We validate the method with experiments on a semi-synthetic social network dataset.
2 code implementations • 17 May 2022 • Irina Cristali, Victor Veitch
The main aim is to perform this adjustment nonparametrically, without functional form assumptions on either the process that generated the network or the treatment assignment and outcome processes.
1 code implementation • 7 Nov 2023 • Kiho Park, Yo Joong Choe, Victor Veitch
Using this causal inner product, we show how to unify all notions of linear representation.
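As a sketch of the idea, one candidate "causal" inner product is the bilinear form induced by the inverse covariance of the unembedding vectors; whether this matches the paper's exact construction is an assumption here, but it illustrates how a non-Euclidean inner product (equivalently, a whitening of the representation space) can put different notions of linear representation on common footing. All names and matrices below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: rows of `gamma` play the role of unembedding vectors.
V, d = 1000, 16
gamma = rng.normal(size=(V, d)) @ rng.normal(size=(d, d))  # correlated dims

# Candidate inner product: <x, y> = x^T M y, with M symmetric positive
# definite (here, the inverse covariance of the unembedding vectors).
M = np.linalg.inv(np.cov(gamma, rowvar=False))

def causal_inner_product(x, y, M=M):
    return float(x @ M @ y)

# Equivalent view: with M = L L^T (Cholesky), the map x -> L^T x turns this
# inner product into the ordinary Euclidean one.
L = np.linalg.cholesky(M)
x, y = rng.normal(size=d), rng.normal(size=d)
lhs = causal_inner_product(x, y)
rhs = float((L.T @ x) @ (L.T @ y))
```

The whitening view is what lets a single inner product "unify" representations: concepts that look non-orthogonal in raw coordinates can become orthogonal under the transformed geometry.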
1 code implementation • ICLR 2019 • Wenda Zhou, Victor Veitch, Morgane Austern, Ryan P. Adams, Peter Orbanz
Our main technical result is a generalization bound for compressed networks based on the compressed size.
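For intuition only (this is a textbook finite-class Occam bound, not the paper's actual theorem, which is a PAC-Bayes-style refinement): if the compressed network is described by $k$ bits, then with probability at least $1-\delta$ over $m$ i.i.d. samples,

```latex
L(\hat h) \;\le\; \hat L(\hat h) \;+\; \sqrt{\frac{k \ln 2 + \ln(1/\delta)}{2m}},
```

so a network that compresses to few bits generalizes well regardless of its nominal parameter count.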
2 code implementations • NeurIPS 2020 • Victor Veitch, Anisha Zaveri
The purpose of this paper is to develop \emph{Austen plots}, a sensitivity analysis tool to aid such judgments by making it easier to reason about potential bias induced by unobserved confounding.
1 code implementation • 16 Dec 2022 • Roman Pogodin, Namrata Deka, Yazhe Li, Danica J. Sutherland, Victor Veitch, Arthur Gretton
The procedure requires just a single ridge regression from $Y$ to kernelized features of $Z$, which can be done in advance.
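A minimal numerical sketch of that precomputation step, under assumptions: kernel features of $Z$ are approximated with random Fourier features, and the ridge regression from $Y$ onto those features is solved in closed form once, in advance. All data and dimensions are toy choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: outcome Y (n x 1) and conditioning variable Z (n x d).
n, d = 500, 3
Y = rng.normal(size=(n, 1))
Z = Y @ rng.normal(size=(1, d)) + 0.5 * rng.normal(size=(n, d))

# Kernelized features of Z via random Fourier features (RBF approximation).
D = 64
W = rng.normal(size=(d, D))
b = rng.uniform(0, 2 * np.pi, size=D)
Phi = np.sqrt(2.0 / D) * np.cos(Z @ W + b)  # (n, D)

# Single ridge regression from Y to the kernel features of Z, closed form.
lam = 1e-2
A = Y.T @ Y + lam * np.eye(1)          # (1, 1)
B_hat = np.linalg.solve(A, Y.T @ Phi)  # ridge coefficients, (1, D)

# The residual features are (nearly) decorrelated from Y and can be cached.
residuals = Phi - Y @ B_hat
```

Because the regression depends only on $(Y, Z)$, it can be computed once before any representation learning begins, which is what makes the procedure cheap.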
3 code implementations • 1 Nov 2018 • Wesley Tansey, Victor Veitch, Haoran Zhang, Raul Rabadan, David M. Blei
We propose the holdout randomization test (HRT), an approach to feature selection using black box predictive models.
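A heavily simplified sketch of the HRT logic: hold the trained model fixed, repeatedly resample one feature on a holdout set, and compare the resulting null losses to the observed holdout loss. The real HRT samples the feature from its estimated conditional distribution given the other features; the permutation used below is a crude stand-in that is only valid when the feature is independent of the rest. Everything here is a toy illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def holdout_feature_pvalue(model_loss, X_hold, y_hold, j, n_null=200):
    """HRT-style p-value for feature j against a fixed, pre-trained model.

    model_loss(X, y) -> scalar holdout loss. Column j is permuted in place
    of true conditional resampling (a simplifying assumption).
    """
    observed = model_loss(X_hold, y_hold)
    null_losses = []
    for _ in range(n_null):
        X_null = X_hold.copy()
        X_null[:, j] = rng.permutation(X_null[:, j])
        null_losses.append(model_loss(X_null, y_hold))
    null_losses = np.asarray(null_losses)
    # p-value: how often breaking feature j does not hurt the loss.
    return (1 + np.sum(null_losses <= observed)) / (1 + n_null)

# Toy example: y depends on feature 0 only; the model is a "black box".
X = rng.normal(size=(400, 2))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=400)
beta = np.linalg.lstsq(X, y, rcond=None)[0]
loss = lambda Xh, yh: float(np.mean((Xh @ beta - yh) ** 2))

p_relevant = holdout_feature_pvalue(loss, X, y, j=0)
p_irrelevant = holdout_feature_pvalue(loss, X, y, j=1)
```

The relevant feature yields a small p-value (permuting it destroys the model's accuracy), while the irrelevant one does not.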
Methodology
1 code implementation • 27 Jun 2018 • Victor Veitch, Morgane Austern, Wenda Zhou, David M. Blei, Peter Orbanz
We solve this problem using recent ideas from graph sampling theory to (i) define an empirical risk for relational data and (ii) obtain stochastic gradients for this empirical risk that are automatically unbiased.
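One concrete sampling scheme in this spirit is vertex "p-sampling": keep each vertex independently with probability $p$ and take the induced subgraph, so that averaging a per-subgraph loss over draws estimates a subsampled empirical risk without any model of how the graph was generated. The sketch below is illustrative (the loss is a stand-in, not the paper's model).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: symmetric 0/1 adjacency matrix, no self loops.
n = 100
A = (rng.uniform(size=(n, n)) < 0.05).astype(float)
A = np.triu(A, 1)
A = A + A.T

def p_sampled_subgraph(A, p, rng):
    """Keep each vertex independently with probability p; return the
    induced subgraph's adjacency matrix."""
    keep = rng.uniform(size=A.shape[0]) < p
    return A[np.ix_(keep, keep)]

def edge_density(A_sub):
    """Per-subgraph loss stand-in: fraction of ordered pairs that are edges."""
    m = A_sub.shape[0]
    return A_sub.sum() / (m * (m - 1)) if m > 1 else 0.0

# Monte Carlo average over sampled subgraphs; gradients averaged the same
# way are the "automatically unbiased" stochastic gradients.
losses = [edge_density(p_sampled_subgraph(A, 0.3, rng)) for _ in range(200)]
```

Because each minibatch is itself a random subgraph drawn by the same scheme that defines the risk, no functional-form assumptions about the graph are needed.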
1 code implementation • 4 Jul 2022 • Yibo Jiang, Victor Veitch
In this paper, we study representation learning under a particular notion of domain shift that both respects causal invariance and naturally handles the "anti-causal" structure.
1 code implementation • 24 Nov 2020 • Claudia Shi, Victor Veitch, David Blei
To address this challenge, practitioners collect and adjust for the covariates, hoping that they adequately correct for confounding.
1 code implementation • 2 Sep 2021 • Amir Feder, Katherine A. Keith, Emaad Manzoor, Reid Pryzant, Dhanya Sridhar, Zach Wood-Doughty, Jacob Eisenstein, Justin Grimmer, Roi Reichart, Margaret E. Roberts, Brandon M. Stewart, Victor Veitch, Diyi Yang
A fundamental goal of scientific research is to learn about causal relationships.
1 code implementation • 6 Dec 2017 • Victor Veitch, Ekansh Sharma, Zacharie Naulet, Daniel M. Roy
A variety of machine learning tasks---e.g., matrix factorization, topic modelling, and feature allocation---can be viewed as learning the parameters of a probability distribution over bipartite graphs.
1 code implementation • 5 Dec 2017 • Zacharie Naulet, Ekansh Sharma, Victor Veitch, Daniel M. Roy
Graphex processes resolve some pathologies in traditional random graph models, notably, providing models that are both projective and allow sparsity.
Statistics Theory (Primary 62F10; secondary 60G55, 60G70)
1 code implementation • NeurIPS 2023 • ZiHao Wang, Lin Gui, Jeffrey Negrea, Victor Veitch
This suggests these models have internal representations that encode concepts in a 'disentangled' manner.
no code implementations • 19 Jun 2020 • Jason Hartford, Victor Veitch, Dhanya Sridhar, Kevin Leyton-Brown
The technique is simple to apply and is "black-box" in the sense that it may be used with any instrumental variable estimator as long as the treatment effect is identified for each valid instrument independently.
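A toy sketch of the underlying idea, not the paper's estimator: compute a separate effect estimate per instrument (here, simple Wald ratios), then note that valid instruments agree with each other while invalid ones disagree, so the modal cluster of estimates identifies the true effect. The simulated data and coefficients below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulation: true effect of T on Y is 2.0. Z1, Z2 are valid instruments;
# Z3 is invalid because it affects Y directly.
n = 20000
Z = rng.normal(size=(n, 3))
U = rng.normal(size=n)  # unobserved confounder
T = Z @ np.array([1.0, 1.0, 1.0]) + U + rng.normal(size=n)
Y = 2.0 * T + U + 3.0 * Z[:, 2] + rng.normal(size=n)

def wald(zk, T, Y):
    """Per-instrument Wald/IV estimate: Cov(Z_k, Y) / Cov(Z_k, T)."""
    return float(np.cov(zk, Y)[0, 1] / np.cov(zk, T)[0, 1])

estimates = [wald(Z[:, k], T, Y) for k in range(3)]
# The two valid instruments cluster near 2.0; the invalid one lands near
# 5.0, so agreement among a subset of estimates flags the valid cluster.
```

Any identified per-instrument estimator could replace the Wald ratio here, which is the sense in which the approach is "black-box".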
no code implementations • 6 Nov 2020 • Alexander D'Amour, Katherine Heller, Dan Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jonathan Deaton, Jacob Eisenstein, Matthew D. Hoffman, Farhad Hormozdiari, Neil Houlsby, Shaobo Hou, Ghassen Jerfel, Alan Karthikesalingam, Mario Lucic, Yian Ma, Cory McLean, Diana Mincu, Akinori Mitani, Andrea Montanari, Zachary Nado, Vivek Natarajan, Christopher Nielson, Thomas F. Osborne, Rajiv Raman, Kim Ramasamy, Rory Sayres, Jessica Schrouff, Martin Seneviratne, Shannon Sequeira, Harini Suresh, Victor Veitch, Max Vladymyrov, Xuezhi Wang, Kellie Webster, Steve Yadlowsky, Taedong Yun, Xiaohua Zhai, D. Sculley
Predictors returned by underspecified pipelines are often treated as equivalent based on their training domain performance, but we show here that such predictors can behave very differently in deployment domains.
no code implementations • NeurIPS 2021 • Victor Veitch, Alexander D'Amour, Steve Yadlowsky, Jacob Eisenstein
We introduce counterfactual invariance as a formalization of the requirement that changing irrelevant parts of the input shouldn't change model predictions.
no code implementations • 15 Aug 2022 • ZiHao Wang, Victor Veitch
Then, we characterize the causal structures that are compatible with this notion of invariance. With this in hand, we find conditions under which method-specific invariance notions correspond to real-world invariant structure, and we clarify the relationship between invariant structure and robustness to domain shifts.
no code implementations • 30 Sep 2022 • Lin Gui, Victor Veitch
To estimate a causal effect from observational data, we need to adjust for confounding aspects of the text that affect both the treatment and outcome -- e.g., the topic or writing level of the text.
1 code implementation • NeurIPS 2023 • Jacy Reese Anthis, Victor Veitch
This is an intuitive standard, as reflected in the U.S. legal system, but its use is limited because counterfactuals cannot be directly observed in real-world data.
no code implementations • 1 Feb 2024 • ZiHao Wang, Chirag Nagpal, Jonathan Berant, Jacob Eisenstein, Alex D'Amour, Sanmi Koyejo, Victor Veitch
A common approach for aligning language models to human preferences is to first learn a reward model from preference data, and then use this reward model to update the language model.
no code implementations • 6 Mar 2024 • Yibo Jiang, Goutham Rajendran, Pradeep Ravikumar, Bryon Aragam, Victor Veitch
To that end, we introduce a simple latent variable model to abstract and formalize the concept dynamics of next-token prediction.