no code implementations • 31 Jan 2024 • Zhenghao Zeng, David Arbour, Avi Feller, Raghavendra Addanki, Ryan Rossi, Ritwik Sinha, Edward H. Kennedy
Incorporating surrogates, which are fully observed post-treatment variables related to the primary outcome, can improve estimation in this case.
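To fix ideas, here is a minimal sketch of the general surrogate-index approach (not necessarily this paper's estimator): fit a model of the primary outcome given surrogates on units where the outcome is observed, then use its predictions in the experimental sample where only surrogates are seen. All variable names and the simulated data are illustrative.

```python
# Sketch of a surrogate-index style estimator (illustrative only; not the
# paper's exact estimator). Assumes a "labeled" sample with the primary
# outcome Y and an experimental sample where only surrogates S are observed.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Labeled (observational) sample: surrogates S and primary outcome Y.
n_lab = 500
S_lab = rng.normal(size=(n_lab, 3))
Y_lab = S_lab @ np.array([1.0, 0.5, -0.2]) + rng.normal(scale=0.5, size=n_lab)

# Experimental sample: treatment A and surrogates S, but no Y.
n_exp = 2000
A = rng.integers(0, 2, size=n_exp)
S_exp = rng.normal(size=(n_exp, 3)) + 0.3 * A[:, None]   # treatment shifts surrogates

# Step 1: learn E[Y | S] on the labeled sample.
index_model = LinearRegression().fit(S_lab, Y_lab)

# Step 2: impute the primary outcome in the experiment and difference the means.
Y_hat = index_model.predict(S_exp)
ate_surrogate = Y_hat[A == 1].mean() - Y_hat[A == 0].mean()
print(f"surrogate-index ATE estimate: {ate_surrogate:.3f}")
```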
no code implementations • 29 Nov 2023 • Puja Trivedi, Ryan Rossi, David Arbour, Tong Yu, Franck Dernoncourt, Sungchul Kim, Nedim Lipka, Namyong Park, Nesreen K. Ahmed, Danai Koutra
Most real-world networks are noisy and incomplete samples from an unknown target distribution.
1 code implementation • 27 Aug 2023 • Shreyas Chaudhari, David Arbour, Georgios Theocharous, Nikos Vlassis
Prior work has developed estimators that leverage the structure in slates to estimate the expected off-policy performance, but the estimation of the entire performance distribution remains elusive.
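One simplified way to picture "estimating the entire performance distribution" is an importance-weighted empirical CDF of logged slate rewards. The sketch below assumes full-slate propensities under both policies are available, which is exactly the assumption that structured slate estimators try to avoid; it is an illustration of the target quantity, not the paper's method.

```python
# Simplified sketch: a self-normalized importance-weighted empirical CDF of
# rewards as a stand-in for the full off-policy performance distribution.
# Assumes full-slate propensities are known (illustrative assumption only).
import numpy as np

rng = np.random.default_rng(1)
n = 5000
rewards = rng.normal(loc=1.0, scale=1.0, size=n)        # logged slate rewards
p_logging = rng.uniform(0.05, 0.2, size=n)              # logging propensities
p_target = p_logging * rng.uniform(0.5, 1.5, size=n)    # target propensities
w = p_target / p_logging                                 # importance weights

def weighted_cdf(y, rewards, weights):
    """Estimate P(reward <= y) under the target policy via weighted logged data."""
    return np.sum(weights * (rewards <= y)) / np.sum(weights)

for y in (0.0, 1.0, 2.0):
    print(f"P_target(reward <= {y}) ~ {weighted_cdf(y, rewards, w):.3f}")
```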
1 code implementation • 12 Oct 2022 • Raghavendra Addanki, David Arbour, Tung Mai, Cameron Musco, Anup Rao
In particular, we study sample-constrained treatment effect estimation, where we must select a subset of $s \ll n$ individuals from the population to experiment on.
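A toy version of the setup (not the paper's selection algorithm, which chooses the subset carefully): pick $s$ of $n$ units, randomize treatment within the chosen subset, and estimate the effect by a difference in means.

```python
# Toy sample-constrained treatment effect estimation: experiment on only
# s << n units chosen from the population (here uniformly at random, a
# baseline rather than the paper's selection strategy).
import numpy as np

rng = np.random.default_rng(2)
n, s = 100_000, 500
X = rng.normal(size=n)                         # a covariate for the population
tau = 2.0                                      # true treatment effect

chosen = rng.choice(n, size=s, replace=False)  # sample-constrained: s units only
A = rng.integers(0, 2, size=s)                 # randomize within the subset
Y = 1.0 + 0.5 * X[chosen] + tau * A + rng.normal(size=s)

ate_hat = Y[A == 1].mean() - Y[A == 0].mean()
print(f"difference-in-means ATE on s={s} units: {ate_hat:.3f} (truth {tau})")
```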
1 code implementation • 25 Aug 2022 • Ragib Ahsan, David Arbour, Elena Zheleva
We introduce relational acyclification, an operation specifically designed for relational models that enables reasoning about the identifiability of cyclic relational causal models.
1 code implementation • 30 Jun 2022 • Ragib Ahsan, Zahra Fatemi, David Arbour, Elena Zheleva
Independence testing plays a central role in statistical and causal inference from observational data.
no code implementations • 6 Jun 2022 • Vishwa Vinay, Manoj Kilaru, David Arbour
Search engines and recommendation systems attempt to continually improve the quality of the experience they afford to their users.
no code implementations • 5 Mar 2022 • Jaron J. R. Lee, David Arbour, Georgios Theocharous
Second, many recommendation systems are not probabilistic, so logging and target policy densities may not be available.
no code implementations • 22 Feb 2022 • Ragib Ahsan, David Arbour, Elena Zheleva
To facilitate cycles in relational representation and learning, we introduce relational $\sigma$-separation, a new criterion for understanding relational systems with feedback loops.
1 code implementation • 30 Dec 2021 • Tong Mu, Georgios Theocharous, David Arbour, Emma Brunskill
Online reinforcement learning (RL) algorithms are often difficult to deploy in complex human-facing applications as they may learn slowly and have poor early performance.
2 code implementations • 11 Mar 2021 • Ian Waudby-Smith, David Arbour, Ritwik Sinha, Edward H. Kennedy, Aaditya Ramdas
This paper introduces time-uniform analogues of such asymptotic confidence intervals, adding to the literature on confidence sequences (CS) -- sequences of confidence intervals that are uniformly valid over time -- which provide valid inference at arbitrary stopping times and incur no penalties for "peeking" at the data, unlike classical confidence intervals which require the sample size to be fixed in advance.
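A rough sketch of what a confidence sequence for a running mean looks like in practice is given below. The width uses a Gaussian-mixture-style boundary with tuning parameter rho; treat this particular formula and its constants as an assumption for illustration rather than the paper's exact asymptotic CS.

```python
# Sketch of a confidence sequence for a running mean. The width follows a
# Gaussian-mixture-style boundary; its exact form here is an illustrative
# assumption and may differ from the paper's construction.
import numpy as np

rng = np.random.default_rng(3)
alpha, rho = 0.05, 0.5
x = rng.normal(loc=1.0, scale=2.0, size=10_000)

t = np.arange(1, x.size + 1)
s1, s2 = np.cumsum(x), np.cumsum(x**2)
running_mean = s1 / t
running_var = (s2 - s1**2 / t) / np.maximum(t - 1, 1)   # running sample variance

half_width = np.sqrt(running_var) * np.sqrt(
    2 * (t * rho**2 + 1) / (t**2 * rho**2) * np.log(np.sqrt(t * rho**2 + 1) / alpha)
)

# Unlike a fixed-n interval, these bounds are designed to hold simultaneously
# over all t, so stopping early after "peeking" does not invalidate them.
lower, upper = running_mean - half_width, running_mean + half_width
print(f"CS at t={t[-1]}: [{lower[-1]:.3f}, {upper[-1]:.3f}]")
```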
no code implementations • 4 Feb 2021 • David Arbour, Drew Dimmery, Tung Mai, Anup Rao
We study the online discrepancy minimization problem for vectors in $\mathbb{R}^d$ in the oblivious setting, where an adversary is allowed to fix the vectors $x_1, x_2, \ldots, x_n$ in arbitrary order ahead of time.
Data Structures and Algorithms • Discrete Mathematics • Combinatorics
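For intuition, a minimal self-balancing-walk sketch in the spirit of this line of work (an Alweiss–Liu–Sawhney-style assignment rule; the tilting rule and constant below are assumptions, not the paper's algorithm): each arriving vector receives a sign whose probability tilts against the current running sum.

```python
# Minimal self-balancing-walk sketch for online discrepancy: sign each arriving
# vector so the running signed sum stays small in every direction. The tilting
# rule and the constant c are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
d, n = 10, 2000
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm arrivals

c = 30.0                                        # tilt scale, roughly O(log(nd)) (assumed)
w = np.zeros(d)                                 # running signed sum
for x in X:
    p_plus = np.clip(0.5 - np.dot(w, x) / (2 * c), 0.0, 1.0)
    sign = 1.0 if rng.random() < p_plus else -1.0
    w += sign * x

print(f"final discrepancy ||signed sum||_inf = {np.abs(w).max():.3f}")
```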
no code implementations • 23 Oct 2020 • Ryan A. Rossi, Nesreen K. Ahmed, Aldo Carranza, David Arbour, Anup Rao, Sungchul Kim, Eunyee Koh
Notably, since typed graphlets are more general than colored graphlets (and untyped graphlets), the counts of various typed graphlets can be combined to obtain the counts of the simpler colored graphlets.
1 code implementation • 21 Oct 2020 • David Arbour, Drew Dimmery, Anup Rao
In this work, we reframe the problem of balanced treatment assignment as optimization of a two-sample test between test and control units.
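A small sketch of the general idea of tying assignment quality to a two-sample test (illustrative only; the energy-distance-style statistic below is an assumed stand-in for the paper's test): among candidate balanced splits, keep the one whose treated and control covariates look least distinguishable.

```python
# Sketch: choose a treatment assignment by minimizing a two-sample statistic
# between treated and control covariates, via simple rerandomization. The
# energy-distance statistic is one illustrative choice of test.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(5)
n, d = 60, 4
X = rng.normal(size=(n, d))

def energy_distance(A, B):
    """Two-sample energy distance between the rows of A and the rows of B."""
    return 2 * cdist(A, B).mean() - cdist(A, A).mean() - cdist(B, B).mean()

best_stat, best_assign = np.inf, None
for _ in range(500):                       # rerandomize over candidate balanced splits
    assign = rng.permutation(np.repeat([0, 1], n // 2))
    stat = energy_distance(X[assign == 1], X[assign == 0])
    if stat < best_stat:
        best_stat, best_assign = stat, assign

print(f"best two-sample statistic over candidates: {best_stat:.4f}")
```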
no code implementations • 21 Sep 2020 • Galen Weld, Peter West, Maria Glenski, David Arbour, Ryan Rossi, Tim Althoff
Across 648 experiments and two datasets, we evaluate every commonly used causal inference method and identify their strengths and weaknesses to inform social media researchers seeking to use such methods, and guide future improvements.
1 code implementation • 8 Sep 2020 • My Phan, David Arbour, Drew Dimmery, Anup B. Rao
To reduce the variance of our estimator, we design a covariate balance condition (Target Balance) between the treatment and control groups based on the target population.
Methodology
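A toy illustration of balancing toward a target population rather than only between arms; the score below is a simplified stand-in for the paper's Target Balance condition, and all names are illustrative. Candidate assignments are preferred when both arm means sit close to the target-population covariate mean.

```python
# Toy "balance toward a target population" criterion: prefer assignments where
# the treated and control covariate means are both close to the target
# population mean, not merely close to each other (simplified stand-in).
import numpy as np

rng = np.random.default_rng(6)
n, d = 80, 3
X_trial = rng.normal(loc=0.5, size=(n, d))      # trial-eligible units
mu_target = np.zeros(d)                          # target-population covariate mean

def target_imbalance(X, assign, mu):
    gap_t = np.linalg.norm(X[assign == 1].mean(axis=0) - mu)
    gap_c = np.linalg.norm(X[assign == 0].mean(axis=0) - mu)
    return gap_t + gap_c

best = min(
    (rng.permutation(np.repeat([0, 1], n // 2)) for _ in range(300)),
    key=lambda a: target_imbalance(X_trial, a, mu_target),
)
print(f"imbalance to target under chosen assignment: "
      f"{target_imbalance(X_trial, best, mu_target):.3f}")
```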
no code implementations • 2 Apr 2020 • Eli Sherman, David Arbour, Ilya Shpitser
In many applied fields, researchers are often interested in tailoring treatments to unit-level characteristics in order to optimize an outcome of interest.
no code implementations • 9 Jun 2019 • Arjun Sondhi, David Arbour, Drew Dimmery
We show that minimizing the risk of the classifier implies minimization of imbalance to the desired counterfactual distribution of state-action pairs.
no code implementations • 28 Jan 2019 • Ryan A. Rossi, Nesreen K. Ahmed, Aldo Carranza, David Arbour, Anup Rao, Sungchul Kim, Eunyee Koh
To address this problem, we propose a fast, parallel, and space-efficient framework for counting typed graphlets in large networks.
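Purely to fix ideas, here is a brute-force picture of what a typed graphlet count is: triangles in a heterogeneous graph, keyed by the multiset of node types they touch. The tiny graph and type names are made up; the paper's framework is parallel and far more efficient than this enumeration.

```python
# Brute-force illustration of typed graphlet counting: count triangles in a
# typed (heterogeneous) graph, keyed by the multiset of node types they touch.
from collections import Counter
from itertools import combinations

edges = {("a1", "b1"), ("b1", "c1"), ("a1", "c1"), ("a1", "b2"), ("b2", "c1")}
node_type = {"a1": "author", "b1": "paper", "b2": "paper", "c1": "venue"}
adj = set(edges) | {(v, u) for u, v in edges}    # undirected adjacency
nodes = sorted(node_type)

typed_triangle_counts = Counter()
for u, v, w in combinations(nodes, 3):
    if (u, v) in adj and (v, w) in adj and (u, w) in adj:
        key = tuple(sorted((node_type[u], node_type[v], node_type[w])))
        typed_triangle_counts[key] += 1

print(dict(typed_triangle_counts))  # e.g. {('author', 'paper', 'venue'): 2}
```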
no code implementations • 26 Sep 2013 • Marc Maier, Katerina Marazopoulou, David Arbour, David Jensen
However, there is no equivalent complete algorithm for learning the structure of relational models, a more expressive generalization of Bayesian networks.