no code implementations • 8 Mar 2023 • Qizhao Chen, Morgane Austern, Vasilis Syrgkanis
Estimating optimal dynamic policies from offline data is a fundamental problem in dynamic decision making.
no code implementations • 17 Feb 2023 • Vasilis Syrgkanis, Ruohan Zhan
Our goal is to be able to evaluate counterfactual adaptive policies after data collection and to estimate structural parameters such as dynamic treatment effects, which can be used for credit assignment (e.g., what was the effect of the first-period action on the final outcome).
no code implementations • 10 Feb 2023 • Andrew Bennett, Nathan Kallus, Xiaojie Mao, Whitney Newey, Vasilis Syrgkanis, Masatoshi Uehara
In this paper, we study nonparametric estimation of instrumental variable (IV) regressions.
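As a point of reference, a classical baseline for NPIV is sieve two-stage least squares; the sketch below (illustrative data-generating process and basis choices, not the estimator developed in the paper) regresses a basis expansion of the endogenous regressor on a basis of the instrument, then fits the outcome on the projection.

```python
# Minimal sieve 2SLS sketch for NPIV: a classical baseline, not the
# paper's estimator. Model: y = h(x) + u with E[u | z] = 0.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                      # instrument
u = rng.normal(size=n)                      # confounding noise
x = 0.8 * z + 0.5 * u                       # endogenous regressor
y = np.sin(x) + u                           # outcome; h(x) = sin(x)

def basis(v, degree=4):
    return np.column_stack([v ** k for k in range(degree + 1)])

Psi, Phi = basis(x), basis(z)               # endogenous and instrument bases
# First stage: project each column of Psi onto the instrument space.
Psi_hat = Phi @ np.linalg.lstsq(Phi, Psi, rcond=None)[0]
# Second stage: regress y on the projected basis.
beta = np.linalg.lstsq(Psi_hat, y, rcond=None)[0]
print("h(1) estimate:", basis(np.array([1.0])) @ beta, "truth:", np.sin(1.0))
```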
1 code implementation • 3 Nov 2022 • Divyat Mahajan, Ioannis Mitliagkas, Brady Neal, Vasilis Syrgkanis
Unlike model selection in machine learning, we cannot use the technique of cross-validation here as we do not observe the counterfactual potential outcome for any data point.
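One family of surrogate criteria in this setting scores a candidate CATE model against a doubly robust pseudo-outcome built from cross-fitted nuisance estimates; the sketch below is a minimal version of that idea (the function name and nuisance inputs are illustrative assumptions, not the paper's recommended metric).

```python
# Score a candidate CATE estimator tau_hat against the doubly robust
# pseudo-outcome, whose conditional mean equals the true CATE under
# unconfoundedness:
#   Y_dr = mu1(X) - mu0(X) + (T - e(X)) / (e(X)(1 - e(X))) * (Y - mu_T(X))
import numpy as np

def dr_score(tau_hat, X, T, Y, mu0, mu1, e):
    """MSE of tau_hat(X) against the DR pseudo-outcome.

    mu0, mu1, e are cross-fitted nuisance predictions aligned with X:
    outcome regressions under control/treatment and the propensity.
    """
    mu_t = np.where(T == 1, mu1, mu0)
    y_dr = mu1 - mu0 + (T - e) / (e * (1 - e)) * (Y - mu_t)
    return np.mean((tau_hat(X) - y_dr) ** 2)
```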
no code implementations • 20 Oct 2022 • Anish Agarwal, Vasilis Syrgkanis
Our work avoids the combinatorial explosion in the number of units that would be required by a vanilla application of prior synthetic control and synthetic intervention methods in such dynamic treatment regime settings.
1 code implementation • 14 Oct 2022 • Vahid Balazadeh, Vasilis Syrgkanis, Rahul G. Krishnan
We propose a new method for partial identification of average treatment effects (ATEs) in general causal graphs using implicit generative models comprising continuous and discrete random variables.
no code implementations • 17 Aug 2022 • Andrew Bennett, Nathan Kallus, Xiaojie Mao, Whitney Newey, Vasilis Syrgkanis, Masatoshi Uehara
In a variety of applications, including nonparametric instrumental variable (NPIV) analysis, proximal causal inference under unmeasured confounding, and missing-not-at-random data with shadow variables, we are interested in inference on a continuous linear functional (e.g., average causal effects) of a nuisance function (e.g., an NPIV regression) defined by conditional moment restrictions.
no code implementations • 3 Jun 2022 • Qizhao Chen, Vasilis Syrgkanis, Morgane Austern
For instance, we show that the stability properties that we propose are satisfied for ensemble bagged estimators, built via sub-sampling without replacement, a popular technique in machine learning practice.
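A minimal sketch of the estimator class in question, bagging by sub-sampling without replacement (base learner, subsample fraction, and ensemble size are illustrative choices):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def subsample_bagged_predict(X, y, X_test, n_estimators=200, frac=0.5, seed=0):
    """Average of trees, each fit on a subsample drawn WITHOUT replacement."""
    rng = np.random.default_rng(seed)
    n = len(X)
    preds = np.zeros((n_estimators, len(X_test)))
    for b in range(n_estimators):
        idx = rng.choice(n, size=int(frac * n), replace=False)  # no replacement
        preds[b] = DecisionTreeRegressor().fit(X[idx], y[idx]).predict(X_test)
    return preds.mean(axis=0)
```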
1 code implementation • 10 Apr 2022 • Kartik Ahuja, Divyat Mahajan, Vasilis Syrgkanis, Ioannis Mitliagkas
In this work, we depart from these assumptions and ask: a) How can we get disentanglement when the auxiliary information does not provide conditional independence over the factors of variation?
no code implementations • 25 Mar 2022 • Victor Chernozhukov, Whitney Newey, Rahul Singh, Vasilis Syrgkanis
We extend the idea of automated debiased machine learning to the dynamic treatment regime and more generally to nested functionals.
no code implementations • 26 Dec 2021 • Victor Chernozhukov, Carlos Cinelli, Whitney Newey, Amit Sharma, Vasilis Syrgkanis
Therefore, simple plausibility judgments on the maximum explanatory power of omitted variables (in explaining treatment and outcome variation) are sufficient to place overall bounds on the size of the bias.
no code implementations • NeurIPS 2021 • Greg Lewis, Vasilis Syrgkanis
We consider the estimation of treatment effects in settings when multiple treatments are assigned over time and treatments can have a causal effect on future outcomes.
no code implementations • NeurIPS 2021 • Morgane Austern, Vasilis Syrgkanis
One of the most commonly used methods for forming confidence intervals is the empirical bootstrap, which is especially expedient when the limiting distribution of the estimator is unknown.
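For concreteness, the empirical bootstrap in its percentile form looks like this (a generic sketch, not tied to the paper's theory):

```python
# Resample with replacement, recompute the statistic, and read off
# percentile confidence intervals from the bootstrap distribution.
import numpy as np

def bootstrap_ci(data, statistic, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(data)
    stats = np.array([statistic(data[rng.integers(0, n, n)])
                      for _ in range(n_boot)])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

x = np.random.default_rng(1).exponential(size=500)
print(bootstrap_ci(x, np.median))   # 95% CI for the median
```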
1 code implementation • 6 Oct 2021 • Victor Chernozhukov, Whitney K. Newey, Victor Quintas-Martinez, Vasilis Syrgkanis
We also propose a Random Forest method which learns a locally linear representation of the Riesz function.
no code implementations • 6 Oct 2021 • Dhruv Rohatgi, Vasilis Syrgkanis
For many inference problems in statistics and econometrics, the unknown parameter is identified by a set of moment conditions.
1 code implementation • 27 Aug 2021 • Amit Sharma, Vasilis Syrgkanis, Cheng Zhang, Emre Kiciman
Estimation of causal effects involves crucial assumptions about the data-generating process, such as directionality of effect, presence of instrumental variables or mediators, and whether all relevant confounders are observed.
1 code implementation • 21 Jul 2021 • Daniel Ngo, Logan Stapleton, Vasilis Syrgkanis, Zhiwei Steven Wu
In rounds, a social planner interacts with a sequence of heterogeneous agents who arrive with their unobserved private type that determines both their prior preferences across the actions (e.g., control and treatment) and their baseline rewards without taking any treatment.
1 code implementation • ICLR 2021 • Tri Dao, Govinda M Kamath, Vasilis Syrgkanis, Lester Mackey
A popular approach to model compression is to train an inexpensive student model to mimic the class probabilities of a highly accurate but cumbersome teacher model.
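A minimal sketch of the soft-target loss behind this approach, assuming teacher and student logits are already computed (the temperature and the numerical floor are illustrative choices):

```python
# Distillation in miniature: the student is trained on the teacher's
# temperature-softened class probabilities rather than the hard labels.
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Cross-entropy between teacher and student soft labels at temperature T."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    return -np.mean((p_teacher * log_p_student).sum(axis=1))
```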
no code implementations • NeurIPS 2021 • Keith Battocchi, Eleanor Dillon, Maggie Hei, Greg Lewis, Miruna Oprescu, Vasilis Syrgkanis
Policy makers typically face the problem of wanting to estimate the long-term effects of novel treatments, while only having historical data of older treatment options.
no code implementations • 12 Mar 2021 • Jann Spiess, Vasilis Syrgkanis
The past years have seen the development and deployment of machine-learning algorithms to estimate personalized treatment-assignment policies from randomized controlled trials.
no code implementations • 30 Dec 2020 • Victor Chernozhukov, Whitney Newey, Rahul Singh, Vasilis Syrgkanis
We provide an adversarial approach to estimating Riesz representers of linear functionals within arbitrary function spaces.
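To see what a Riesz representer buys, consider the ATE functional, where the representer has a known closed form; the paper's contribution is to estimate it adversarially when no such form is available. The sketch below (assuming pre-fit outcome and propensity models, vectorized over their arguments) only illustrates the representer's role in debiasing:

```python
# For the ATE functional E[g(1, X) - g(0, X)], the Riesz representer is
# alpha(T, X) = T/e(X) - (1 - T)/(1 - e(X)).
import numpy as np

def debiased_ate(Y, T, X, g, e):
    """g(t, X): pre-fit outcome regression; e(X): pre-fit propensity."""
    alpha = T / e(X) - (1 - T) / (1 - e(X))          # Riesz representer
    plug_in = g(1, X) - g(0, X)                      # direct functional
    correction = alpha * (Y - g(T, X))               # debiasing term
    return np.mean(plug_in + correction)
```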
no code implementations • 23 Nov 2020 • Morgane Austern, Vasilis Syrgkanis
One of the most commonly used methods for forming confidence intervals for statistical inference is the empirical bootstrap, which is especially expedient when the limiting distribution of the estimator is unknown.
no code implementations • 26 Jul 2020 • Gali Noti, Vasilis Syrgkanis
We consider the problem of bid prediction in repeated auctions and evaluate the performance of econometric methods for learning agents using a dataset from a mainstream sponsored search auction marketplace.
no code implementations • 7 Jul 2020 • Vasilis Syrgkanis, Manolis Zampetakis
We prove that if only $r$ of the $d$ features are relevant for the mean outcome function, then shallow trees built greedily via the CART empirical MSE criterion achieve MSE rates that depend only logarithmically on the ambient dimension $d$.
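A quick empirical illustration of the setting (not the proof): with d = 100 ambient features of which r = 2 are relevant, a shallow, greedily grown CART tree typically splits only on the relevant coordinates.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n, d = 4000, 100                                         # r = 2 relevant features
X = rng.uniform(size=(n, d))
y = X[:, 0] + 2.0 * X[:, 1] + 0.1 * rng.normal(size=n)   # mean depends on 2 of 100

tree = DecisionTreeRegressor(max_depth=4).fit(X, y)      # shallow, greedy CART
used = np.unique(tree.tree_.feature[tree.tree_.feature >= 0])
print("features used by the tree:", used)                # typically [0 1]
```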
1 code implementation • NeurIPS 2020 • Nishanth Dikkala, Greg Lewis, Lester Mackey, Vasilis Syrgkanis
We develop an approach for estimating models described via conditional moment restrictions, with a prototypical application being non-parametric instrumental variable regression.
no code implementations • 17 Feb 2020 • Greg Lewis, Vasilis Syrgkanis
We consider the estimation of treatment effects in settings when multiple treatments are assigned over time and treatments can have a causal effect on future outcomes or the state of the treated unit.
no code implementations • 15 Oct 2019 • Annie Liang, Xiaosheng Mu, Vasilis Syrgkanis
An agent has access to multiple information sources, each of which provides information about a different attribute of an unknown state.
1 code implementation • NeurIPS 2019 • Mert Demirer, Vasilis Syrgkanis, Greg Lewis, Victor Chernozhukov
Our results also apply if the model does not satisfy our semi-parametric form, but rather we measure regret in terms of the best projection of the true value function to this functional space.
2 code implementations • NeurIPS 2019 • Vasilis Syrgkanis, Victor Lei, Miruna Oprescu, Maggie Hei, Keith Battocchi, Greg Lewis
We develop a statistical learning approach to the estimation of heterogeneous effects, reducing the problem to the minimization of an appropriate loss function that depends on a set of auxiliary models (each corresponding to a separate prediction task).
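One concrete instance of such a loss is the residual-on-residual (R-learner-style) objective, where the auxiliary models are cross-fitted estimates of E[Y|X] and E[T|X]; the sketch below is a single illustrative instance, not the paper's full construction:

```python
# CATE model tau minimizes sum_i ((Y_i - m(X_i)) - tau(X_i)(T_i - e(X_i)))^2.
import numpy as np

def r_loss(tau, X, T, Y, m_hat, e_hat):
    """m_hat, e_hat: cross-fitted predictions of the auxiliary models."""
    return np.mean(((Y - m_hat) - tau(X) * (T - e_hat)) ** 2)
```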
2 code implementations • 25 Jan 2019 • Dylan J. Foster, Vasilis Syrgkanis
We provide non-asymptotic excess risk guarantees for statistical learning in a setting where the population risk with respect to which we evaluate the target parameter depends on an unknown nuisance parameter that must be estimated from data.
1 code implementation • 11 Jan 2019 • Khashayar Khosravi, Greg Lewis, Vasilis Syrgkanis
We show that if the intrinsic dimension of the covariate distribution is equal to $d$, then the finite sample estimation error of our estimator is of order $n^{-1/(d+2)}$ and our estimate is $n^{1/(d+2)}$-asymptotically normal, irrespective of $D$.
3 code implementations • 13 Jun 2018 • Denis Nekipelov, Vira Semenova, Vasilis Syrgkanis
This paper proposes a Lasso-type estimator for a high-dimensional sparse parameter identified by a single index conditional moment restriction (CMR).
1 code implementation • 9 Jun 2018 • Miruna Oprescu, Vasilis Syrgkanis, Zhiwei Steven Wu
We provide a consistency rate and establish asymptotic normality for our estimator.
1 code implementation • 19 Mar 2018 • Greg Lewis, Vasilis Syrgkanis
We provide an approach for learning deep neural net representations of models described via conditional moment restrictions.
1 code implementation • 13 Mar 2018 • Jimmy Wu, Diondra Peck, Scott Hsieh, Vandana Dialani, Constance D. Lehman, Bolei Zhou, Vasilis Syrgkanis, Lester Mackey, Genevieve Patterson
This work interprets the internal representations of deep neural networks trained for classification of diseased tissue in 2D mammograms.
2 code implementations • ICML 2018 • Akshay Krishnamurthy, Zhiwei Steven Wu, Vasilis Syrgkanis
This paper studies semiparametric contextual bandits, a generalization of the linear stochastic bandit problem where the reward for an action is modeled as a linear function of known action features confounded by a non-linear action-independent term.
1 code implementation • NeurIPS 2019 • Jonas Mueller, Vasilis Syrgkanis, Matt Taddy
We consider dynamic pricing with many products under an evolving but low-dimensional demand model.
1 code implementation • ICML 2018 • Yash Deshpande, Lester Mackey, Vasilis Syrgkanis, Matt Taddy
Estimators computed from adaptively collected data do not behave like their non-adaptive brethren.
no code implementations • NeurIPS 2017 • Darrell Hoy, Denis Nekipelov, Vasilis Syrgkanis
The notion of the price of anarchy takes a worst-case stance to efficiency analysis, considering instance-independent guarantees of efficiency.
1 code implementation • 3 Nov 2017 • Zhe Feng, Chara Podimata, Vasilis Syrgkanis
We address online learning in complex auction settings, such as sponsored search auctions, where the value of the bidder is unknown to her, evolving in an arbitrary manner and observed only if the bidder wins an allocation.
1 code implementation • ICML 2018 • Lester Mackey, Vasilis Syrgkanis, Ilias Zadik
Double machine learning provides $\sqrt{n}$-consistent estimates of parameters of interest even when high-dimensional or nonparametric nuisance parameters are estimated at an $n^{-1/4}$ rate.
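A minimal cross-fitted, partialling-out version of double ML for a scalar treatment effect (model choices are illustrative):

```python
# Cross-fit the nuisances E[Y|X] and E[T|X], then regress residual on
# residual to recover the treatment coefficient.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

def dml_partialling_out(X, T, Y):
    y_res = Y - cross_val_predict(RandomForestRegressor(), X, Y, cv=5)
    t_res = T - cross_val_predict(RandomForestRegressor(), X, T, cv=5)
    return (t_res @ y_res) / (t_res @ t_res)   # final OLS of residuals
```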
1 code implementation • ICLR 2018 • Constantinos Daskalakis, Andrew Ilyas, Vasilis Syrgkanis, Haoyang Zeng
Moreover, we show that optimistic mirror descent addresses the limit cycling problem in training WGANs.
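The core update is simple: take a gradient step that counts the latest gradient twice and subtracts the previous one. The sketch below applies it to a bilinear min-max toy problem, where plain simultaneous gradient descent-ascent cycles but the optimistic update spirals in (learning rate and initialization are illustrative):

```python
# Optimistic gradient update in its simplest (Euclidean) form:
#   x <- x - lr * (2 * g_t - g_{t-1})
import numpy as np

def omd_bilinear(lr=0.1, steps=500):
    x, y = 1.0, 1.0                  # min over x, max over y of f(x, y) = x * y
    gx_prev, gy_prev = 0.0, 0.0
    for _ in range(steps):
        gx, gy = y, x                # grad_x f = y, grad_y f = x
        x -= lr * (2 * gx - gx_prev)
        y += lr * (2 * gy - gy_prev)
        gx_prev, gy_prev = gx, gy
    return x, y                      # approaches the equilibrium (0, 0)

print(omd_bilinear())
```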
no code implementations • 10 Oct 2017 • Vasilis Syrgkanis, Elie Tamer, Juba Ziani
Given a sample of bids from independent auctions, this paper examines the question of inference on auction fundamentals (e.g., valuation distributions, welfare measures) under weak assumptions on information structure.
no code implementations • NeurIPS 2017 • Robert Chen, Brendan Lucier, Yaron Singer, Vasilis Syrgkanis
We consider robust optimization problems, where the goal is to optimize in the worst case over a class of objective functions.
no code implementations • 12 Apr 2017 • Vasilis Syrgkanis
We consider two-stage estimation with a non-parametric first stage and a generalized method of moments second stage, in a simpler setting than Chernozhukov et al. (2016).
no code implementations • NeurIPS 2017 • Vasilis Syrgkanis
We introduce a new sample complexity measure, which we refer to as the split-sample growth rate.
no code implementations • 18 Mar 2017 • Annie Liang, Xiaosheng Mu, Vasilis Syrgkanis
We consider the problem of optimal dynamic information acquisition from many correlated information sources.
no code implementations • 5 Nov 2016 • Miroslav Dudík, Nika Haghtalab, Haipeng Luo, Robert E. Schapire, Vasilis Syrgkanis, Jennifer Wortman Vaughan
We consider the design of computationally efficient online learning algorithms in an adversarial setting in which the learner has access to an offline optimization oracle.
no code implementations • 26 Jul 2016 • Tim Roughgarden, Vasilis Syrgkanis, Eva Tardos
This survey outlines a general and modular theory for proving approximation guarantees for equilibria of auctions in complex settings.
no code implementations • NeurIPS 2016 • Vasilis Syrgkanis, Haipeng Luo, Akshay Krishnamurthy, Robert E. Schapire
We give an oracle-based algorithm for the adversarial contextual bandit problem, where either contexts are drawn i.i.d.
no code implementations • 24 Feb 2016 • Yishay Mansour, Aleksandrs Slivkins, Vasilis Syrgkanis, Zhiwei Steven Wu
As a key technical tool, we introduce the concept of explorable actions, the actions which some incentive-compatible policy can recommend with non-zero probability.
no code implementations • 8 Feb 2016 • Vasilis Syrgkanis, Akshay Krishnamurthy, Robert E. Schapire
We provide the first oracle efficient sublinear regret algorithms for adversarial versions of the contextual bandit problem.
no code implementations • 4 Nov 2015 • Constantinos Daskalakis, Vasilis Syrgkanis
Our results for XOS valuations are enabled by a novel Follow-The-Perturbed-Leader algorithm for settings where the number of experts is infinite, and the payoff function of the learner is non-linear.
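For finitely many experts with linear payoffs, Follow-The-Perturbed-Leader is a few lines; the paper's contribution is making this template work with infinitely many experts and a non-linear learner payoff. A standard finite-expert sketch (perturbation scale is an illustrative choice):

```python
import numpy as np

def ftpl_choices(payoffs, eta=1.0, seed=0):
    """payoffs: (T, K) array of per-round expert payoffs; returns picks."""
    rng = np.random.default_rng(seed)
    T, K = payoffs.shape
    noise = rng.exponential(scale=eta, size=K)     # one-shot perturbation
    cum = np.zeros(K)
    picks = []
    for t in range(T):
        picks.append(int(np.argmax(cum + noise)))  # follow the perturbed leader
        cum += payoffs[t]
    return picks
```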
no code implementations • NeurIPS 2015 • Vasilis Syrgkanis, Alekh Agarwal, Haipeng Luo, Robert E. Schapire
We show that natural classes of regularized learning algorithms with a form of recency bias achieve faster convergence rates to approximate efficiency and to coarse correlated equilibria in multiplayer normal form games.
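Optimistic Hedge is a canonical example of such recency bias: ordinary Hedge reweights by cumulative losses, while the optimistic variant counts the most recent loss twice, as a forecast of the next one (step size is an illustrative choice):

```python
import numpy as np

def optimistic_hedge(losses, eta=0.1):
    """losses: (T, K) per-round losses; returns the (T, K) play distributions."""
    T, K = losses.shape
    cum, prev = np.zeros(K), np.zeros(K)
    plays = np.zeros((T, K))
    for t in range(T):
        score = cum + prev                    # recency bias: last loss counted twice
        w = np.exp(-eta * (score - score.min()))   # shift for numerical stability
        plays[t] = w / w.sum()
        cum += losses[t]
        prev = losses[t]
    return plays
```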
no code implementations • NeurIPS 2015 • Jason Hartline, Vasilis Syrgkanis, Eva Tardos
Recent price-of-anarchy analyses of games of complete information suggest that coarse correlated equilibria, which characterize outcomes resulting from no-regret learning dynamics, have near-optimal welfare.