Search Results for author: Vasilis Syrgkanis

Found 64 papers, 26 papers with code

Sequential Decision Making with Expert Demonstrations under Unobserved Heterogeneity

1 code implementation 10 Apr 2024 Vahid Balazadeh, Keertana Chidambaram, Viet Nguyen, Rahul G. Krishnan, Vasilis Syrgkanis

We study the problem of online sequential decision-making given auxiliary demonstrations from experts who made their decisions based on unobserved contextual information.

Decision Making, Meta Reinforcement Learning (+3 more)

Regularized DeepIV with Model Selection

no code implementations 7 Mar 2024 Zihao Li, Hui Lan, Vasilis Syrgkanis, Mengdi Wang, Masatoshi Uehara

In this paper, we study nonparametric estimation of instrumental variable (IV) regressions.

Model Selection, Regression

Structure-agnostic Optimality of Doubly Robust Learning for Treatment Effect Estimation

no code implementations 22 Feb 2024 Jikai Jin, Vasilis Syrgkanis

Average treatment effect estimation is the most central problem in causal inference with application to numerous disciplines.

Causal Inference
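
The doubly robust learning analyzed in this entry builds on the classic AIPW construction, which combines outcome regressions with propensity weighting. A minimal cross-fitted sketch in Python (scikit-learn) is below; it is a generic construction with illustrative names, not the paper's specific procedure or analysis.

```python
# Minimal sketch of a cross-fitted doubly robust (AIPW) estimate of the average
# treatment effect; generic construction, names are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.model_selection import KFold

def aipw_ate(X, T, Y, n_splits=5, seed=0):
    """X: (n, d) covariates, T: (n,) binary treatment, Y: (n,) outcome."""
    psi = np.zeros(len(Y))
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        # Nuisance 1: propensity score e(x) = P(T = 1 | X = x)
        e = GradientBoostingClassifier().fit(X[train], T[train]).predict_proba(X[test])[:, 1]
        e = np.clip(e, 0.01, 0.99)
        # Nuisance 2: outcome regressions mu_t(x) = E[Y | T = t, X = x]
        treated, control = train[T[train] == 1], train[T[train] == 0]
        mu1 = GradientBoostingRegressor().fit(X[treated], Y[treated]).predict(X[test])
        mu0 = GradientBoostingRegressor().fit(X[control], Y[control]).predict(X[test])
        # Doubly robust score on the held-out fold
        psi[test] = (mu1 - mu0
                     + T[test] * (Y[test] - mu1) / e
                     - (1 - T[test]) * (Y[test] - mu0) / (1 - e))
    return psi.mean(), psi.std() / np.sqrt(len(Y))  # ATE estimate and standard error
```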

Incentive-Aware Synthetic Control: Accurate Counterfactual Estimation via Incentivized Exploration

no code implementations 26 Dec 2023 Daniel Ngo, Keegan Harris, Anish Agarwal, Vasilis Syrgkanis, Zhiwei Steven Wu

We consider the setting of synthetic control methods (SCMs), a canonical approach used to estimate the treatment effect on the treated in a panel data setting.

Counterfactual, Valid

Adaptive Instrument Design for Indirect Experiments

no code implementations 5 Dec 2023 Yash Chandak, Shiv Shankar, Vasilis Syrgkanis, Emma Brunskill

Indirect experiments provide a valuable framework for estimating treatment effects in situations where conducting randomized control trials (RCTs) is impractical or unethical.

Learning Causal Representations from General Environments: Identifiability and Intrinsic Ambiguity

no code implementations 21 Nov 2023 Jikai Jin, Vasilis Syrgkanis

In this work, we provide the first identifiability results based on data that stem from general environments.

Representation Learning

Causal Q-Aggregation for CATE Model Selection

no code implementations 25 Oct 2023 Hui Lan, Vasilis Syrgkanis

We provide regret rates for the major existing CATE ensembling approaches and propose a new CATE model ensembling approach based on Q-aggregation using the doubly robust loss.

Causal Inference, Decision Making (+1 more)
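
The doubly robust loss that drives the ensembling can be sketched as scoring each candidate CATE model against doubly robust pseudo-outcomes on a validation fold. The sketch below only shows that scoring step with illustrative names; the paper's Q-aggregation assigns convex weights across candidates rather than picking a single winner.

```python
# Sketch: scoring candidate CATE models with the doubly robust loss. psi_val are
# doubly robust pseudo-outcomes on a validation fold (e.g., the AIPW scores above).
import numpy as np

def dr_loss(cate_model, X_val, psi_val):
    """Doubly robust model-selection loss: squared error against DR pseudo-outcomes."""
    return np.mean((psi_val - cate_model.predict(X_val)) ** 2)

def score_cate_candidates(candidates, X_val, psi_val):
    """candidates: dict mapping model name -> fitted CATE model with .predict(X)."""
    losses = {name: dr_loss(m, X_val, psi_val) for name, m in candidates.items()}
    best = min(losses, key=losses.get)
    return best, losses
```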

Source Condition Double Robust Inference on Functionals of Inverse Problems

no code implementations 25 Jul 2023 Andrew Bennett, Nathan Kallus, Xiaojie Mao, Whitney Newey, Vasilis Syrgkanis, Masatoshi Uehara

We consider estimation of parameters defined as linear functionals of solutions to linear inverse problems.

Inference on Optimal Dynamic Policies via Softmax Approximation

1 code implementation 8 Mar 2023 Qizhao Chen, Morgane Austern, Vasilis Syrgkanis

Estimating optimal dynamic policies from offline data is a fundamental problem in dynamic decision making.

Causal Inference, Decision Making (+1 more)

Post-Episodic Reinforcement Learning Inference

no code implementations 17 Feb 2023 Vasilis Syrgkanis, Ruohan Zhan

Our goal is to evaluate counterfactual adaptive policies after data collection and to estimate structural parameters, such as dynamic treatment effects, which can be used for credit assignment (e.g., what was the effect of the first-period action on the final outcome).

Counterfactual, Off-policy Evaluation (+1 more)

Empirical Analysis of Model Selection for Heterogeneous Causal Effect Estimation

1 code implementation 3 Nov 2022 Divyat Mahajan, Ioannis Mitliagkas, Brady Neal, Vasilis Syrgkanis

We study the problem of model selection in causal inference, specifically for the case of conditional average treatment effect (CATE) estimation under binary treatments.

AutoML, Causal Inference (+2 more)

Synthetic Blip Effects: Generalizing Synthetic Controls for the Dynamic Treatment Regime

no code implementations 20 Oct 2022 Anish Agarwal, Vasilis Syrgkanis

Our work avoids the combinatorial explosion in the number of units that would be required by a vanilla application of prior synthetic control and synthetic intervention methods in such dynamic treatment regime settings.

Partial Identification of Treatment Effects with Implicit Generative Models

1 code implementation 14 Oct 2022 Vahid Balazadeh, Vasilis Syrgkanis, Rahul G. Krishnan

We propose a new method for partial identification of average treatment effects (ATEs) in general causal graphs using implicit generative models comprising continuous and discrete random variables.

Inference on Strongly Identified Functionals of Weakly Identified Functions

no code implementations 17 Aug 2022 Andrew Bennett, Nathan Kallus, Xiaojie Mao, Whitney Newey, Vasilis Syrgkanis, Masatoshi Uehara

In a variety of applications, including nonparametric instrumental variable (NPIV) analysis, proximal causal inference under unmeasured confounding, and missing-not-at-random data with shadow variables, we are interested in inference on a continuous linear functional (e.g., average causal effects) of a nuisance function (e.g., NPIV regression) defined by conditional moment restrictions.

Causal Inference, Regression (+1 more)

Debiased Machine Learning without Sample-Splitting for Stable Estimators

no code implementations 3 Jun 2022 Qizhao Chen, Vasilis Syrgkanis, Morgane Austern

For instance, we show that the stability properties that we propose are satisfied for ensemble bagged estimators, built via sub-sampling without replacement, a popular technique in machine learning practice.

BIG-bench Machine Learning

Towards efficient representation identification in supervised learning

1 code implementation 10 Apr 2022 Kartik Ahuja, Divyat Mahajan, Vasilis Syrgkanis, Ioannis Mitliagkas

In this work, we depart from these assumptions and ask: a) How can we get disentanglement when the auxiliary information does not provide conditional independence over the factors of variation?

Disentanglement

Long Story Short: Omitted Variable Bias in Causal Machine Learning

1 code implementation 26 Dec 2021 Victor Chernozhukov, Carlos Cinelli, Whitney Newey, Amit Sharma, Vasilis Syrgkanis

Therefore, simple plausibility judgments on the maximum explanatory power of omitted variables (in explaining treatment and outcome variation) are sufficient to place overall bounds on the size of the bias.

BIG-bench Machine Learning, Causal Inference

Double/Debiased Machine Learning for Dynamic Treatment Effects

no code implementations NeurIPS 2021 Greg Lewis, Vasilis Syrgkanis

We consider the estimation of treatment effects in settings when multiple treatments are assigned over time and treatments can have a causal effect on future outcomes.

BIG-bench Machine Learning

Asymptotics of the Bootstrap via Stability with Applications to Inference with Model Selection

no code implementations NeurIPS 2021 Morgane Austern, Vasilis Syrgkanis

One of the most commonly used methods for forming confidence intervals is the empirical bootstrap, which is especially expedient when the limiting distribution of the estimator is unknown.

BIG-bench Machine Learning, Model Selection

Robust Generalized Method of Moments: A Finite Sample Viewpoint

no code implementations 6 Oct 2021 Dhruv Rohatgi, Vasilis Syrgkanis

For many inference problems in statistics and econometrics, the unknown parameter is identified by a set of moment conditions.

Econometrics, Regression (+1 more)

DoWhy: Addressing Challenges in Expressing and Validating Causal Assumptions

1 code implementation 27 Aug 2021 Amit Sharma, Vasilis Syrgkanis, Cheng Zhang, Emre Kiciman

Estimation of causal effects involves crucial assumptions about the data-generating process, such as directionality of effect, presence of instrumental variables or mediators, and whether all relevant confounders are observed.

Causal Discovery
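
A short example of the DoWhy workflow this entry describes (model, identify, estimate, refute) on synthetic data; treat the exact argument names as indicative, since they can differ across DoWhy versions.

```python
# Hedged sketch of DoWhy's four-step workflow on synthetic data with one observed confounder.
import numpy as np
import pandas as pd
from dowhy import CausalModel

rng = np.random.default_rng(0)
n = 2000
w = rng.normal(size=n)                        # observed confounder
t = (w + rng.normal(size=n) > 0).astype(int)  # treatment influenced by w
y = 2.0 * t + w + rng.normal(size=n)          # outcome with true effect 2.0
df = pd.DataFrame({"w": w, "t": t, "y": y})

model = CausalModel(data=df, treatment="t", outcome="y", common_causes=["w"])
estimand = model.identify_effect(proceed_when_unidentifiable=True)
estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")
refutation = model.refute_estimate(estimand, estimate, method_name="placebo_treatment_refuter")
print(estimate.value)  # should recover an effect close to 2.0
print(refutation)
```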

Incentivizing Compliance with Algorithmic Instruments

1 code implementation 21 Jul 2021 Daniel Ngo, Logan Stapleton, Vasilis Syrgkanis, Zhiwei Steven Wu

In rounds, a social planner interacts with a sequence of heterogeneous agents who each arrive with an unobserved private type that determines both their prior preferences across the actions (e.g., control and treatment) and their baseline rewards without taking any treatment.

Selection bias

Knowledge Distillation as Semiparametric Inference

1 code implementation ICLR 2021 Tri Dao, Govinda M Kamath, Vasilis Syrgkanis, Lester Mackey

A popular approach to model compression is to train an inexpensive student model to mimic the class probabilities of a highly accurate but cumbersome teacher model.

Knowledge Distillation, Model Compression
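
For reference, the vanilla distillation objective this snippet refers to trains the student against temperature-softened teacher probabilities. The NumPy sketch below shows that generic loss only, not the paper's semiparametric estimators.

```python
# Generic soft-label distillation loss: cross-entropy against temperature-softened
# teacher probabilities. Illustrates the setup, not the paper's method.
import numpy as np

def softmax(z, temperature=1.0):
    z = z / temperature
    z = z - z.max(axis=1, keepdims=True)
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student = np.log(softmax(student_logits, temperature) + 1e-12)
    # Average cross-entropy between teacher and student class distributions
    return -np.mean(np.sum(p_teacher * log_p_student, axis=1))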

Estimating the Long-Term Effects of Novel Treatments

no code implementations NeurIPS 2021 Keith Battocchi, Eleanor Dillon, Maggie Hei, Greg Lewis, Miruna Oprescu, Vasilis Syrgkanis

Policy makers typically face the problem of wanting to estimate the long-term effects of novel treatments, while only having historical data of older treatment options.

BIG-bench Machine Learning

Finding Subgroups with Significant Treatment Effects

no code implementations 12 Mar 2021 Jann Spiess, Vasilis Syrgkanis, Victor Yaneng Wang

In this paper, we propose a machine-learning method that is specifically optimized for finding such subgroups in noisy data.

Adversarial Estimation of Riesz Representers

no code implementations 30 Dec 2020 Victor Chernozhukov, Whitney Newey, Rahul Singh, Vasilis Syrgkanis

Furthermore, we use critical radius theory, in place of Donsker theory, to prove asymptotic normality without sample splitting, uncovering a "complexity-rate robustness" condition.

Asymptotics of the Empirical Bootstrap Method Beyond Asymptotic Normality

no code implementations 23 Nov 2020 Morgane Austern, Vasilis Syrgkanis

One of the most commonly used methods for forming confidence intervals for statistical inference is the empirical bootstrap, which is especially expedient when the limiting distribution of the estimator is unknown.

Bid Prediction in Repeated Auctions with Learning

no code implementations 26 Jul 2020 Gali Noti, Vasilis Syrgkanis

We consider the problem of bid prediction in repeated auctions and evaluate the performance of econometric methods for learning agents using a dataset from a mainstream sponsored search auction marketplace.

BIG-bench Machine Learning, Econometrics (+1 more)

Estimation and Inference with Trees and Forests in High Dimensions

no code implementations 7 Jul 2020 Vasilis Syrgkanis, Manolis Zampetakis

We prove that if only $r$ of the $d$ features are relevant for the mean outcome function, then shallow trees built greedily via the CART empirical MSE criterion achieve MSE rates that depend only logarithmically on the ambient dimension $d$.

Regression, Valid (+1 more)

Minimax Estimation of Conditional Moment Models

1 code implementation NeurIPS 2020 Nishanth Dikkala, Greg Lewis, Lester Mackey, Vasilis Syrgkanis

We develop an approach for estimating models described via conditional moment restrictions, with a prototypical application being non-parametric instrumental variable regression.
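
Schematically, the conditional moment restriction $\mathbb{E}[y - h(x) \mid z] = 0$ is turned into a minimax criterion over a class of test functions; a regularized version of the objective (penalties and norms vary with the function classes) takes the form

$$\hat{h} \in \arg\min_{h \in \mathcal{H}} \; \sup_{f \in \mathcal{F}} \; \mathbb{E}_n\big[(y - h(x))\, f(z)\big] - \lambda \|f\|_{\mathcal{F}}^2 .$$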

Double/Debiased Machine Learning for Dynamic Treatment Effects via g-Estimation

no code implementations 17 Feb 2020 Greg Lewis, Vasilis Syrgkanis

We consider the estimation of treatment effects in settings when multiple treatments are assigned over time and treatments can have a causal effect on future outcomes or the state of the treated unit.

BIG-bench Machine Learning, Model Selection (+1 more)

Dynamically Aggregating Diverse Information

no code implementations 15 Oct 2019 Annie Liang, Xiaosheng Mu, Vasilis Syrgkanis

An agent has access to multiple information sources, each of which provides information about a different attribute of an unknown state.

Attribute

Machine Learning Estimation of Heterogeneous Treatment Effects with Instruments

2 code implementations NeurIPS 2019 Vasilis Syrgkanis, Victor Lei, Miruna Oprescu, Maggie Hei, Keith Battocchi, Greg Lewis

We develop a statistical learning approach to the estimation of heterogeneous effects, reducing the problem to the minimization of an appropriate loss function that depends on a set of auxiliary models (each corresponding to a separate prediction task).

BIG-bench Machine Learning

Semi-Parametric Efficient Policy Learning with Continuous Actions

1 code implementation NeurIPS 2019 Mert Demirer, Vasilis Syrgkanis, Greg Lewis, Victor Chernozhukov

Our results also apply if the model does not satisfy our semi-parametric form, but rather we measure regret in terms of the best projection of the true value function to this functional space.

Off-policy evaluation

Orthogonal Statistical Learning

3 code implementations 25 Jan 2019 Dylan J. Foster, Vasilis Syrgkanis

We provide non-asymptotic excess risk guarantees for statistical learning in a setting where the population risk with respect to which we evaluate the target parameter depends on an unknown nuisance parameter that must be estimated from data.

Domain Adaptation

Non-Parametric Inference Adaptive to Intrinsic Dimension

1 code implementation 11 Jan 2019 Khashayar Khosravi, Greg Lewis, Vasilis Syrgkanis

We show that if the intrinsic dimension of the covariate distribution is equal to $d$, then the finite sample estimation error of our estimator is of order $n^{-1/(d+2)}$ and our estimate is $n^{1/(d+2)}$-asymptotically normal, irrespective of $D$.

Regularized Orthogonal Machine Learning for Nonlinear Semiparametric Models

3 code implementations 13 Jun 2018 Denis Nekipelov, Vira Semenova, Vasilis Syrgkanis

This paper proposes a Lasso-type estimator for a high-dimensional sparse parameter identified by a single index conditional moment restriction (CMR).

BIG-bench Machine Learning

Orthogonal Random Forest for Causal Inference

1 code implementation 9 Jun 2018 Miruna Oprescu, Vasilis Syrgkanis, Zhiwei Steven Wu

We provide a consistency rate and establish asymptotic normality for our estimator.

Causal Inference

Adversarial Generalized Method of Moments

1 code implementation 19 Mar 2018 Greg Lewis, Vasilis Syrgkanis

We provide an approach for learning deep neural net representations of models described via conditional moment restrictions.

Causal Inference, Clustering

Semiparametric Contextual Bandits

2 code implementations ICML 2018 Akshay Krishnamurthy, Zhiwei Steven Wu, Vasilis Syrgkanis

This paper studies semiparametric contextual bandits, a generalization of the linear stochastic bandit problem where the reward for an action is modeled as a linear function of known action features confounded by a non-linear action-independent term.

Multi-Armed Bandits

Welfare Guarantees from Data

no code implementations NeurIPS 2017 Darrell Hoy, Denis Nekipelov, Vasilis Syrgkanis

The notion of the price of anarchy takes a worst-case stance to efficiency analysis, considering instance-independent guarantees of efficiency.

Econometrics

Learning to Bid Without Knowing your Value

1 code implementation 3 Nov 2017 Zhe Feng, Chara Podimata, Vasilis Syrgkanis

We address online learning in complex auction settings, such as sponsored search auctions, where the value of the bidder is unknown to her, evolving in an arbitrary manner and observed only if the bidder wins an allocation.

Orthogonal Machine Learning: Power and Limitations

1 code implementation ICML 2018 Lester Mackey, Vasilis Syrgkanis, Ilias Zadik

Double machine learning provides $\sqrt{n}$-consistent estimates of parameters of interest even when high-dimensional or nonparametric nuisance parameters are estimated at an $n^{-1/4}$ rate.

2k, BIG-bench Machine Learning (+2 more)
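
The headline rates can be illustrated with the textbook double ML recipe for the partially linear model $Y = \theta_0 T + g_0(X) + \epsilon$: cross-fit the nuisances $\mathbb{E}[Y \mid X]$ and $\mathbb{E}[T \mid X]$, then regress residuals on residuals. The sketch below is that generic first-order recipe with illustrative names, not the paper's higher-order orthogonal constructions.

```python
# Minimal sketch of double/orthogonal ML for a partially linear model:
# cross-fit E[Y|X] and E[T|X], then estimate theta from residual-on-residual regression.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def dml_theta(X, T, Y, n_splits=5, seed=0):
    res_y = np.zeros(len(Y))
    res_t = np.zeros(len(T))
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        res_y[test] = Y[test] - RandomForestRegressor().fit(X[train], Y[train]).predict(X[test])
        res_t[test] = T[test] - RandomForestRegressor().fit(X[train], T[train]).predict(X[test])
    # Orthogonal moment: theta = sum(res_t * res_y) / sum(res_t^2)
    return np.sum(res_t * res_y) / np.sum(res_t ** 2)
```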

Training GANs with Optimism

1 code implementation ICLR 2018 Constantinos Daskalakis, Andrew Ilyas, Vasilis Syrgkanis, Haoyang Zeng

Moreover, we show that optimistic mirror descent addresses the limit cycling problem in training WGANs.
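
The optimistic update can be illustrated on a toy bilinear game, where plain simultaneous gradient descent/ascent cycles or diverges while the optimistic variant (the Euclidean special case of optimistic mirror descent) spirals into the equilibrium. This is an illustrative sketch, not the paper's GAN training code.

```python
# Optimistic gradient steps on the bilinear game min_x max_y x*y.
import numpy as np

eta = 0.1
x, y = 1.0, 1.0
gx_prev, gy_prev = 0.0, 0.0
for t in range(200):
    gx, gy = y, x                      # grad_x(x*y) = y, grad_y(x*y) = x
    x -= eta * (2 * gx - gx_prev)      # optimistic descent step for the minimizer
    y += eta * (2 * gy - gy_prev)      # optimistic ascent step for the maximizer
    gx_prev, gy_prev = gx, gy
print(x, y)  # both spiral in toward 0, the equilibrium of the game
```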

Inference on Auctions with Weak Assumptions on Information

no code implementations 10 Oct 2017 Vasilis Syrgkanis, Elie Tamer, Juba Ziani

Given a sample of bids from independent auctions, this paper examines the question of inference on auction fundamentals (e.g., valuation distributions, welfare measures) under weak assumptions on the information structure.

Counterfactual, Econometrics

Robust Optimization for Non-Convex Objectives

no code implementations NeurIPS 2017 Robert Chen, Brendan Lucier, Yaron Singer, Vasilis Syrgkanis

We consider robust optimization problems, where the goal is to optimize in the worst case over a class of objective functions.

Bayesian Optimization, General Classification

A Proof of Orthogonal Double Machine Learning with $Z$-Estimators

no code implementations 12 Apr 2017 Vasilis Syrgkanis

We consider two stage estimation with a non-parametric first stage and a generalized method of moments second stage, in a simpler setting than (Chernozhukov et al. 2016).

BIG-bench Machine Learning

A Sample Complexity Measure with Applications to Learning Optimal Auctions

no code implementations NeurIPS 2017 Vasilis Syrgkanis

We introduce a new sample complexity measure, which we refer to as the split-sample growth rate.

Optimal and Myopic Information Acquisition

no code implementations 18 Mar 2017 Annie Liang, Xiaosheng Mu, Vasilis Syrgkanis

We consider the problem of optimal dynamic information acquisition from many correlated information sources.

Oracle-Efficient Online Learning and Auction Design

no code implementations 5 Nov 2016 Miroslav Dudík, Nika Haghtalab, Haipeng Luo, Robert E. Schapire, Vasilis Syrgkanis, Jennifer Wortman Vaughan

We consider the design of computationally efficient online learning algorithms in an adversarial setting in which the learner has access to an offline optimization oracle.

The Price of Anarchy in Auctions

no code implementations 26 Jul 2016 Tim Roughgarden, Vasilis Syrgkanis, Eva Tardos

This survey outlines a general and modular theory for proving approximation guarantees for equilibria of auctions in complex settings.

Bayesian Exploration: Incentivizing Exploration in Bayesian Games

no code implementations 24 Feb 2016 Yishay Mansour, Aleksandrs Slivkins, Vasilis Syrgkanis, Zhiwei Steven Wu

As a key technical tool, we introduce the concept of explorable actions, the actions which some incentive-compatible policy can recommend with non-zero probability.

Efficient Algorithms for Adversarial Contextual Learning

no code implementations 8 Feb 2016 Vasilis Syrgkanis, Akshay Krishnamurthy, Robert E. Schapire

We provide the first oracle-efficient sublinear-regret algorithms for adversarial versions of the contextual bandit problem.

Combinatorial Optimization

Learning in Auctions: Regret is Hard, Envy is Easy

no code implementations 4 Nov 2015 Constantinos Daskalakis, Vasilis Syrgkanis

Our results for XOS valuations are enabled by a novel Follow-The-Perturbed-Leader algorithm for settings where the number of experts is infinite, and the payoff function of the learner is non-linear.

Fast Convergence of Regularized Learning in Games

no code implementations NeurIPS 2015 Vasilis Syrgkanis, Alekh Agarwal, Haipeng Luo, Robert E. Schapire

We show that natural classes of regularized learning algorithms with a form of recency bias achieve faster convergence rates to approximate efficiency and to coarse correlated equilibria in multiplayer normal form games.
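
A concrete instance of such a recency-biased learner is Optimistic Hedge, where the most recent loss vector is counted one extra time when forming the exponential weights. The sketch below is a minimal, illustrative version (step size and data layout are assumptions, not taken from the paper).

```python
# Sketch of Optimistic Hedge: exponential weights with the last loss counted twice.
import numpy as np

def optimistic_hedge(losses, eta=0.1):
    """losses: (T, n_actions) array of per-round loss vectors (oblivious stream)."""
    n = losses.shape[1]
    cum, last = np.zeros(n), np.zeros(n)
    plays = []
    for loss in losses:
        logits = -eta * (cum + last)   # recency bias: last loss counted one extra time
        p = np.exp(logits - logits.max())
        p /= p.sum()
        plays.append(p)                # distribution played this round
        cum += loss
        last = loss
    return np.array(plays)
```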

No-Regret Learning in Bayesian Games

no code implementations NeurIPS 2015 Jason Hartline, Vasilis Syrgkanis, Eva Tardos

Recent price-of-anarchy analyses of games of complete information suggest that coarse correlated equilibria, which characterize outcomes resulting from no-regret learning dynamics, have near-optimal welfare.
