1 code implementation • 3 Feb 2025 • Sadegh Shirani, Yuwei Luo, William Overman, Ruoxuan Xiong, Mohsen Bayati
In experimental settings with network interference, a unit's treatment can influence the outcomes of other units, complicating both the estimation of causal effects and the validation of those estimates.
no code implementations • 30 Dec 2024 • Shima Nassiri, Mohsen Bayati, Joe Cooprider
To address this, we propose a two-phase approach: first using nearest neighbor matching based on unit covariates to select similar control units, then applying supervised learning methods suitable for high-dimensional data to estimate counterfactual outcomes.
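A minimal sketch of the two-phase idea described above, with the covariate model, neighborhood size, and ridge-regularized outcome model all chosen for illustration (they are assumptions, not the paper's actual specification):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 control units, 1 treated unit, 10 covariates.
n_control, d = 200, 10
X_control = rng.normal(size=(n_control, d))
beta = rng.normal(size=d)
y_control = X_control @ beta + 0.1 * rng.normal(size=n_control)
x_treated = rng.normal(size=d)

# Phase 1: nearest-neighbor matching on covariates -> keep the K most
# similar control units.
K = 50
dists = np.linalg.norm(X_control - x_treated, axis=1)
matched = np.argsort(dists)[:K]

# Phase 2: fit a (ridge-regularized) outcome model on the matched controls
# and predict the treated unit's counterfactual (untreated) outcome.
lam = 1e-2
Xm, ym = X_control[matched], y_control[matched]
w = np.linalg.solve(Xm.T @ Xm + lam * np.eye(d), Xm.T @ ym)
y_counterfactual = float(x_treated @ w)
```

In this linear toy instance the prediction lands close to the treated unit's true untreated outcome; in practice phase 2 would use a supervised learner suited to high-dimensional data.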
no code implementations • 1 Nov 2024 • Mohsen Bayati, Yuwei Luo, William Overman, Sadegh Shirani, Ruoxuan Xiong
Our estimator draws on information from the sample mean and variance of unit outcomes and treatments over time, enabling efficient use of observed data to estimate the evolution of the system state.
no code implementations • 26 Jun 2024 • William Overman, Jacqueline Jil Vallon, Mohsen Bayati
Specifically, we develop a general procedure for converting queries for testing a given property $\mathcal{P}$ to a collection of loss functions suitable for use in a conformal risk control algorithm.
no code implementations • 16 Mar 2024 • Junyu Cao, Mohsen Bayati
The two-stage framework first learns low-dimensional representations from noisy-labeled data via an SL procedure and then uses human comparisons to improve the model alignment.
no code implementations • 14 Nov 2023 • Sadegh Shirani, Mohsen Bayati
It is tailored for multi-period experiments and is particularly effective in settings with many units and prevalent network interference.
no code implementations • 26 Jun 2023 • Yuwei Luo, Mohsen Bayati
This methodology enables us to formulate an instance-dependent frequentist regret bound, which incorporates the geometric information, for a broad class of base algorithms, including Greedy, OFUL, and Thompson sampling.
no code implementations • 1 Oct 2022 • Mohsen Bayati, Junyu Cao, Wanning Chen
Next, we design two-phase bandit algorithms that first use subsampling and low-rank matrix estimation to obtain a substantially smaller targeted set of products and then apply a UCB procedure on the target products to find the best one.
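The two-phase structure can be illustrated with a small simulation; the low-rank reward model, the rescaled zero-fill estimator, the size of the targeted set, and the UCB bonus are all illustrative assumptions rather than the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mean-reward matrix M (users x products) of rank 2.
n_users, n_products, rank = 30, 100, 2
M = rng.normal(size=(n_users, rank)) @ rng.normal(size=(n_products, rank)).T

# Phase 1: observe a random subsample of noisy entries, then estimate M by
# inverse-propensity rescaled zero-filling plus a rank-2 truncated SVD.
p_obs = 0.3
mask = rng.random((n_users, n_products)) < p_obs
Y = np.where(mask, M + 0.1 * rng.normal(size=M.shape), 0.0) / p_obs
u, s, vt = np.linalg.svd(Y, full_matrices=False)
M_hat = (u[:, :rank] * s[:rank]) @ vt[:rank]

# Keep a small targeted set: the 5 products with highest estimated mean reward.
target = np.argsort(M_hat.mean(axis=0))[-5:]

# Phase 2: standard UCB over the targeted products only.
mu = M.mean(axis=0)[target]
T = 2000
counts, sums = np.zeros(len(target)), np.zeros(len(target))
for t in range(T):
    if t < len(target):
        a = t  # pull each targeted arm once
    else:
        a = int(np.argmax(sums / counts + np.sqrt(2 * np.log(t + 1) / counts)))
    counts[a] += 1
    sums[a] += mu[a] + 0.1 * rng.normal()
best = int(target[int(np.argmax(counts))])
```

The point of the design is that phase 2 runs a bandit over 5 products rather than 100, after matrix estimation has discarded the clearly suboptimal ones.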
no code implementations • 20 Jun 2022 • Mohamad Kazem Shirani Faradonbeh, Mohamad Sadegh Shirani Faradonbeh, Mohsen Bayati
To the best of our knowledge, this is the first such result for Thompson sampling in a diffusion process control problem.
no code implementations • 21 Oct 2021 • Wanning Chen, Mohsen Bayati
Utilizing this observation, we introduce a new optimization problem to select a weight matrix that minimizes the upper bound on the prediction error.
no code implementations • 16 Feb 2021 • Nima Hamidi, Mohsen Bayati
The elliptical potential lemma is a key tool for quantifying uncertainty in estimating parameters of the reward function, but it requires the noise and the prior distributions to be Gaussian.
1 code implementation • NeurIPS 2020 • Mohsen Bayati, Nima Hamidi, Ramesh Johari, Khashayar Khosravi
We study the structure of regret-minimizing policies in the {\em many-armed} Bayesian multi-armed bandit problem: in particular, with $k$ the number of arms and $T$ the time horizon, we consider the case where $k \geq \sqrt{T}$.
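A subsampling policy of the kind studied in this regime can be sketched directly; the Uniform(0,1) prior on arm means, the choice of exactly $\sqrt{T}$ subsampled arms, and plain UCB1 on the subsample are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical many-armed instance: k >= sqrt(T) Bernoulli arms.
T = 10_000
k = 500  # sqrt(T) = 100, so k >= sqrt(T)
means = rng.random(k)

# Subsampling policy: discard all but ~sqrt(T) randomly chosen arms,
# then run UCB1 on that much smaller set.
m = int(np.sqrt(T))
chosen = rng.choice(k, size=m, replace=False)

counts, sums = np.zeros(m), np.zeros(m)
regret = 0.0
best_mean = means.max()
for t in range(T):
    if t < m:
        a = t  # initial round robin over the subsample
    else:
        a = int(np.argmax(sums / counts + np.sqrt(2 * np.log(t + 1) / counts)))
    counts[a] += 1
    sums[a] += rng.binomial(1, means[chosen[a]])
    regret += best_mean - means[chosen[a]]
```

The intuition is that with $k \geq \sqrt{T}$ arms and uniform-like means, a random subsample of $\sqrt{T}$ arms already contains a near-optimal arm with high probability, so nothing is lost by never touching the rest.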
no code implementations • 11 Jun 2020 • Nima Hamidi, Mohsen Bayati
This paper studies the stochastic linear bandit problem, where a decision-maker chooses actions from possibly time-dependent sets of vectors in $\mathbb{R}^d$ and receives noisy rewards.
no code implementations • 26 Feb 2020 • Carolyn Kim, Mohsen Bayati
We analyze alternating minimization for column space recovery of a partially observed, approximately low rank matrix with a growing number of columns and a fixed budget of observations per column.
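The setting above can be sketched with a basic alternating-minimization loop; the problem sizes, the per-column budget, and the random initialization are illustrative assumptions, and the exactly-low-rank noiseless matrix is a simplification of the "approximately low rank" case:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical low-rank matrix with many columns and a fixed per-column budget.
n_rows, n_cols, rank, budget = 50, 400, 3, 15
U_true, _ = np.linalg.qr(rng.normal(size=(n_rows, rank)))
M = U_true @ rng.normal(size=(rank, n_cols))

# Observe `budget` random entries of each column.
mask = np.zeros((n_rows, n_cols), dtype=bool)
for j in range(n_cols):
    mask[rng.choice(n_rows, size=budget, replace=False), j] = True

# Alternating minimization: given U, each column's coefficients solve a tiny
# least-squares problem on that column's observed rows; then U is refit row
# by row and re-orthonormalized. The observed-entry loss, measured after each
# V-update, is non-increasing.
U, _ = np.linalg.qr(rng.normal(size=(n_rows, rank)))
losses = []
for _ in range(30):
    V = np.zeros((rank, n_cols))
    for j in range(n_cols):
        rows = mask[:, j]
        V[:, j], *_ = np.linalg.lstsq(U[rows], M[rows, j], rcond=None)
    losses.append(float(np.sum(((U @ V - M)[mask]) ** 2)))
    for i in range(n_rows):
        cols = mask[i]
        U[i], *_ = np.linalg.lstsq(V[:, cols].T, M[i, cols], rcond=None)
    U, _ = np.linalg.qr(U)
```

The object of interest is the column space spanned by `U`, not the individual column reconstructions, which is what makes the fixed-budget, growing-columns regime tractable.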
2 code implementations • 24 Feb 2020 • Mohsen Bayati, Nima Hamidi, Ramesh Johari, Khashayar Khosravi
This finding diverges from the notion of free exploration, which relates to covariate variation, as recently discussed in the contextual bandit literature.
no code implementations • 12 Feb 2020 • Nima Hamidi, Mohsen Bayati
First, our new notion of optimism in expectation gives rise to a new algorithm, called sieved greedy (SG) that reduces the overexploration problem in OFUL.
no code implementations • NeurIPS 2019 • Nima Hamidi, Mohsen Bayati, Kapil Gupta
We consider the k-armed stochastic contextual bandit problem with d dimensional features, when both k and d can be large.
1 code implementation • 9 Nov 2019 • Ruoxuan Xiong, Susan Athey, Mohsen Bayati, Guido Imbens
Next, we study an adaptive experimental design problem, where both the decision to continue the experiment and treatment assignment decisions are updated after each period's data is collected.
1 code implementation • 18 Apr 2019 • Nima Hamidi, Mohsen Bayati
In this paper, we study trace regression, in which a matrix of parameters $B^*$ is estimated via the convex relaxation of a rank-regularized regression or via regularized non-convex optimization.
no code implementations • 24 Mar 2019 • Susan Athey, Mohsen Bayati, Guido Imbens, Zhaonan Qu
This paper studies a panel data setting where the goal is to estimate causal effects of an intervention by predicting the counterfactual values of outcomes for treated units, had they not received the treatment.
2 code implementations • 27 Oct 2017 • Susan Athey, Mohsen Bayati, Nikolay Doudchenko, Guido Imbens, Khashayar Khosravi
In this paper, we study methods for estimating causal effects in settings with panel data, where some units are exposed to a treatment during some periods and the goal is to estimate counterfactual (untreated) outcomes for the treated unit/period combinations.
Statistics Theory • Econometrics
1 code implementation • 28 Apr 2017 • Hamsa Bastani, Mohsen Bayati, Khashayar Khosravi
We prove that this algorithm is rate optimal without any additional assumptions on the context distribution or the number of arms.
no code implementations • NeurIPS 2016 • Murat A. Erdogdu, Lee H. Dicker, Mohsen Bayati
We study the problem of efficiently estimating the coefficients of generalized linear models (GLMs) in the large-scale setting where the number of observations $n$ is much larger than the number of predictors $p$, i.e. $n \gg p \gg 1$.
no code implementations • 21 Nov 2016 • Murat A. Erdogdu, Mohsen Bayati, Lee H. Dicker
Using this relation, we design an algorithm that achieves the same accuracy as the empirical risk minimizer through iterations that attain up to a cubic convergence rate, and that are cheaper than any batch optimization algorithm by at least a factor of $\mathcal{O}(p)$.
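A generic subsampled-Newton sketch conveys the flavor of such iterations: full-data gradients, but a Hessian formed on a random row subsample, so each step is much cheaper than a batch Newton step. This is an illustration under assumed logistic-model data, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical large-n logistic regression (n >> p).
n, p = 5000, 10
X = rng.normal(size=(n, p))
beta_true = rng.normal(size=p) / np.sqrt(p)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

beta = np.zeros(p)
sub = 500  # Hessian subsample size, far smaller than n
for _ in range(25):
    mu = sigmoid(X @ beta)
    grad = X.T @ (mu - y) / n            # full-data gradient
    idx = rng.choice(n, size=sub, replace=False)
    w = mu[idx] * (1.0 - mu[idx])        # per-row GLM curvature weights
    H = (X[idx].T * w) @ X[idx] / sub + 1e-6 * np.eye(p)
    beta -= np.linalg.solve(H, grad)     # Newton step with subsampled Hessian

grad_norm = float(np.linalg.norm(X.T @ (sigmoid(X @ beta) - y) / n))
```

Because the Hessian is $p \times p$ but built from only `sub` rows, the per-iteration cost is dominated by the full-data gradient, which is where the claimed savings over batch optimization come from.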
no code implementations • 25 Apr 2016 • Sheng Qiang, Mohsen Bayati
In particular, we assume that the firm knows the expected demand under a particular price from historical data, and in each period, before setting the price, the firm has access to extra information (demand covariates) which may be predictive of the demand.
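A minimal greedy iterated-least-squares pricing loop illustrates this setting; the linear demand model, its coefficients, the initial exploratory periods, and the price clipping are all assumptions made for the sketch:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical linear demand with one covariate: D = a + b*p + c*z + noise.
a_true, b_true, c_true = 10.0, -2.0, 1.5

def demand(p, z):
    return a_true + b_true * p + c_true * z + 0.1 * rng.normal()

history_X, history_d = [], []
# A few exploratory periods with random prices keep the regression well posed.
for _ in range(10):
    z, p = rng.normal(), rng.uniform(1.0, 4.0)
    history_X.append([1.0, p, z])
    history_d.append(demand(p, z))

revenue = 0.0
for _ in range(200):
    z = rng.normal()
    # Refit (a, b, c) on all past (price, covariate, demand) observations.
    a, b, c = np.linalg.lstsq(np.array(history_X),
                              np.array(history_d), rcond=None)[0]
    # Myopic price maximizing p * (a + b*p + c*z), i.e. p = -(a + c*z)/(2b).
    p = -(a + c * z) / (2 * b) if b < 0 else 1.0
    p = float(np.clip(p, 0.5, 5.0))
    d = demand(p, z)
    revenue += p * d
    history_X.append([1.0, p, z])
    history_d.append(d)
```

The covariate $z$ plays the role of the "extra information" available before each pricing decision; the firm conditions its myopic price on it every period.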
no code implementations • NeurIPS 2013 • Mohsen Bayati, Murat A. Erdogdu, Andrea Montanari
In this context, we develop new estimators for the $\ell_2$ estimation risk $\|\hat{\theta}-\theta_0\|_2$ and the variance of the noise.
no code implementations • NeurIPS 2010 • Mohsen Bayati, José Pereira, Andrea Montanari
We consider the problem of learning a coefficient vector $x_0$ from noisy linear observations $y = A x_0 + w$.
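The observation model can be instantiated with a small sparse-recovery simulation. Note the paper itself analyzes the approximate message passing algorithm; the plain ISTA solver below is just a simple baseline for the same LASSO problem, with all problem sizes and the penalty level assumed:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical instance: k-sparse x0, Gaussian sensing matrix A, y = A x0 + w.
n, N, k = 250, 500, 25
A = rng.normal(size=(n, N)) / np.sqrt(n)
x0 = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x0[support] = rng.normal(size=k)
y = A @ x0 + 0.01 * rng.normal(size=n)

lam = 0.05
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth part

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_obj(x):
    return 0.5 * np.sum((y - A @ x) ** 2) + lam * np.sum(np.abs(x))

# ISTA: gradient step on the quadratic term, then soft thresholding.
x = np.zeros(N)
obj0 = lasso_obj(x)
for _ in range(500):
    x = soft(x + A.T @ (y - A @ x) / L, lam / L)
rel_err = float(np.linalg.norm(x - x0) / np.linalg.norm(x0))
```

With step size $1/L$ the LASSO objective is non-increasing across ISTA iterations, and in this low-noise instance the recovered vector is close to $x_0$.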