Search Results for author: Mohsen Bayati

Found 27 papers, 7 papers with code

Can We Validate Counterfactual Estimations in the Presence of General Network Interference?

1 code implementation 3 Feb 2025 Sadegh Shirani, Yuwei Luo, William Overman, Ruoxuan Xiong, Mohsen Bayati

In experimental settings with network interference, a unit's treatment can influence the outcomes of other units, challenging both causal effect estimation and its validation.

Causal Inference counterfactual

Post Launch Evaluation of Policies in a High-Dimensional Setting

no code implementations 30 Dec 2024 Shima Nassiri, Mohsen Bayati, Joe Cooprider

To address this, we propose a two-phase approach: first using nearest neighbor matching based on unit covariates to select similar control units, then applying supervised learning methods suitable for high-dimensional data to estimate counterfactual outcomes.

counterfactual
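For concreteness, here is a minimal sketch of the two-phase approach described above, assuming scikit-learn and synthetic data; the neighbor count, matching covariates, and choice of gradient boosting are illustrative assumptions rather than the paper's exact specification.

```python
# Hypothetical sketch of a two-phase counterfactual estimator:
# (1) nearest-neighbor matching on covariates to pick control units,
# (2) a supervised model fit on the matched controls to predict
#     counterfactual outcomes for treated units.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_treated, n_control, d = 50, 500, 20
X_treated = rng.normal(size=(n_treated, d))
X_control = rng.normal(size=(n_control, d))
y_control = X_control @ rng.normal(size=d) + rng.normal(size=n_control)

# Phase 1: for each treated unit, keep its k nearest control units.
k = 10
nn = NearestNeighbors(n_neighbors=k).fit(X_control)
_, idx = nn.kneighbors(X_treated)
matched = np.unique(idx.ravel())

# Phase 2: fit a high-dimensional-friendly learner on the matched controls
# and predict counterfactual (untreated) outcomes for the treated units.
model = GradientBoostingRegressor().fit(X_control[matched], y_control[matched])
y_counterfactual = model.predict(X_treated)
```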

Higher-Order Causal Message Passing for Experimentation with Complex Interference

no code implementations 1 Nov 2024 Mohsen Bayati, Yuwei Luo, William Overman, Sadegh Shirani, Ruoxuan Xiong

Our estimator draws on information from the sample mean and variance of unit outcomes and treatments over time, enabling efficient use of observed data to estimate the evolution of the system state.

Decision Making

Aligning Model Properties via Conformal Risk Control

no code implementations 26 Jun 2024 William Overman, Jacqueline Jil Vallon, Mohsen Bayati

Specifically, we develop a general procedure for converting queries for testing a given property $\mathcal{P}$ to a collection of loss functions suitable for use in a conformal risk control algorithm.

model
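The conformal risk control step that such loss functions feed into has a standard form; the sketch below shows it on a toy property-violation loss, which is purely illustrative and not the paper's construction for a given property $\mathcal{P}$.

```python
# Schematic conformal risk control step: given per-example losses
# L_i(lambda) that are nonincreasing in lambda and bounded by B,
# pick the smallest lambda whose inflated empirical risk is <= alpha.
# The property-to-loss conversion below (a simple violation check)
# is an illustrative assumption, not the paper's construction.
import numpy as np

def crc_threshold(losses_fn, lambdas, n, alpha, B=1.0):
    """losses_fn(lam) -> array of n calibration losses in [0, B]."""
    for lam in sorted(lambdas):
        risk = (losses_fn(lam).sum() + B) / (n + 1)
        if risk <= alpha:
            return lam
    return max(lambdas)  # fall back to the most conservative setting

# Toy example: loss is 1 if a model's prediction violates the tested
# property by more than lambda, else 0.
rng = np.random.default_rng(1)
violations = rng.exponential(scale=0.3, size=200)  # calibration violations
losses = lambda lam: (violations > lam).astype(float)
lam_hat = crc_threshold(losses, np.linspace(0, 2, 101), n=200, alpha=0.1)
```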

A Probabilistic Approach for Model Alignment with Human Comparisons

no code implementations 16 Mar 2024 Junyu Cao, Mohsen Bayati

The two-stage framework first learns low-dimensional representations from noisily labeled data via a supervised learning (SL) procedure and then uses human comparisons to improve the model alignment.

Causal Message Passing for Experiments with Unknown and General Network Interference

no code implementations 14 Nov 2023 Sadegh Shirani, Mohsen Bayati

It is tailored for multi-period experiments and is particularly effective in settings with many units and prevalent network interference.

Experimental Design

Geometry-Aware Approaches for Balancing Performance and Theoretical Guarantees in Linear Bandits

no code implementations 26 Jun 2023 Yuwei Luo, Mohsen Bayati

This methodology enables us to formulate an instance-dependent frequentist regret bound, which incorporates the geometric information, for a broad class of base algorithms, including Greedy, OFUL, and Thompson sampling.

Decision Making Thompson Sampling

Speed Up the Cold-Start Learning in Two-Sided Bandits with Many Arms

no code implementations 1 Oct 2022 Mohsen Bayati, Junyu Cao, Wanning Chen

Next, we design two-phase bandit algorithms that first use subsampling and low-rank matrix estimation to obtain a substantially smaller targeted set of products and then apply a UCB procedure on the target products to find the best one.
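A rough sketch of such a two-phase design is below, with a zero-filled SVD as a crude stand-in for the low-rank matrix estimation step; the shortlist size, sampling rate, and Gaussian rewards are illustrative assumptions, not the paper's exact algorithm.

```python
# Hypothetical two-phase sketch: shrink a large product set via a
# low-rank estimate of the subsampled user-product reward matrix,
# then run UCB only on the shortlisted products.
import numpy as np

rng = np.random.default_rng(2)
n_users, n_products, rank = 100, 1000, 3
U = rng.normal(size=(n_users, rank))
V = rng.normal(size=(n_products, rank))
true_means = U @ V.T  # unknown reward matrix

# Phase 1: observe a small random subsample of entries, form a crude
# low-rank estimate (zero-filled, propensity-scaled SVD), and keep the
# products with the highest estimated average reward.
mask = rng.random((n_users, n_products)) < 0.05
obs = np.where(mask, true_means + rng.normal(size=true_means.shape), 0.0)
u, s, vt = np.linalg.svd(obs / max(mask.mean(), 1e-9), full_matrices=False)
est = (u[:, :rank] * s[:rank]) @ vt[:rank]
shortlist = np.argsort(est.mean(axis=0))[-20:]

# Phase 2: standard UCB over the shortlisted products only.
T = 5000
counts = np.zeros(len(shortlist))
sums = np.zeros(len(shortlist))
for t in range(T):
    bonus = np.sqrt(2 * np.log(t + 1) / np.maximum(counts, 1))
    ucb = np.where(counts > 0, sums / np.maximum(counts, 1) + bonus, np.inf)
    a = int(np.argmax(ucb))
    user = rng.integers(n_users)
    reward = true_means[user, shortlist[a]] + rng.normal()
    counts[a] += 1
    sums[a] += reward
```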

Thompson Sampling Efficiently Learns to Control Diffusion Processes

no code implementations 20 Jun 2022 Mohamad Kazem Shirani Faradonbeh, Mohamad Sadegh Shirani Faradonbeh, Mohsen Bayati

To the best of our knowledge, this is the first such result for Thompson sampling in a diffusion process control problem.

Decision Making Thompson Sampling

Learning to Recommend Using Non-Uniform Data

no code implementations 21 Oct 2021 Wanning Chen, Mohsen Bayati

Utilizing this observation, we introduce a new optimization problem to select a weight matrix that minimizes the upper bound on the prediction error.

Fairness

The Elliptical Potential Lemma for General Distributions with an Application to Linear Thompson Sampling

no code implementations 16 Feb 2021 Nima Hamidi, Mohsen Bayati

The elliptical potential lemma is a key tool for quantifying uncertainty in estimating parameters of the reward function, but it requires the noise and the prior distributions to be Gaussian.

Decision Making LEMMA +1
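For context, the standard (Gaussian, bounded-action) form of the lemma that this paper generalizes reads roughly as follows, with constants that vary slightly across references:

\[
V_t = \lambda I_d + \sum_{s=1}^{t} x_s x_s^\top, \quad \|x_s\|_2 \le L
\;\Longrightarrow\;
\sum_{t=1}^{T} \min\!\left\{1,\; \|x_t\|_{V_{t-1}^{-1}}^{2}\right\}
\;\le\; 2d \log\!\left(1 + \frac{T L^{2}}{d\lambda}\right).
\]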

Unreasonable Effectiveness of Greedy Algorithms in Multi-Armed Bandit with Many Arms

1 code implementation NeurIPS 2020 Mohsen Bayati, Nima Hamidi, Ramesh Johari, Khashayar Khosravi

We study the structure of regret-minimizing policies in the {\em many-armed} Bayesian multi-armed bandit problem: in particular, with $k$ the number of arms and $T$ the time horizon, we consider the case where $k \geq \sqrt{T}$.

Multi-Armed Bandits
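A minimal sketch of a subsampled greedy policy in the many-armed regime $k \geq \sqrt{T}$ follows; the subsample size, Beta(1, 1) prior, and Bernoulli rewards are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal subsampled-greedy sketch for a many-armed Bayesian bandit:
# keep only ~sqrt(T) of the k arms, pull each once, then always pull
# the empirically best arm.
import numpy as np

rng = np.random.default_rng(3)
T = 10_000
k = int(np.sqrt(T)) * 5           # many-armed regime: k >= sqrt(T)
mu = rng.beta(1, 1, size=k)       # arm means drawn from the prior

m = int(np.ceil(np.sqrt(T)))      # subsample size (illustrative choice)
arms = rng.choice(k, size=m, replace=False)
counts = np.zeros(m)
sums = np.zeros(m)

for t in range(T):
    if t < m:                     # pull every subsampled arm once
        a = t
    else:                         # then be purely greedy
        a = int(np.argmax(sums / counts))
    reward = rng.random() < mu[arms[a]]   # Bernoulli(mu) reward
    counts[a] += 1
    sums[a] += reward

regret = T * mu.max() - sums.sum()
```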

On Frequentist Regret of Linear Thompson Sampling

no code implementations 11 Jun 2020 Nima Hamidi, Mohsen Bayati

This paper studies the stochastic linear bandit problem, where a decision-maker chooses actions from possibly time-dependent sets of vectors in $\mathbb{R}^d$ and receives noisy rewards.

Thompson Sampling
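A minimal linear Thompson sampling sketch for this setting, assuming a finite action set, Gaussian noise, and an inflation parameter $v$ chosen purely for illustration:

```python
# Minimal linear Thompson sampling sketch for a stochastic linear
# bandit with a finite action set in R^d.
import numpy as np

rng = np.random.default_rng(4)
d, n_actions, T, lam, v = 5, 50, 2000, 1.0, 1.0
theta_star = rng.normal(size=d)
actions = rng.normal(size=(n_actions, d))
actions /= np.linalg.norm(actions, axis=1, keepdims=True)

V = lam * np.eye(d)          # regularized design matrix
b = np.zeros(d)              # sum of reward-weighted actions

for t in range(T):
    V_inv = np.linalg.inv(V)
    theta_hat = V_inv @ b
    # sample a parameter from the (inflated, symmetrized) Gaussian posterior
    cov = v**2 * (V_inv + V_inv.T) / 2
    theta_tilde = rng.multivariate_normal(theta_hat, cov)
    a = actions[np.argmax(actions @ theta_tilde)]
    reward = a @ theta_star + rng.normal()
    V += np.outer(a, a)
    b += reward * a
```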

Recommendation on a Budget: Column Space Recovery from Partially Observed Entries with Random or Active Sampling

no code implementations 26 Feb 2020 Carolyn Kim, Mohsen Bayati

We analyze alternating minimization for column space recovery of a partially observed, approximately low rank matrix with a growing number of columns and a fixed budget of observations per column.
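A bare-bones alternating minimization sketch for this setting, with the rank, sampling rate, and iteration count chosen purely for illustration:

```python
# Minimal alternating least squares sketch for recovering the column
# space of a partially observed, approximately low-rank matrix.
import numpy as np

rng = np.random.default_rng(5)
n_rows, n_cols, r = 60, 200, 3
M = rng.normal(size=(n_rows, r)) @ rng.normal(size=(r, n_cols))
mask = rng.random(M.shape) < 0.3          # fixed budget of observations

U = rng.normal(size=(n_rows, r))          # column-space iterate
for _ in range(30):
    # Solve for each column's coefficients using its observed rows.
    V = np.zeros((n_cols, r))
    for j in range(n_cols):
        rows = mask[:, j]
        V[j] = np.linalg.lstsq(U[rows], M[rows, j], rcond=None)[0]
    # Solve for each row of U using that row's observed columns.
    for i in range(n_rows):
        cols = mask[i]
        U[i] = np.linalg.lstsq(V[cols], M[i, cols], rcond=None)[0]

column_space = np.linalg.qr(U)[0]         # orthonormal basis estimate
```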

The Unreasonable Effectiveness of Greedy Algorithms in Multi-Armed Bandit with Many Arms

2 code implementations 24 Feb 2020 Mohsen Bayati, Nima Hamidi, Ramesh Johari, Khashayar Khosravi

This finding diverges from the notion of free exploration, which relates to covariate variation, as recently discussed in the contextual bandit literature.

Multi-Armed Bandits

A General Theory of the Stochastic Linear Bandit and Its Applications

no code implementations 12 Feb 2020 Nima Hamidi, Mohsen Bayati

First, our new notion of optimism in expectation gives rise to a new algorithm, called sieved greedy (SG), which reduces the overexploration problem in OFUL.

Thompson Sampling

Personalizing Many Decisions with High-Dimensional Covariates

no code implementations NeurIPS 2019 Nima Hamidi, Mohsen Bayati, Kapil Gupta

We consider the k-armed stochastic contextual bandit problem with d-dimensional features, where both k and d can be large.

Vocal Bursts Intensity Prediction

Optimal Experimental Design for Staggered Rollouts

1 code implementation 9 Nov 2019 Ruoxuan Xiong, Susan Athey, Mohsen Bayati, Guido Imbens

Next, we study an adaptive experimental design problem, where both the decision to continue the experiment and treatment assignment decisions are updated after each period's data is collected.

Decision Making Experimental Design +1

On Low-rank Trace Regression under General Sampling Distribution

1 code implementation 18 Apr 2019 Nima Hamidi, Mohsen Bayati

In this paper, we study trace regression, in which a matrix of parameters $B^*$ is estimated via the convex relaxation of a rank-regularized regression or via regularized non-convex optimization.

Matrix Completion Multi-Task Learning +1
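The convex relaxation referenced above is typically nuclear-norm-penalized regression; one standard form (assuming squared loss, which is an assumption here) is

\[
\hat{B} \in \arg\min_{B}\; \frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - \langle X_i, B\rangle\bigr)^{2} + \lambda \|B\|_{*},
\]

where $\|B\|_{*}$ is the nuclear norm (the sum of singular values) and $\langle X_i, B\rangle = \operatorname{tr}(X_i^\top B)$.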

Ensemble Methods for Causal Effects in Panel Data Settings

no code implementations 24 Mar 2019 Susan Athey, Mohsen Bayati, Guido Imbens, Zhaonan Qu

This paper studies a panel data setting where the goal is to estimate causal effects of an intervention by predicting the counterfactual values of outcomes for treated units, had they not received the treatment.

counterfactual Matrix Completion +1

Matrix Completion Methods for Causal Panel Data Models

2 code implementations 27 Oct 2017 Susan Athey, Mohsen Bayati, Nikolay Doudchenko, Guido Imbens, Khashayar Khosravi

In this paper, we study methods for estimating causal effects in settings with panel data, where some units are exposed to a treatment during some periods and the goal is to estimate counterfactual (untreated) outcomes for the treated unit/period combinations.

Statistics Theory Econometrics
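A minimal soft-impute-style sketch of this idea, treating treated unit/period cells as missing entries of the control-outcome matrix and ignoring fixed effects; the threshold, iteration count, and adoption pattern are illustrative assumptions, not the paper's exact estimator.

```python
# Minimal soft-impute-style sketch for causal panel data: treat the
# treated unit/period cells as missing entries and complete the matrix
# with soft-thresholded SVD (nuclear-norm shrinkage) iterations.
import numpy as np

rng = np.random.default_rng(6)
N, T, r = 40, 30, 2
Y0 = rng.normal(size=(N, r)) @ rng.normal(size=(r, T))   # untreated outcomes
treated = np.zeros((N, T), dtype=bool)
treated[:10, 20:] = True            # block adoption pattern (illustrative)
observed = ~treated

Y = np.where(observed, Y0 + 0.1 * rng.normal(size=Y0.shape), 0.0)
L = np.zeros_like(Y)
tau = 1.0                           # singular-value threshold (illustrative)
for _ in range(100):
    # Fill missing cells with the current low-rank estimate, then shrink.
    filled = np.where(observed, Y, L)
    u, s, vt = np.linalg.svd(filled, full_matrices=False)
    L = (u * np.maximum(s - tau, 0.0)) @ vt

counterfactuals = L[treated]   # estimated untreated outcomes for treated cells
```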

Mostly Exploration-Free Algorithms for Contextual Bandits

1 code implementation 28 Apr 2017 Hamsa Bastani, Mohsen Bayati, Khashayar Khosravi

We prove that this algorithm is rate optimal without any additional assumptions on the context distribution or the number of arms.

Diversity Thompson Sampling
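A minimal sketch of a purely greedy (exploration-free) linear contextual bandit of the kind studied here, with per-arm ridge estimates; the problem sizes and Gaussian contexts are illustrative assumptions.

```python
# Minimal greedy (exploration-free) linear contextual bandit sketch:
# each arm keeps its own ridge-regression estimate and the policy
# always pulls the arm with the highest predicted reward.
import numpy as np

rng = np.random.default_rng(7)
k, d, T, lam = 3, 5, 5000, 1.0
beta_star = rng.normal(size=(k, d))

A = np.stack([lam * np.eye(d)] * k)   # per-arm regularized design matrices
b = np.zeros((k, d))

for t in range(T):
    x = rng.normal(size=d)             # context with natural variation
    beta_hat = np.stack([np.linalg.solve(A[i], b[i]) for i in range(k)])
    a = int(np.argmax(beta_hat @ x))   # purely greedy choice
    reward = beta_star[a] @ x + rng.normal()
    A[a] += np.outer(x, x)
    b[a] += reward * x
```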

Scaled Least Squares Estimator for GLMs in Large-Scale Problems

no code implementations NeurIPS 2016 Murat A. Erdogdu, Lee H. Dicker, Mohsen Bayati

We study the problem of efficiently estimating the coefficients of generalized linear models (GLMs) in the large-scale setting where the number of observations $n$ is much larger than the number of predictors $p$, i.e. $n \gg p \gg 1$.

Scalable Approximations for Generalized Linear Problems

no code implementations 21 Nov 2016 Murat A. Erdogdu, Mohsen Bayati, Lee H. Dicker

Using this relation, we design an algorithm that achieves the same accuracy as the empirical risk minimizer through iterations that attain up to a cubic convergence rate, and that are cheaper than any batch optimization algorithm by at least a factor of $\mathcal{O}(p)$.

Binary Classification General Classification +2

Dynamic Pricing with Demand Covariates

no code implementations 25 Apr 2016 Sheng Qiang, Mohsen Bayati

In particular, we assume that the firm knows the expected demand under a particular price from historical data, and in each period, before setting the price, the firm has access to extra information (demand covariates) which may be predictive of the demand.
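A minimal greedy pricing sketch under an assumed linear demand model (the model form, price range, and initial exploratory periods are illustrative, not the paper's exact setup):

```python
# Greedy pricing sketch with demand covariates, assuming a linear
# demand model D_t = a + b * p_t + g' z_t + noise.
import numpy as np

rng = np.random.default_rng(8)
T, d = 2000, 3
a_star, b_star = 10.0, -1.5
g_star = rng.normal(size=d)

X, y = [], []
for t in range(T):
    z = rng.normal(size=d)             # demand covariates for this period
    if t < 20:
        p = rng.uniform(1.0, 8.0)      # a few initial exploratory prices
    else:
        theta = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)[0]
        a_hat, b_hat, g_hat = theta[0], theta[1], theta[2:]
        # Myopic revenue-maximizing price for the estimated linear demand.
        p = np.clip(-(a_hat + g_hat @ z) / (2 * b_hat), 1.0, 8.0)
    demand = a_star + b_star * p + g_star @ z + rng.normal()
    X.append(np.concatenate(([1.0, p], z)))
    y.append(demand)
```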

Estimating LASSO Risk and Noise Level

no code implementations NeurIPS 2013 Mohsen Bayati, Murat A. Erdogdu, Andrea Montanari

In this context, we develop new estimators for the $\ell_2$ estimation risk $\|\hat{\theta}-\theta_0\|_2$ and the variance of the noise.

Denoising
