no code implementations • 12 Jan 2024 • Garud Iyengar, Raghav Singal
We model consumer behavior with a conversion funnel that captures the state of each consumer (e.g., her interaction history with the firm) and allows her behavior to vary as a function of both her state and the firm's sequential interventions.
no code implementations • 2 Dec 2023 • Agostino Capponi, Garud Iyengar, Jay Sethuraman
Financial markets are undergoing an unprecedented transformation.
no code implementations • 23 Oct 2023 • Wonyoung Kim, Garud Iyengar, Assaf Zeevi
We propose a new regret-minimization algorithm for episodic sparse linear Markov decision processes (SMDPs), where the state-transition distribution is a linear function of observed features.
no code implementations • 4 Aug 2023 • Madhumitha Shridharan, Garud Iyengar
We show that this LP can be significantly pruned, allowing us to compute bounds for much larger causal inference problems than existing techniques can handle.
no code implementations • 16 Jun 2023 • Garud Iyengar, Henry Lam, Tianyu Wang
We develop a general bias correction approach, building on what we call the Optimizer's Information Criterion (OIC), that directly approximates the first-order bias and does not require solving any additional optimization problems.
no code implementations • 31 May 2023 • Wonyoung Kim, Garud Iyengar, Assaf Zeevi
The sample complexity of our proposed algorithm is $\tilde{O}(d/\Delta^2)$, where $d$ is the dimension of contexts and $\Delta$ is a measure of problem complexity.
no code implementations • 31 Jan 2023 • Wonyoung Kim, Garud Iyengar, Assaf Zeevi
We consider the linear contextual multi-class multi-period packing problem (LMMP) where the goal is to pack items such that the total vector of consumption is below a given budget vector and the total value is as large as possible.
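The budgeted selection at the heart of this packing setting can be illustrated with a simple greedy heuristic; this is a hypothetical baseline for intuition, not the paper's algorithm, and all function and variable names are illustrative:

```python
import numpy as np

def greedy_pack(values, consumption, budget):
    """Greedy heuristic: rank items by value per worst-case budget fraction,
    then pack while the budget vector permits.

    values: (n,) item values; consumption: (n, k) per-item resource use;
    budget: (k,) resource budget vector.
    """
    load = consumption / budget                      # fraction of each budget used
    score = values / np.maximum(load.max(axis=1), 1e-12)
    used = np.zeros_like(budget, dtype=float)
    chosen = []
    for i in np.argsort(-score):                     # best score first
        if np.all(used + consumption[i] <= budget):
            used += consumption[i]
            chosen.append(i)
    return chosen
```

The contextual problem studied in the paper is harder: values and consumption are revealed through contexts over time, so the algorithm must learn while packing.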
no code implementations • 3 Dec 2022 • Garud Iyengar, Henry Lam, Tianyu Wang
We propose a simple approach in which the distribution of random perturbations is approximated using a parametric family of distributions.
no code implementations • 25 Mar 2021 • Min-hwan Oh, Garud Iyengar
We propose upper confidence bound based algorithms for this MNL contextual bandit.
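To illustrate the setting, here is a minimal sketch of MNL choice probabilities and assortment selection under optimistic (upper-confidence) utilities. The brute-force search and all names are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np
from itertools import combinations

def mnl_probs(utils):
    """MNL choice probabilities over an offered set, outside option has utility 0."""
    w = np.exp(utils)
    return w / (1.0 + w.sum())

def best_assortment(opt_utils, revenues, max_size):
    """Pick the assortment maximizing expected revenue under optimistic utilities.
    Brute force over subsets -- fine for a sketch, not for large catalogs."""
    best, best_rev = (), 0.0
    for k in range(1, max_size + 1):
        for S in combinations(range(len(opt_utils)), k):
            idx = list(S)
            rev = float(mnl_probs(opt_utils[idx]) @ revenues[idx])
            if rev > best_rev:
                best, best_rev = S, rev
    return best, best_rev
```

A UCB algorithm would recompute `opt_utils` each round from a confidence set around the estimated context-to-utility parameters.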
1 code implementation • 16 Jul 2020 • Min-hwan Oh, Garud Iyengar, Assaf Zeevi
We consider a stochastic contextual bandit problem where the dimension $d$ of the feature vectors is potentially large; however, only a sparse subset of features, of cardinality $s_0 \ll d$, affects the reward function.
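As a rough illustration of sparse reward estimation in this regime, the following is a generic Lasso solver via ISTA (a standard sparse-regression sketch, not the estimator analyzed in the paper):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, lam, n_iters=500):
    """ISTA for the Lasso: recovers a sparse parameter from context/reward pairs.
    X: (n, d) contexts, y: (n,) rewards, lam: l1 penalty weight."""
    n, d = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n          # Lipschitz constant of the gradient
    beta = np.zeros(d)
    for _ in range(n_iters):
        grad = X.T @ (X @ beta - y) / n
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta
```

With $n$ well above $s_0 \log d$, such an estimator isolates the few relevant coordinates even when $d$ is large, which is the phenomenon a sparse bandit algorithm exploits.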
no code implementations • 18 May 2020 • Alkesh Yadav, Quentin Vagne, Pierre Sens, Garud Iyengar, Madan Rao
In this paper, we quantitatively analyse the tradeoffs between the number of cisternae and the number and specificity of enzymes, in order to synthesize a prescribed target glycan distribution of a certain complexity.
no code implementations • 22 Apr 2020 • Min-hwan Oh, Garud Iyengar
To construct a reliable anomaly detection method that takes into account the confidence of the predicted anomaly score, we adopt a Bayesian approach to IRL.
1 code implementation • NeurIPS 2019 • Min-hwan Oh, Garud Iyengar
The feedback here is the item that the user picks from the assortment.
no code implementations • 31 Aug 2018 • Min-hwan Oh, Garud Iyengar
We study an exploration method for model-free RL that generalizes counter-based exploration bonus methods and takes into account the long-term exploratory value of actions rather than a single-step look-ahead.
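For context, the single-step counter-based baseline that such work generalizes can be sketched as one tabular Q-learning update with a bonus of beta / sqrt(N(s, a)) added to the reward (function and parameter names are illustrative):

```python
import numpy as np

def q_update_with_bonus(Q, N, s, a, r, s_next, alpha=0.5, gamma=0.9, beta=1.0):
    """One tabular Q-learning step with a counter-based exploration bonus.
    Q: (n_states, n_actions) value table; N: visit counts of the same shape."""
    N[s, a] += 1
    bonus = beta / np.sqrt(N[s, a])            # decays as the pair is visited more
    target = r + bonus + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])
    return bonus
```

The bonus inflates rewards for rarely visited state-action pairs; the limitation the paper addresses is that this credits only the immediate step, not the exploratory value of states reachable later.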
no code implementations • 7 Aug 2018 • Francois Fagan, Garud Iyengar
Arguably the biggest challenge in applying neural networks is tuning the hyperparameters, in particular the learning rate.
no code implementations • ICLR 2018 • Francois Fagan, Garud Iyengar
Recent neural network and language models rely on softmax distributions with an extremely large number of categories.
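A standard workaround in this regime is sampled softmax: score the target class against a small uniform sample of negatives instead of the full label set. The sketch below illustrates the generic trick, with the usual log-proposal correction for a uniform sampler; it is not the paper's estimator, and the names are illustrative:

```python
import numpy as np

def sampled_softmax_logprob(logits, target, num_neg, rng):
    """Approximate log-softmax probability of `target` over a huge label set
    using a uniform sample of negative classes."""
    V = logits.shape[0]
    neg = rng.choice(V, size=num_neg, replace=False)
    neg = neg[neg != target]                       # drop accidental hits
    corr = np.log(num_neg / V)                     # uniform-proposal correction
    scores = np.concatenate(([logits[target]], logits[neg] - corr))
    # cross-entropy with the target in position 0 of the reduced problem
    return scores[0] - np.log(np.exp(scores).sum())
```

This reduces the per-step cost from O(V) to O(num_neg), at the price of a biased-but-controllable approximation of the normalizer.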
no code implementations • 30 Sep 2014 • Necdet Serhat Aybat, Garud Iyengar, Zi Wang
We propose a distributed first-order augmented Lagrangian (DFAL) algorithm to minimize the sum of composite convex functions, where each term in the sum is a private cost function belonging to a node, and only nodes connected by an edge can directly communicate with each other.
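For intuition about the communication pattern, here is the simpler decentralized-gradient baseline, in which each node mixes its iterate with neighbors through a doubly stochastic matrix W and then takes a local gradient step. This is not DFAL's augmented-Lagrangian update; it only illustrates the edge-restricted communication:

```python
import numpy as np

def decentralized_gradient_step(x, W, grads, step):
    """One round of decentralized gradient descent.
    x: (n_nodes, d) current iterates; W: (n_nodes, n_nodes) mixing matrix whose
    sparsity pattern matches the communication graph; grads: local gradients."""
    return W @ x - step * grads
```

With a fixed step size this baseline reaches only approximate consensus, which is one motivation for augmented-Lagrangian schemes such as DFAL.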
Optimization and Control
no code implementations • 11 May 2011 • Necdet Serhat Aybat, Donald Goldfarb, Garud Iyengar
The stable principal component pursuit (SPCP) problem is a non-smooth convex optimization problem whose solution has been shown, in both theory and practice, to recover the low-rank and sparse components of a matrix whose entries have been corrupted by Gaussian noise.
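A penalized variant of SPCP can be attacked with a textbook proximal-gradient loop that alternates singular value thresholding (prox of the nuclear norm) and soft thresholding (prox of the l1 norm). This is an illustrative sketch, not necessarily the first-order method developed in the paper:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def soft(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def spcp_prox_grad(M, lam=0.5, mu=10.0, n_iters=500):
    """Proximal gradient for min ||L||_* + lam*||S||_1 + (mu/2)||L + S - M||_F^2,
    a penalized form of SPCP."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    step = 1.0 / (2.0 * mu)             # 1 / Lipschitz constant of the smooth term
    for _ in range(n_iters):
        R = mu * (L + S - M)            # shared gradient of the coupling term
        L = svt(L - step * R, step)
        S = soft(S - step * R, lam * step)
    return L, S
```

The residual L + S - M plays the role of the Gaussian-noise component, controlled by the penalty weight mu.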
Optimization and Control