Search Results for author: Lalit Jain

Found 31 papers, 4 papers with code

Best of Three Worlds: Adaptive Experimentation for Digital Marketing in Practice

no code implementations16 Feb 2024 Tanner Fiez, Houssam Nassif, Yu-cheng Chen, Sergio Gamez, Lalit Jain

Adaptive experimental design (AED) methods are increasingly being used in industry as a tool to boost testing throughput or reduce experimentation cost relative to traditional A/B/N testing methods.

Counterfactual Inference +2

DIRECT: Deep Active Learning under Imbalance and Label Noise

no code implementations14 Dec 2023 Shyam Nuggehalli, Jifan Zhang, Lalit Jain, Robert Nowak

Our results demonstrate that DIRECT can save more than 60% of the annotation budget compared to state-of-the-art active learning algorithms, and more than 80% compared to random sampling.

Active Learning

Fair Active Learning in Low-Data Regimes

no code implementations13 Dec 2023 Romain Camilleri, Andrew Wagenmaker, Jamie Morgenstern, Lalit Jain, Kevin Jamieson

In this work, we address the challenges of reducing bias and improving accuracy in data-scarce environments, where the cost of collecting labeled data prohibits the use of large, labeled datasets.

Active Learning Fairness

Pessimistic Off-Policy Multi-Objective Optimization

no code implementations28 Oct 2023 Shima Alizadeh, Aniruddha Bhargava, Karthick Gopalswamy, Lalit Jain, Branislav Kveton, Ge Liu

The pessimistic estimator can be optimized by policy gradients and performs well in all of our experiments.

Decision Making

Minimax Optimal Submodular Optimization with Bandit Feedback

no code implementations27 Oct 2023 Artin Tajdini, Lalit Jain, Kevin Jamieson

The objective is to minimize the learner's regret over $T$ rounds with respect to a $(1-e^{-1})$-approximation of the maximum $f(S_*)$ with $|S_*| = k$, obtained through greedy maximization of $f$.
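
The $(1-e^{-1})$ benchmark above is the classical guarantee of greedy maximization for monotone submodular functions. A minimal sketch of that offline greedy baseline (the coverage function and element names are illustrative, not from the paper):

```python
def greedy_max(f, ground_set, k):
    """Greedy maximization of a set function f under a cardinality constraint k.

    For monotone submodular f, the greedy solution S satisfies
    f(S) >= (1 - 1/e) * max_{|S*| <= k} f(S*) -- the benchmark the
    regret in the abstract is measured against.
    """
    S = set()
    for _ in range(k):
        # Add the element with the largest marginal gain f(S + e) - f(S).
        best = max((e for e in ground_set if e not in S),
                   key=lambda e: f(S | {e}) - f(S))
        S.add(best)
    return S

# Toy coverage function: f(S) = number of points covered (monotone, submodular).
covers = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}
f = lambda S: len(set().union(*(covers[e] for e in S))) if S else 0
print(sorted(greedy_max(f, covers.keys(), 2)))  # [1, 3]: picks 3 (gain 3), then 1 (gain 2)
```

In the bandit version studied in the paper, $f$ can only be queried through noisy evaluations, which is what makes the problem nontrivial.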

Optimal Exploration is no harder than Thompson Sampling

no code implementations9 Oct 2023 Zhaoqi Li, Kevin Jamieson, Lalit Jain

In this work, we pose a natural question: is there an algorithm that can explore optimally and only needs the same computational primitives as Thompson Sampling?

Thompson Sampling

A/B Testing and Best-arm Identification for Linear Bandits with Robustness to Non-stationarity

1 code implementation27 Jul 2023 Zhihan Xiong, Romain Camilleri, Maryam Fazel, Lalit Jain, Kevin Jamieson

For robust identification, it is well-known that if arms are chosen randomly and non-adaptively from a G-optimal design over $\mathcal{X}$ at each time then the error probability decreases as $\exp(-T\Delta^2_{(1)}/d)$, where $\Delta_{(1)} = \min_{x \neq x^*} (x^* - x)^\top \frac{1}{T}\sum_{t=1}^T \theta_t$.

Adaptive Experimental Design and Counterfactual Inference

no code implementations25 Oct 2022 Tanner Fiez, Sergio Gamez, Arick Chen, Houssam Nassif, Lalit Jain

Adaptive experimental design methods are increasingly being used in industry as a tool to boost testing throughput or reduce experimentation cost relative to traditional A/B/N testing methods.

Counterfactual Inference +1

Instance-optimal PAC Algorithms for Contextual Bandits

no code implementations5 Jul 2022 Zhaoqi Li, Lillian Ratliff, Houssam Nassif, Kevin Jamieson, Lalit Jain

In the stochastic contextual bandit setting, regret-minimizing algorithms have been extensively researched, but their instance-minimizing best-arm identification counterparts remain seldom studied.

Multi-Armed Bandits

Active Learning with Safety Constraints

no code implementations22 Jun 2022 Romain Camilleri, Andrew Wagenmaker, Jamie Morgenstern, Lalit Jain, Kevin Jamieson

To our knowledge, our results are the first on best-arm identification in linear bandits with safety constraints.

Active Learning Decision Making +1

An Experimental Design Approach for Regret Minimization in Logistic Bandits

no code implementations4 Feb 2022 Blake Mason, Kwang-Sung Jun, Lalit Jain

Finally, we discuss the impact of the bias of the MLE on the logistic bandit problem, providing an example where the $d^2$ lower-order regret term (cf. $d$ for linear bandits) may not be improved as long as the MLE is used, and showing how bias-corrected estimators may bring it closer to $d$.

Experimental Design

Nearly Optimal Algorithms for Level Set Estimation

no code implementations2 Nov 2021 Blake Mason, Romain Camilleri, Subhojyoti Mukherjee, Kevin Jamieson, Robert Nowak, Lalit Jain

The threshold value $\alpha$ can either be \emph{explicit} and provided a priori, or \emph{implicit} and defined relative to the optimal function value, i.e., $\alpha = (1-\epsilon)f(x_\ast)$ for a given $\epsilon > 0$, where $f(x_\ast)$ is the maximal function value and is unknown.

Experimental Design
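
A small sketch of the two threshold conventions in the abstract, on a toy set of function values (the numbers are illustrative):

```python
import numpy as np

# Five candidate points with (noiselessly observed) function values.
f_values = np.array([0.2, 0.9, 0.55, 0.7, 0.1])

# Explicit threshold: alpha is given a priori.
alpha_explicit = 0.6
superlevel_explicit = np.flatnonzero(f_values > alpha_explicit)  # -> [1 3]

# Implicit threshold: alpha = (1 - eps) * f(x*), relative to the (unknown) maximum.
eps = 0.2
alpha_implicit = (1 - eps) * f_values.max()  # 0.8 * 0.9 = 0.72
superlevel_implicit = np.flatnonzero(f_values > alpha_implicit)  # -> [1]

print(superlevel_explicit, superlevel_implicit)
```

The paper's algorithms must recover these superlevel sets from noisy evaluations of $f$, without knowing `f_values` (or, in the implicit case, `f_values.max()`) in advance.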

Selective Sampling for Online Best-arm Identification

no code implementations NeurIPS 2021 Romain Camilleri, Zhihan Xiong, Maryam Fazel, Lalit Jain, Kevin Jamieson

The main results of this work precisely characterize this trade-off between labeled samples and stopping time, and provide an algorithm that nearly optimally achieves the minimal label complexity given a desired stopping time.

Binary Classification

Finding All $\epsilon$-Good Arms in Stochastic Bandits

no code implementations NeurIPS 2020 Blake Mason, Lalit Jain, Ardhendu Tripathy, Robert Nowak

The pure-exploration problem in stochastic multi-armed bandits aims to find one or more arms with the largest (or near largest) means.

Multi-Armed Bandits

Improved Confidence Bounds for the Linear Logistic Model and Applications to Linear Bandits

no code implementations23 Nov 2020 Kwang-Sung Jun, Lalit Jain, Blake Mason, Houssam Nassif

Specifically, our confidence bound avoids a direct dependence on $1/\kappa$, where $\kappa$ is the minimal variance over all arms' reward distributions.

Learning to Actively Learn: A Robust Approach

no code implementations29 Oct 2020 Jifan Zhang, Lalit Jain, Kevin Jamieson

Unlike the design of traditional adaptive algorithms that rely on concentration of measure and careful analysis to justify the correctness and sample complexity of the procedure, our adaptive algorithm is learned via adversarial training over equivalence classes of problems derived from information theoretic lower bounds.

Active Learning Meta-Learning +1

A New Perspective on Pool-Based Active Classification and False-Discovery Control

no code implementations NeurIPS 2019 Lalit Jain, Kevin Jamieson

In many scientific settings there is a need for adaptive experimental design to guide the process of identifying regions of the search space that contain as many true positives as possible subject to a low rate of false discoveries (i.e., false alarms).

Active Learning Binary Classification +3

Spectral Methods for Ranking with Scarce Data

no code implementations2 Jul 2020 Umang Varma, Lalit Jain, Anna C. Gilbert

In this paper we modify a popular and well-studied rank-aggregation method, Rank Centrality, to account for scarce comparisons and to incorporate additional feature information.
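
For reference, the classical Rank Centrality procedure that the paper modifies builds a Markov chain from pairwise comparison outcomes and ranks items by its stationary distribution. A minimal sketch of that baseline (not the paper's modified, feature-aware version):

```python
import numpy as np

def rank_centrality(wins):
    """Rank Centrality scores from a pairwise win-count matrix.

    wins[i, j] = number of comparisons in which item i beat item j.
    Builds a Markov chain that moves from i to j in proportion to the
    empirical rate at which j beats i; its stationary distribution
    ranks the items (larger score = better item).
    """
    n = wins.shape[0]
    total = wins + wins.T
    frac = np.divide(wins, total, out=np.zeros(wins.shape), where=total > 0)
    P = frac.T / n                      # P[i, j] ~ rate j beats i; 1/n keeps rows substochastic
    np.fill_diagonal(P, 0.0)
    P = P + np.diag(1.0 - P.sum(axis=1))  # self-loops make each row sum to 1
    # Power iteration for the stationary distribution.
    pi = np.full(n, 1.0 / n)
    for _ in range(5000):
        pi = pi @ P
    return pi / pi.sum()

# Three items where 0 dominates 1 and 1 dominates 2.
wins = np.array([[0, 8, 9],
                 [2, 0, 7],
                 [1, 3, 0]])
scores = rank_centrality(wins)
print(np.argsort(-scores))  # expected ranking: item 0, then 1, then 2
```

With only a handful of comparisons per pair, these empirical fractions become unreliable, which is the scarce-data regime the paper targets.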

Finding All $\epsilon$-Good Arms in Stochastic Bandits

1 code implementation16 Jun 2020 Blake Mason, Lalit Jain, Ardhendu Tripathy, Robert Nowak

Mathematically, the all-$\epsilon$-good arm identification problem presents significant new challenges and surprises that do not arise in the pure-exploration objectives studied in the past.

Multi-Armed Bandits

Sequential Experimental Design for Transductive Linear Bandits

1 code implementation NeurIPS 2019 Tanner Fiez, Lalit Jain, Kevin Jamieson, Lillian Ratliff

Such a transductive setting naturally arises when the set of measurement vectors is limited due to factors such as availability or cost.

Drug Discovery Experimental Design +1

Convergence rates for ordinal embedding

no code implementations30 Apr 2019 Jordan S. Ellenberg, Lalit Jain

We prove optimal bounds for the convergence rate of ordinal embedding (also known as non-metric multidimensional scaling) in the 1-dimensional case.

A Bandit Approach to Sequential Experimental Design with False Discovery Control

no code implementations NeurIPS 2018 Kevin G. Jamieson, Lalit Jain

We propose a new adaptive sampling approach to multiple testing which aims to maximize statistical power while ensuring anytime false discovery control.

Drug Discovery Experimental Design +1

A Bandit Approach to Multiple Testing with False Discovery Control

no code implementations6 Sep 2018 Kevin Jamieson, Lalit Jain

We propose an adaptive sampling approach for multiple testing which aims to maximize statistical power while ensuring anytime false discovery control.

Drug Discovery

Firing Bandits: Optimizing Crowdfunding

no code implementations ICML 2018 Lalit Jain, Kevin Jamieson

In this paper, we model the problem of optimizing crowdfunding platforms, such as the non-profit Kiva or for-profit KickStarter, as a variant of the multi-armed bandit problem.

Adaptive Sampling for Coarse Ranking

1 code implementation20 Feb 2018 Sumeet Katariya, Lalit Jain, Nandana Sengupta, James Evans, Robert Nowak

We consider the problem of active coarse ranking, where the goal is to sort items according to their means into clusters of pre-specified sizes, by adaptively sampling from their reward distributions.
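
The target output of coarse ranking, assuming the means were known exactly, is just an ordered partition into the pre-specified cluster sizes; the paper's contribution is recovering it from adaptive samples of the reward distributions. A sketch of the target partition (the means and sizes below are illustrative):

```python
import numpy as np

def coarse_rank(means, cluster_sizes):
    """Partition items into ordered clusters of pre-specified sizes by mean.

    Returns a list of index arrays: the first cluster holds the
    cluster_sizes[0] items with the largest means, and so on.
    """
    assert sum(cluster_sizes) == len(means)
    order = np.argsort(-np.asarray(means))  # best item first
    clusters, start = [], 0
    for size in cluster_sizes:
        clusters.append(np.sort(order[start:start + size]))  # within a cluster, order is irrelevant
        start += size
    return clusters

means = [0.9, 0.2, 0.75, 0.4, 0.55]
print(coarse_rank(means, [2, 2, 1]))  # top-2, middle-2, bottom-1 clusters
```

Since items within a cluster need not be ordered, fewer samples are required than for a full ranking — the source of the savings the adaptive algorithm exploits.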

If it ain't broke, don't fix it: Sparse metric repair

no code implementations29 Oct 2017 Anna C. Gilbert, Lalit Jain

In many applications, however, the observed distances between data points are far from satisfying the properties of a metric.

Learning Low-Dimensional Metrics

no code implementations NeurIPS 2017 Lalit Jain, Blake Mason, Robert Nowak

This paper investigates the theoretical foundations of metric learning, focused on four key questions that are not fully addressed in prior work: 1) we consider learning general low-dimensional (low-rank) metrics as well as sparse metrics; 2) we develop upper and lower (minimax) bounds on the generalization error; 3) we quantify the sample complexity of metric learning in terms of the dimension of the feature space and the dimension/rank of the underlying metric; 4) we also bound the accuracy of the learned metric relative to the underlying true generative metric.

Metric Learning

Finite Sample Prediction and Recovery Bounds for Ordinal Embedding

no code implementations NeurIPS 2016 Lalit Jain, Kevin Jamieson, Robert Nowak

First, we derive prediction error bounds for ordinal embedding with noise by exploiting the fact that the rank of a distance matrix of points in $\mathbb{R}^d$ is at most $d+2$.
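
The rank bound in this abstract follows from expanding squared distances as $\|x_i - x_j\|^2 = \|x_i\|^2 + \|x_j\|^2 - 2\,x_i^\top x_j$, a sum of two rank-1 matrices and a rank-$d$ Gram matrix. A quick numerical check (dimensions chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 3, 50
X = rng.standard_normal((n, d))  # n points in R^d

# Squared Euclidean distance matrix via the expansion above:
# rank-1 + rank-1 + rank-d terms, hence rank(D) <= d + 2.
sq = (X ** 2).sum(axis=1)
D = sq[:, None] + sq[None, :] - 2 * X @ X.T

print(np.linalg.matrix_rank(D))  # at most d + 2 = 5
```

This low-rank structure is what lets ordinal (non-metric) embedding be analyzed with matrix-estimation tools.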
