Search Results for author: Kirthevasan Kandasamy

Found 33 papers, 13 papers with code

Bandit Profit-maximization for Targeted Marketing

no code implementations 3 Mar 2024 Joon Suk Huh, Ellen Vitercik, Kirthevasan Kandasamy

Specifically, we aim to maximize profit over an arbitrary sequence of multiple demand curves, each dependent on a distinct ancillary variable, but sharing the same price.

Marketing

Active Cost-aware Labeling of Streaming Data

no code implementations 13 Apr 2023 Ting Cai, Kirthevasan Kandasamy

When the labeling cost is $B$, our algorithm, which chooses to label a point if the uncertainty is larger than a time and cost dependent threshold, achieves a worst-case upper bound of $\widetilde{O}(B^{\frac{1}{3}} K^{\frac{1}{3}} T^{\frac{2}{3}})$ on the loss after $T$ rounds.

Astronomy
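The labeling rule described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the function name and the exact form of the time- and cost-dependent threshold are assumptions chosen only to show the shape of the rule.

```python
def should_label(uncertainty, t, cost, k=1.0):
    """Label a streaming point only when model uncertainty exceeds a
    time- and cost-dependent threshold (illustrative form, not the
    paper's exact rule): the threshold shrinks as the round count t
    grows and rises with the per-label cost."""
    threshold = k * (cost / max(t, 1)) ** (1.0 / 3.0)
    return uncertainty > threshold
```

Early in the stream the threshold is loose, so uncertain points get labeled; late in the stream only very uncertain points justify paying the labeling cost.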

Leveraging Reviews: Learning to Price with Buyer and Seller Uncertainty

no code implementations 20 Feb 2023 Wenshuo Guo, Nika Haghtalab, Kirthevasan Kandasamy, Ellen Vitercik

Customers with few relevant reviews may hesitate to make a purchase except at a low price, so for the seller, there is a tension between setting high prices and ensuring that there are enough reviews so that buyers can confidently estimate their values.

PAC Best Arm Identification Under a Deadline

no code implementations 6 Jun 2021 Brijen Thananjeyan, Kirthevasan Kandasamy, Ion Stoica, Michael I. Jordan, Ken Goldberg, Joseph E. Gonzalez

In this work, the decision-maker is given a deadline of $T$ rounds, where, on each round, it can adaptively choose which arms to pull and how many times to pull them; this distinguishes the number of decisions made (i.e., time or number of rounds) from the number of samples acquired (cost).
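The round/sample distinction can be illustrated with a generic successive-halving sketch. This is an assumption-laden stand-in, not the paper's algorithm: within each round every surviving arm is pulled in a batch, so one decision round consumes many samples.

```python
import random

def best_arm_under_deadline(means, T, pulls_per_round=200, seed=0):
    """Generic successive-halving sketch (illustrative, not the paper's
    algorithm): each of the T decision rounds pulls every surviving arm
    in a batch, then eliminates the empirically worse half, decoupling
    decisions (rounds) from samples (cost)."""
    rng = random.Random(seed)
    arms = list(range(len(means)))
    totals = [0.0] * len(means)
    counts = [0] * len(means)
    for _ in range(T):
        if len(arms) == 1:
            break
        for a in arms:
            for _ in range(pulls_per_round):  # many samples, one decision round
                totals[a] += rng.gauss(means[a], 1.0)
                counts[a] += 1
        arms.sort(key=lambda a: totals[a] / counts[a], reverse=True)
        arms = arms[: max(1, len(arms) // 2)]  # keep the better half
    return arms[0]
```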

Online Learning Demands in Max-min Fairness

no code implementations 15 Dec 2020 Kirthevasan Kandasamy, Gur-Eyal Sela, Joseph E. Gonzalez, Michael I. Jordan, Ion Stoica

We describe mechanisms for the allocation of a scarce resource among multiple users in a way that is efficient, fair, and strategy-proof, even when users do not know their own resource requirements.

Fairness
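The full-information version of this allocation problem is solved by the classic max-min fair water-filling scheme, sketched below; the paper's contribution is the harder online setting where the demands themselves are unknown and must be learned while preserving strategy-proofness.

```python
def max_min_allocation(demands, capacity):
    """Classic max-min fair (water-filling) allocation: process users
    in order of increasing demand, giving each the smaller of its
    demand and an equal share of what remains."""
    users = sorted(range(len(demands)), key=lambda i: demands[i])
    alloc = [0.0] * len(demands)
    remaining, left = capacity, len(demands)
    for i in users:
        share = remaining / left      # equal split of the residual capacity
        alloc[i] = min(demands[i], share)
        remaining -= alloc[i]
        left -= 1
    return alloc
```

Small demands are fully satisfied, and the leftover capacity is split evenly among the users who want more.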

VCG Mechanism Design with Unknown Agent Values under Stochastic Bandit Feedback

no code implementations 19 Apr 2020 Kirthevasan Kandasamy, Joseph E. Gonzalez, Michael I. Jordan, Ion Stoica

To that end, we first define three notions of regret: for the welfare, for the individual utilities of each agent, and for the utility of the mechanism.

Offline Contextual Bayesian Optimization

1 code implementation NeurIPS 2019 Ian Char, Youngseog Chung, Willie Neiswanger, Kirthevasan Kandasamy, Oak Nelson, Mark Boyer, Egemen Kolemen

In black-box optimization, an agent repeatedly chooses a configuration to test, so as to find an optimal configuration.

Bayesian Optimization
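The choose-test-repeat loop described above can be written down in a few lines. The sketch below uses random search as a stand-in for the model-based (Bayesian) strategies the paper studies; the function name and signature are illustrative assumptions.

```python
import random

def optimize_blackbox(f, candidates, budget, seed=0):
    """Minimal black-box optimization loop (random-search stand-in for
    a model-based strategy): repeatedly choose a configuration, test
    it, and keep the best observed value."""
    rng = random.Random(seed)
    best_x, best_y = None, float("-inf")
    for _ in range(budget):
        x = rng.choice(candidates)  # a BO method would pick x via a surrogate model
        y = f(x)                    # expensive black-box evaluation
        if y > best_y:
            best_x, best_y = x, y
    return best_x, best_y
```

A Bayesian optimizer replaces the random choice with an acquisition rule computed from a posterior over `f`, but the outer loop is the same.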

ChemBO: Bayesian Optimization of Small Organic Molecules with Synthesizable Recommendations

1 code implementation 5 Aug 2019 Ksenia Korovina, Sailun Xu, Kirthevasan Kandasamy, Willie Neiswanger, Barnabas Poczos, Jeff Schneider, Eric P. Xing

In applications such as molecule design or drug discovery, it is desirable to have an algorithm which recommends new candidate molecules based on the results of past tests.

Bayesian Optimization Drug Discovery

Tuning Hyperparameters without Grad Students: Scalable and Robust Bayesian Optimisation with Dragonfly

1 code implementation 15 Mar 2019 Kirthevasan Kandasamy, Karun Raju Vysyaraju, Willie Neiswanger, Biswajit Paria, Christopher R. Collins, Jeff Schneider, Barnabas Poczos, Eric P. Xing

We compare Dragonfly to a suite of other packages and algorithms for global optimisation and demonstrate that when the above methods are integrated, they enable significant improvements in the performance of BO.

Bayesian Optimisation

ProBO: Versatile Bayesian Optimization Using Any Probabilistic Programming Language

1 code implementation 31 Jan 2019 Willie Neiswanger, Kirthevasan Kandasamy, Barnabas Poczos, Jeff Schneider, Eric Xing

Optimizing an expensive-to-query function is a common task in science and engineering, where it is beneficial to keep the number of queries to a minimum.

Bayesian Optimization Gaussian Processes +1

Noisy Blackbox Optimization with Multi-Fidelity Queries: A Tree Search Approach

1 code implementation 24 Oct 2018 Rajat Sen, Kirthevasan Kandasamy, Sanjay Shakkottai

We study the problem of black-box optimization of a noisy function in the presence of low-cost approximations or fidelities, which is motivated by problems like hyper-parameter tuning.

Multi-Fidelity Black-Box Optimization with Hierarchical Partitions

no code implementations ICML 2018 Rajat Sen, Kirthevasan Kandasamy, Sanjay Shakkottai

Motivated by settings such as hyper-parameter tuning and physical simulations, we consider the problem of black-box optimization of a function.

Physical Simulations

A Flexible Framework for Multi-Objective Bayesian Optimization using Random Scalarizations

no code implementations 30 May 2018 Biswajit Paria, Kirthevasan Kandasamy, Barnabás Póczos

We also study a notion of regret in the multi-objective setting and show that our strategy achieves sublinear regret.

Bayesian Optimization
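One step of the random-scalarization idea named in the title can be sketched directly: draw a random weight vector over the objectives, collapse them to a single scalarized objective, and optimize that. In the paper this inner step is a BO step over GP surrogates; here, as an illustrative assumption, the objectives are evaluated directly.

```python
import random

def random_scalarization_step(objectives, candidates, seed=0):
    """One step of multi-objective optimization via random
    scalarization: sample weights on the simplex, then pick the
    candidate maximizing the weighted sum of objectives."""
    rng = random.Random(seed)
    w = [rng.random() for _ in objectives]
    total = sum(w)
    w = [wi / total for wi in w]  # normalize onto the probability simplex
    def scalarized(x):
        return sum(wi * fi(x) for wi, fi in zip(w, objectives))
    return max(candidates, key=scalarized)
```

Repeating this with fresh random weights explores different trade-offs along the Pareto front rather than committing to one fixed weighting.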

Myopic Bayesian Design of Experiments via Posterior Sampling and Probabilistic Programming

1 code implementation 25 May 2018 Kirthevasan Kandasamy, Willie Neiswanger, Reed Zhang, Akshay Krishnamurthy, Jeff Schneider, Barnabas Poczos

We design a new myopic strategy for a wide class of sequential design of experiment (DOE) problems, where the goal is to collect data in order to fulfil a certain problem-specific goal.

Multi-Armed Bandits Probabilistic Programming +2

Neural Architecture Search with Bayesian Optimisation and Optimal Transport

1 code implementation NeurIPS 2018 Kirthevasan Kandasamy, Willie Neiswanger, Jeff Schneider, Barnabas Poczos, Eric Xing

A common use case for BO in machine learning is model selection, where it is not possible to analytically model the generalisation performance of a statistical model, and we resort to noisy and expensive training and validation procedures to choose the best model.

Bayesian Optimisation BIG-bench Machine Learning +2

Asynchronous Parallel Bayesian Optimisation via Thompson Sampling

1 code implementation 25 May 2017 Kirthevasan Kandasamy, Akshay Krishnamurthy, Jeff Schneider, Barnabas Poczos

We design and analyse variations of the classical Thompson sampling (TS) procedure for Bayesian optimisation (BO) in settings where function evaluations are expensive, but can be performed in parallel.

Bayesian Optimisation Thompson Sampling
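The core Thompson-sampling idea behind the parallel scheme can be sketched on discrete Bernoulli arms (the paper works in the continuous BO setting with GP posteriors, so this simplification is an assumption): whenever a worker is free, it draws one posterior sample per arm and evaluates whichever arm that sample ranks best, with no coordination between workers.

```python
import random

def parallel_thompson_sampling(probs, n_workers, rounds, seed=0):
    """Thompson sampling with Beta posteriors on Bernoulli arms; each
    free worker independently samples the posterior and pulls the arm
    its sample ranks best, so workers never wait on each other."""
    rng = random.Random(seed)
    alpha = [1] * len(probs)  # Beta posterior: 1 + observed successes
    beta = [1] * len(probs)   # Beta posterior: 1 + observed failures
    for _ in range(rounds):
        for _ in range(n_workers):  # each free worker acts independently
            samples = [rng.betavariate(alpha[a], beta[a]) for a in range(len(probs))]
            arm = max(range(len(probs)), key=lambda a: samples[a])
            reward = 1 if rng.random() < probs[arm] else 0
            alpha[arm] += reward
            beta[arm] += 1 - reward
    return max(range(len(probs)), key=lambda a: alpha[a] / (alpha[a] + beta[a]))
```

In the truly asynchronous setting each worker would resample the posterior the moment it finishes its evaluation, with no synchronization barrier; the nested loop here is a sequential simulation of that behaviour.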

Multi-fidelity Bayesian Optimisation with Continuous Approximations

no code implementations ICML 2017 Kirthevasan Kandasamy, Gautam Dasarathy, Jeff Schneider, Barnabas Poczos

Bandit methods for black-box optimisation, such as Bayesian optimisation, are used in a variety of applications including hyper-parameter tuning and experiment design.

Bayesian Optimisation

Batch Policy Gradient Methods for Improving Neural Conversation Models

no code implementations 10 Feb 2017 Kirthevasan Kandasamy, Yoram Bachrach, Ryota Tomioka, Daniel Tarlow, David Carter

We study reinforcement learning of chatbots with recurrent neural network architectures when the rewards are noisy and expensive to obtain.

Chatbot Policy Gradient Methods +2

The Multi-fidelity Multi-armed Bandit

no code implementations NeurIPS 2016 Kirthevasan Kandasamy, Gautam Dasarathy, Jeff Schneider, Barnabás Póczos

We study a variant of the classical stochastic $K$-armed bandit where observing the outcome of each arm is expensive, but cheap approximations to this outcome are available.
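The use of cheap approximations can be illustrated with a simple two-stage sketch. This is not the paper's algorithm (which interleaves fidelities adaptively): here, as an assumption for illustration, all arms are first screened with many inexpensive low-fidelity pulls, and expensive high-fidelity pulls are spent only on the survivors.

```python
import random

def two_fidelity_best_arm(lo_means, hi_means, n_lo=200, n_hi=200, seed=0):
    """Two-stage multi-fidelity sketch: screen every arm with cheap,
    possibly biased low-fidelity pulls, keep the better half, then
    identify the best survivor with high-fidelity pulls."""
    rng = random.Random(seed)
    K = len(hi_means)
    lo_est = [sum(rng.gauss(lo_means[a], 1.0) for _ in range(n_lo)) / n_lo
              for a in range(K)]
    survivors = sorted(range(K), key=lambda a: lo_est[a], reverse=True)[: max(1, K // 2)]
    hi_est = {a: sum(rng.gauss(hi_means[a], 1.0) for _ in range(n_hi)) / n_hi
              for a in survivors}
    return max(survivors, key=lambda a: hi_est[a])
```

The payoff is that most of the sampling cost is paid at the cheap fidelity, while the expensive fidelity is reserved for distinguishing the few arms that actually matter.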

Additive Approximations in High Dimensional Nonparametric Regression via the SALSA

2 code implementations 31 Jan 2016 Kirthevasan Kandasamy, Yao-Liang Yu

Between non-additive models which often have large variance and first order additive models which have large bias, there has been little work to exploit the trade-off in the middle via additive models of intermediate order.

Additive models regression +1

Nonparametric von Mises Estimators for Entropies, Divergences and Mutual Informations

no code implementations NeurIPS 2015 Kirthevasan Kandasamy, Akshay Krishnamurthy, Barnabas Poczos, Larry Wasserman, James M. Robins

We propose and analyse estimators for statistical functionals of one or more distributions under nonparametric assumptions. Our estimators are derived from the von Mises expansion and are based on the theory of influence functions, which appear in the semiparametric statistics literature. We show that estimators based either on data-splitting or a leave-one-out technique enjoy fast rates of convergence and other favorable theoretical properties. We apply this framework to derive estimators for several popular information-theoretic quantities, and via empirical evaluation, show the advantage of this approach over existing estimators.

High Dimensional Bayesian Optimisation and Bandits via Additive Models

no code implementations 5 Mar 2015 Kirthevasan Kandasamy, Jeff Schneider, Barnabas Poczos

We prove that, for additive functions the regret has only linear dependence on $D$ even though the function depends on all $D$ dimensions.

Additive models Bayesian Optimisation +2
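The additive assumption behind this linear-in-$D$ regret can be written out: the objective decomposes over small, disjoint groups of coordinates, sketched here in general form.

```latex
f(x) = f^{(1)}\bigl(x^{(1)}\bigr) + f^{(2)}\bigl(x^{(2)}\bigr) + \dots + f^{(M)}\bigl(x^{(M)}\bigr)
```

where each group $x^{(j)}$ contains only a few of the $D$ coordinates, so each component $f^{(j)}$ can be modelled with its own low-dimensional GP; optimizing the low-dimensional components rather than one $D$-dimensional surrogate is what keeps the dependence on $D$ linear.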

On Estimating $L_2^2$ Divergence

no code implementations 30 Oct 2014 Akshay Krishnamurthy, Kirthevasan Kandasamy, Barnabas Poczos, Larry Wasserman

We give a comprehensive theoretical characterization of a nonparametric estimator for the $L_2^2$ divergence between two continuous distributions.

Nonparametric Estimation of Renyi Divergence and Friends

no code implementations 12 Feb 2014 Akshay Krishnamurthy, Kirthevasan Kandasamy, Barnabas Poczos, Larry Wasserman

We consider nonparametric estimation of $L_2$, Renyi-$\alpha$ and Tsallis-$\alpha$ divergences between continuous distributions.
